Graffiti

An augmented reality social media application



About

Graffiti is a cloudlet-based social media application that allows users to annotate their surroundings and share those annotations with other users. It is like public virtual graffiti covering the world, letting users share experiences through augmented reality. This research was done during my internship with Professor Satyanarayanan and Dr. Zhuo Chen of Carnegie Mellon University and was funded by the NSF. The cloudlet interface code was built on the Gabriel cognitive assistance platform.


How it Works

At its core, this project uses SURF feature matching against a key point database to retrieve and display annotations. SURF is a faster but somewhat less accurate alternative to SIFT. When a user adds an annotation, they must take an image of the object they want to draw on. This image is stored in the key point database, along with the relative location of the annotation. As a user walks around, frames from their video stream query the database for matches using FLANN (the Fast Library for Approximate Nearest Neighbors). Once a match is found, a homography is computed and the querying stops. The annotation is then displayed and tracked using optical flow.
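
The following is a minimal sketch of the match-and-localize step using OpenCV. SURF lives in the contrib module (cv2.xfeatures2d), and the file names, Hessian threshold, ratio-test constant, minimum match count, and stored annotation position below are illustrative placeholders, not values from the actual system:

    import cv2
    import numpy as np

    # SURF requires an opencv-contrib build with the nonfree algorithms
    # enabled; the Hessian threshold of 400 is a common default.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

    db_img = cv2.imread("annotated_object.jpg", cv2.IMREAD_GRAYSCALE)  # image stored with the annotation
    frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)       # frame from the user's video stream

    kp_db, des_db = surf.detectAndCompute(db_img, None)
    kp_fr, des_fr = surf.detectAndCompute(frame, None)

    # FLANN approximate nearest-neighbor matching over a KD-tree index.
    FLANN_INDEX_KDTREE = 1
    flann = cv2.FlannBasedMatcher(dict(algorithm=FLANN_INDEX_KDTREE, trees=5),
                                  dict(checks=50))
    matches = flann.knnMatch(des_db, des_fr, k=2)

    # Lowe's ratio test discards ambiguous correspondences.
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]

    if len(good) >= 10:  # illustrative minimum match count
        src = np.float32([kp_db[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp_fr[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

        # Homography from the stored image to the live frame,
        # with RANSAC rejecting outlier matches.
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

        # Project the annotation's stored (relative) position into the frame.
        anno_pt = np.float32([[[120.0, 80.0]]])  # hypothetical stored location
        anno_in_frame = cv2.perspectiveTransform(anno_pt, H)

Once a homography is found, the projected anchor points can be handed to a tracker so the database is not re-queried on every frame. A sketch of the tracking step using pyramidal Lucas-Kanade optical flow, again with made-up parameter values:

    lk_params = dict(winSize=(21, 21), maxLevel=3,
                     criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

    cap = cv2.VideoCapture(0)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    pts = anno_in_frame  # anchor points from the homography step above

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None, **lk_params)
        pts = pts[status.ravel() == 1]  # keep only successfully tracked points
        if len(pts) == 0:
            break  # track lost; the system would fall back to re-querying the database
        prev_gray = gray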


Challenges

SURF and SIFT, while robust on most objects, perform poorly on others. Foliage, for instance, produces too many key points, and untextured surfaces produce too few; both cause SIFT and SURF to mismatch. This is inherent in the nature of feature-matching algorithms, and the current state of the art offers no better general solution. In our specific case, however, the user's GPS location can be used to prune the search space and reduce mismatches. Unfortunately, we did not have enough time to implement this GPS-based pruning.
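
Although it was never implemented, the pruning could be as simple as filtering the key point database by great-circle distance to the user's GPS fix before running FLANN. A hypothetical sketch, where the entry fields and the 50 m radius are assumptions:

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters between two (lat, lon) fixes."""
        R = 6371000.0  # mean Earth radius in meters
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = (math.sin(dp / 2) ** 2
             + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
             * math.sin(dl / 2) ** 2)
        return 2 * R * math.asin(math.sqrt(a))

    def prune_candidates(entries, user_lat, user_lon, radius_m=50.0):
        """Keep only key point database entries tagged near the user's GPS
        fix, so FLANN only matches against plausibly visible objects."""
        return [e for e in entries
                if haversine_m(e["lat"], e["lon"], user_lat, user_lon) <= radius_m]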


My Contribution

I modified existing code to interface between the Gabriel cognitive assistance platform and the server running the computer vision algorithms. I also wrote the server code itself, using OpenCV and ZeroMQ. Additionally, I tested many different algorithms to determine which best suited the problem. Lastly, I modified existing code to allow the Android app to display and track annotations.
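
The server code itself is not reproduced here, but a stripped-down OpenCV/ZeroMQ request-reply loop might look like the sketch below. The endpoint, the wire format (JPEG bytes in, JSON out), and the match_frame placeholder are all assumptions for illustration:

    import json

    import cv2
    import numpy as np
    import zmq

    def match_frame(gray):
        """Placeholder for the SURF/FLANN/homography pipeline sketched
        above; returns an (x, y) annotation position, or None."""
        return None  # hypothetical

    ctx = zmq.Context()
    sock = ctx.socket(zmq.REP)
    sock.bind("tcp://*:5555")  # illustrative endpoint

    while True:
        jpeg_bytes = sock.recv()  # frame forwarded from the client
        gray = cv2.imdecode(np.frombuffer(jpeg_bytes, np.uint8),
                            cv2.IMREAD_GRAYSCALE)
        result = match_frame(gray)
        reply = {"match": result is not None}
        if result is not None:
            reply["x"], reply["y"] = result
        sock.send_string(json.dumps(reply))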