Boston University Research Internship

Posted: September 27, 2011 in Uncategorized

This summer, I participated in the research internship program at Boston University, working on a collaborative project between the Department of Electrical Engineering and the Department of Cognitive and Neural Systems. I worked alongside Vincent Kee, from Los Angeles, California, a close friend I got to know very well. Within the CNS department's Neuromorphics Lab, I worked with a postdoctoral researcher, Florian Raudies. Our mentor was Professor Ajay Joshi from the integrated circuits lab in the Department of Electrical Engineering.

Before I entered the program, I knew little about programming, image processing, or even what a transistor was. Looking back, to this very day, I am still surprised and amazed at how much I accomplished and learned in those short six weeks.

I was part of a multi-university research group called CELEST (Center of Excellence for Learning in Education, Science, and Technology) that is trying to understand how the human brain works in order to imitate its functions for autonomous behavior. My project focused on imitating the brain's use of vision for navigation. Essentially, the big picture of the project was to create autonomous nano aerial vehicles, or drones, that can adapt to any environment and navigate through their surroundings. Such vehicles would be used mostly by the military or in emergency situations that demand a response as swiftly as possible; the same technology could also be used to detect weapons or carry out reconnaissance missions.

Here is a video I found on YouTube showing a drone using optic flow for navigation, the next step of the project. The video is shown from the perspective of the flying drone, with each circle marking where optic flow is detected.

It will be many years before that goal is achieved, but it is very interesting to think about. This goal is only one of many applications of reverse-engineering the brain, and it is exciting to imagine what the future holds. But before we can talk about the future, let's start from the beginning.

Optic Flow: When Professor Joshi introduced this concept to me, I was very puzzled by what he said. I knew "optic" dealt with vision, but I only knew the general idea of the topic. It took me a few weeks to realize that we use optic flow all the time whenever we see. Optic flow is simply the perception of motion: the apparent movement of surfaces, objects, and edges. Motion is relative, perceived as pixel shifts from one image to the next; by comparing where an object was before with where it is now, motion is detected. Here's a simple example. When something approaches us, its image grows bigger and bigger in all directions. This tells us that we should probably avoid the object and move away. That is the basic idea behind using optic flow to program robots to navigate. Now, how do we program something that took humans years to figure out? It didn't take just one day for us to learn, "Oh, something is coming towards us. Let's avoid it." Translating that mindset into a program was the real challenge. Thankfully, my project dealt only with obstacle avoidance and navigation, something easily comprehensible and achievable.
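To make the "motion as pixel shift" idea concrete, here is a toy MATLAB sketch (not our actual project code, just an illustration): a bright square moves three pixels to the right between two synthetic frames, and the shift is recovered by testing candidate displacements and keeping the one that best matches the second frame.

```matlab
% Toy illustration of motion as a pixel shift between two frames.
frame1 = zeros(50);  frame1(20:30, 20:30) = 1;   % bright square at columns 20-30
frame2 = zeros(50);  frame2(20:30, 23:33) = 1;   % same square, shifted 3 pixels right

bestShift = 0;  bestErr = inf;
for dx = -5:5                                    % candidate horizontal shifts
    shifted = circshift(frame1, [0 dx]);         % move frame1 by dx pixels
    err = sum((shifted(:) - frame2(:)).^2);      % how well does it match frame2?
    if err < bestErr
        bestErr = err;  bestShift = dx;
    end
end
fprintf('Estimated shift: %d pixels (true shift is 3)\n', bestShift);
```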

This was the conceptual portion of my project. Now for the application! Vincent and I worked mostly in MATLAB, which stands for "matrix laboratory" because it stores all its variables as matrices. I thought the concept was strange at first, but then I got used to the idea and realized how practical it was. For the programs, we had to find a way for an image acquisition tool, such as a webcam, to detect motion. One straightforward way to do this is with filters. I worked on implementing Gabor filters, which are spatiotemporally sensitive, and Vincent worked on the motion correlation method, which compares two frames taken at different times. This is described further in our paper. These filters are essentially how we utilize optic flow: because a computer is computing the motion, the motion can be described as vectors. This was the basis of how we computed optic flow in MATLAB.
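As a rough illustration of what a Gabor filter looks like in MATLAB, here is a minimal spatial Gabor kernel applied to a placeholder frame. Our actual filters were spatiotemporal and are described in the paper, so treat the parameter values below as assumptions made purely for the sketch.

```matlab
% Minimal spatial Gabor filter: a Gaussian envelope times a sinusoidal carrier.
sigma  = 4;          % width of the Gaussian envelope (pixels) -- illustrative value
lambda = 8;          % wavelength of the sinusoidal carrier (pixels)
theta  = 0;          % orientation of the filter (radians)
[x, y] = meshgrid(-15:15, -15:15);                 % 31x31 filter support
xr =  x*cos(theta) + y*sin(theta);                 % rotate coordinates to the filter orientation
yr = -x*sin(theta) + y*cos(theta);
gb = exp(-(xr.^2 + yr.^2)/(2*sigma^2)) .* cos(2*pi*xr/lambda);

% Filtering a frame with the kernel gives a response map; a bank of such
% filters at different orientations and temporal phases is what gets turned
% into motion (flow) vectors.
frame    = rand(120, 160);                         % placeholder grayscale frame
response = conv2(frame, gb, 'same');
```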

We applied our filters and programs by interfacing with the iRobot Create, better known to most people as the platform behind the autonomous vacuum cleaner, the Roomba.

A webcam was attached to the top of the robot because that made navigation easier than attaching a laptop and using its built-in webcam; the webcam also had a wider field of view. The robot successfully navigated a controlled, textured environment (a CAD image of the textured environment is shown below). When obstacles were introduced, however, computing optic flow became problematic. We called this the object avoidance dilemma (see the Object Avoidance Dilemma diagram).
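To give a sense of how flow can drive steering, here is one simple way such a rule could look in MATLAB. This is only a sketch of the idea, not necessarily the exact rule we used: nearer obstacles produce larger flow, so the robot turns away from the image half with more motion.

```matlab
% Hypothetical balance-based steering rule on a flow-magnitude map.
% Assume flowMag is an HxW matrix of flow-vector magnitudes for one frame.
flowMag = rand(120, 160);                          % placeholder flow-magnitude map

half      = size(flowMag, 2) / 2;
leftFlow  = sum(sum(flowMag(:, 1:half)));          % total flow in the left half
rightFlow = sum(sum(flowMag(:, half+1:end)));      % total flow in the right half

if leftFlow > rightFlow
    command = 'turn right';                        % more flow on the left -> veer right
else
    command = 'turn left';
end
disp(command);
```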

ABSTRACT:

As new technologies continue to develop, more and more robots are replacing humans in situations deemed too dangerous. However, current solutions are not fully automated, requiring offsite human operators for executing basic actions. The ideal solution would be a fully autonomous vehicle that could complete its objectives without any human intervention. In this project, the viability of optical flow based navigation was investigated. Optical flow, or optic flow, is the perception of object motion due to the object's pixel shifts as the viewer moves relative to the environment. First, motion detection filters were developed and applied to image sequences in MATLAB. Then, they were implemented in optic flow based navigation MATLAB programs for an iRobot Create with a camera to provide video input. The robot successfully traversed a textured environment but encountered difficulties when attempting to avoid textured obstacles and corners. Experiments were developed to compare the effectiveness of the Correlation and Gabor filters and to find the relationship between increased motion detection ability and processing time per frame. Possible future directions for this project include implementing GPU (Graphics Processing Unit), FPGA (Field-Programmable Gate Array), or even ASIC (Application-Specific Integrated Circuit) chips to speed up computation time, utilizing a wide-angle camera or an array of cameras to get a wider field of view, and integrating a Q-learning system.

Future Directions: I often found myself pondering the future directions of this project. There are so many possible applications for a project of this scale. If we were able to implement GPUs, FPGAs, or even ASICs, our processing time would drop dramatically, freeing us to focus more on the navigation portion. But the most intriguing thing about the project is its ultimate goal: creating a robot that acts with a brain. One possibility for achieving this is to incorporate a Q-learning system in the robot through positive and negative reinforcement. Much like raising a child, the robot would receive negative "points" for bumping into walls or turning in the wrong direction and positive "points" for accomplishing its given task. The robot would then be able to learn over time.
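To give a flavor of what Q-learning looks like, here is a toy MATLAB sketch. The states, transitions, and rewards are made up purely for illustration; in a real system the states would come from the robot's sensors, and the rewards from events like bumping a wall or making progress toward the goal.

```matlab
% Toy Q-learning loop: keep a table of scores Q(state, action) and nudge it
% toward better choices after every step.
nStates  = 5;   nActions = 3;              % hypothetical state/action counts
Q        = zeros(nStates, nActions);       % learned scores, start at zero
alpha    = 0.1;                            % learning rate
gamma    = 0.9;                            % how much future reward matters

s = 1;                                     % made-up starting state
for step = 1:100
    [~, a] = max(Q(s, :) + 0.1*randn(1, nActions));   % pick an action, with a bit of exploration noise
    sNext  = randi(nStates);               % stand-in for "what the world does next"
    r      = 2*(rand > 0.5) - 1;           % stand-in reward: +1 (good) or -1 (bad)
    % Core Q-learning update: move Q(s,a) toward reward + discounted best future value.
    Q(s, a) = Q(s, a) + alpha*(r + gamma*max(Q(sNext, :)) - Q(s, a));
    s = sNext;
end
```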

Research on reverse-engineering the brain has been ongoing for over 20 years, and even though there have been major breakthroughs, such as iRobot, which was founded by Colin Angle and Helen Greiner from the Artificial Intelligence Lab at MIT, there is still a long way to go. Hopefully, in the future, I will be able to discover new ways of reverse-engineering the brain on my own. Whether it is creating new autonomous systems for the military or for commercial use, or building hardware and computational models that simulate the brain's neural networks, I want to contribute my own ideas no matter where I go.

Research Posters


Research posters were presented at the Boston University Research Internship in Science and Engineering Poster Session held on August 12th in the Boston University Life Science and Engineering Department.

Attached is our collaborative research paper.

Optic Flow Based Navigation Research Paper

Attached is our presentation to the Integrated Circuits and Systems Research Group in the Electrical and Computer Engineering Department.

Optic Flow Based Navigation

Additional Resources:

http://nl.bu.edu/research/projects/neuromorphic-hardware/neuromorphic-hardware-and-robots/

http://nl.bu.edu/news-and-events/news/high-school-student-interns-complete-robotic-project/

http://www.bu.edu/cas/magazine/fall11/versace/index.shtml
