We’ve always known size matters. Until now, though, no one knew exactly how it matters in the brain.
MIT researchers have not found that bigger brains are better – but they have found that discerning the size of objects, something the brain does automatically, could have real relevance for the field of robotics. The university unveiled findings that could lead to a better understanding of how the brain processes information, and of how robots might classify the objects they detect as well.
According to Abby Abazorius, the human brain can recognize thousands of different objects, but no one has known for sure how the brain perceives and identifies them. Abazorius writes that MIT scientists have found that the brain “organizes objects based on their physical size, with a specific region of the brain reserved for recognizing large objects and another reserved for small objects.”
According to Abazorius, Aude Oliva, an associate professor in the MIT Department of Brain and Cognitive Sciences and senior author of the study, told Mass High Tech that she and graduate student Talia Konkle took 3D scans of brain activity during experiments in which participants were asked to look at images of big and small objects.
By evaluating the scans, Abazorius writes, the researchers found that while one part of the brain responds to large objects like a chair or table, another area is used when looking at small objects like a paperclip.
Large objects, the researchers learned, are processed in the region of the brain located next to the hippocampus, which is responsible for navigating through spaces and for processing the location of different places. Small objects are handled near the regions of the brain that are active when the brain has to manipulate tools.
“It’s like another continent, in terms of brain distances,” Oliva told Abazorius, adding that “large objects are dealt with in the same area of the brain that’s used for spaces, like rooms, while small objects are seen as things to be picked up or manipulated.”

Oliva, who told Abazorius she does not have a background in robotics, said the findings may lead to ways to “teach robots to recognize size first, before identifying what the object is, thereby routing the information to the correct area.”
Oliva gave the example of a robot that helps a blind person: it could have different areas to process a chair, in which the person would sit, and food, which the person would pick up with one or two hands, she told Abazorius in their interview.

MIT’s findings were published in the June 21 issue of the journal Neuron.
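To make the size-first routing idea concrete, here is a minimal sketch in Python. All function names, the bounding-box size estimate, and the threshold value are illustrative assumptions of ours, not details from the MIT study or any robotics system it describes.

```python
# Hypothetical sketch of "recognize size first, then route":
# estimate an object's apparent size, then dispatch to a handler
# specialized for large (navigate-around) vs. small (pick-up) objects.
# Names and the threshold are illustrative, not from the study.

def estimate_size(bounding_box):
    """Return the pixel area of a detected object's (width, height) box."""
    width, height = bounding_box
    return width * height

def handle_large_object(label):
    # Large objects relate to navigation, e.g. a chair to sit in.
    return f"navigate relative to {label}"

def handle_small_object(label):
    # Small objects relate to manipulation, e.g. food to pick up.
    return f"grasp {label}"

def route(label, bounding_box, threshold=10000):
    """Route a detection by size before any finer identification."""
    if estimate_size(bounding_box) >= threshold:
        return handle_large_object(label)
    return handle_small_object(label)

# Example: a chair fills a large region, an apple a small one.
print(route("chair", (200, 300)))  # navigate relative to chair
print(route("apple", (40, 40)))    # grasp apple
```

The point of the sketch is only the structure: size estimation happens once, up front, and determines which processing path the detection takes, loosely mirroring the brain's separate regions for large and small objects.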
Edited by Jamie Epstein