SOFT Rocker. Designed by MIT Architecture students.
The concept of people creating power with motion was introduced to me by my spinning studio. It makes complete sense to capture the energy of a room full of people pedaling away like madmen for an hour and use that energy to power the guy taking a stroll on the treadmill. Simple motions are the basis of energy sources like windmills and water wheels, so why not turn human behaviors into energy sources? A group of architecture students at MIT sees the casual act of rocking as the perfect way to charge small electronics.
MIT professor Sheila Kennedy and a group of MIT architecture students developed the SOFT Rocker, a rocking chair/lounge chair for the great outdoors. The SOFT Rocker uses the human power of balance to create an interactive 1.5-axis, 35-watt solar tracking system. During daylight hours, the lounger stores the captured solar power in a 12 ampere-hour battery.
For maximum power absorption, the curved, solar-panel-covered seats rotate on an axis to keep them facing the sun. Additional energy is generated from the rocking motion created when people climb inside. All the energy that is harvested can be used to recharge gadgets plugged into the three USB ports and to illuminate a light strip on the inside of the loop.
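As a rough back-of-the-envelope check on the stated specs, the numbers work out to a few hours of peak sun per full charge. The battery voltage below is an assumption; the article gives only the 35-watt panel rating and the 12 ampere-hour capacity.

```python
# Rough energy-budget sketch for the SOFT Rocker's stated specs.
# Assumption: a nominal 12 V battery (voltage is not stated in the article).
panel_watts = 35.0      # rated solar output
battery_ah = 12.0       # stated battery capacity, ampere-hours
battery_volts = 12.0    # assumed nominal voltage

battery_wh = battery_ah * battery_volts      # watt-hours of storage
hours_to_charge = battery_wh / panel_watts   # ideal, lossless charging

print(f"Storage: {battery_wh:.0f} Wh")
print(f"Full charge in about {hours_to_charge:.1f} h of peak sun")
```

Under those assumptions the battery holds 144 Wh, or roughly four hours of peak sunlight per charge, before accounting for conversion losses.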
The SOFT Rockers were created for the Festival of Art+Science+Technology (FAST) as an antidote to “conventional ‘hard’ urban infrastructure.”
About: MIT is a prestigious university in Cambridge, Massachusetts, with a reputation for its technology and engineering programs. The School of Architecture and Planning has gotten press in recent years for the construction of the Media Lab, designed by Pritzker Prize-winning Japanese architect Fumihiko Maki. Professor Sheila Kennedy is an expert in the integration of solar cell technology in architecture, changing the way buildings receive and distribute energy.
CAMBRIDGE, Mass. — Paul Barone, a postdoctoral researcher in MIT's Department of Chemical Engineering, and professor Michael Strano are working on a new type of blood glucose monitor that could not only eliminate the need for finger pricks but also offer more accurate readings.
“Diabetes is an enormous problem, global in scope, and despite decades of engineering advances, our ability to accurately measure glucose in the human body still remains quite primitive,” says Strano, the Charles and Hilda Roddey Associate Professor of Chemical Engineering. “It is a life-and-death issue for a growing number of people.”
Strano and Barone’s sensing system consists of a “tattoo” of nanoparticles designed to detect glucose, injected below the skin. A device similar to a wristwatch would be worn over the tattoo, displaying the patient’s glucose levels.
A 2008 study in the New England Journal of Medicine showed that continuous monitoring helped adult type 1 diabetes patients who were at least 25 years old better control their blood glucose levels. However, existing wearable devices are not as accurate as the finger-prick test and have to be recalibrated once or twice a day — a process that still involves pricking the finger.
“The most problematic consequences of diabetes result from relatively short excursions of a person’s blood sugar outside of the normal physiological range, following meals, for example,” says Strano. “If we can detect and prevent these excursions, we can go a long way toward reducing the devastating impact of this disease.”
Most existing continuous glucose sensors work via an injection of an enzyme called glucose oxidase, which breaks down glucose. An electrode placed on the skin interacts with a by-product of that reaction, hydrogen peroxide, allowing glucose levels to be indirectly measured. However, none of those sensors have been approved for use longer than seven days at a time.
The technology behind the MIT sensor, described most recently in a December 2009 issue of ACS Nano, is fundamentally different from existing sensors, says Strano. The sensor is based on carbon nanotubes wrapped in a polymer that is sensitive to glucose concentrations. When this sensor encounters glucose, the nanotubes fluoresce, which can be detected by shining near-infrared light on them. Measuring the amount of fluorescence reveals the concentration of glucose.
The researchers plan to create an “ink” of these nanoparticles suspended in a saline solution that could be injected under the skin like a tattoo. The “tattoo” would last for a specified length of time, probably six months, before needing to be refreshed.
To get glucose readings, the patient would wear a monitor that shines near-infrared light on the tattoo and detects the resulting fluorescence. One advantage of this type of sensor is that, unlike some fluorescent molecules, carbon nanotubes aren’t destroyed by light exposure. “You can shine the light as long as you want, and the intensity won’t change,” says Barone. Because of this, the sensor can give continuous readings.
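The readout step can be sketched as inverting a calibration curve: the monitor measures fluorescence intensity and maps it back to a concentration. The saturating response curve and its constants below are assumptions for illustration; the article does not publish the sensor's actual response function.

```python
# Hypothetical calibration sketch: converting measured near-infrared
# fluorescence intensity back to a glucose concentration.
# The saturating response model and both constants are assumptions.

F_MAX = 1000.0   # fluorescence at saturation (arbitrary units)
K = 90.0         # concentration at half-maximal response, mg/dL

def fluorescence(glucose_mg_dl):
    """Forward model: saturating fluorescence response to glucose."""
    return F_MAX * glucose_mg_dl / (K + glucose_mg_dl)

def glucose_from_fluorescence(f):
    """Invert the response curve to recover the concentration."""
    return K * f / (F_MAX - f)

reading = fluorescence(120.0)                        # simulate a measurement
print(round(glucose_from_fluorescence(reading), 1))  # recovers the input level
```

Because the nanotubes don't photobleach, this inversion could in principle be repeated continuously, which is the advantage Barone describes.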
Development of the nanoparticles and the wearable monitor is being funded by MIT’s Deshpande Center for Technological Innovation.
Barone and Strano are now working to improve the accuracy of their sensor. Any glucose monitor must pass a test known as the Clarke Error Grid, the gold standard for glucose-sensor accuracy. The test, which compares sensor results to results from a lab-based glucose meter, needs to be very stringent, since mistakes in glucose detection can be fatal.
They are still years away from human trials, says Barone, but they may soon start trials in animals. Those tests will be key to determining the value of this approach, says Bruce Buckingham, a professor of pediatric endocrinology at Stanford University. “You don’t know how good it will be until you put it in someone and see how strong the signal is,” he says.
A new video of a Boston Dynamics robot has surfaced--possibly the scariest yet. The company's name alone should give you the willies if you've watched the Terminator movies or seen the previous chilling robots BD makes. We've rounded up the videos for you. Happy Halloween.
Boston Dynamics is a small engineering and robotics firm spun off from MIT in 1992. According to its own Web site it "builds advanced robots with remarkable behavior: mobility, agility, dexterity and speed." Those parameters sound kinda military-like for a reason: BD has worked with DARPA, the Army, the Navy, and Marine Corps, as well as the more innocuous-sounding Sony. Their current starting lineup:
Petman
And before you think that sounds pretty Cyberdyne Systems-y, check out the video of Petman in action. It's a prototype bipedal robot from BD that walks with an extraordinarily human-like gait--it even heel-toes, and does so with a dynamic sense of balance that means it can take a solid kick and still keep walking.
Makes the cute Asimo seem like the walking equivalent of a robotic grandpa, doesn't it? Technically BD says it's an "anthropomorphic robot for testing chemical protection clothing [...] Unlike previous suit testers, which had to be supported mechanically and had a limited repertoire of motion, Petman will balance itself and move freely; walking, crawling [...] Petman will also simulate human physiology within the protective suit by [...] sweating when necessary." But if you didn't get a shiver from imagining the next gen of the sweaty thing with arms and a head, painted silver instead of black and wielding a gun, then ... well, you've not got a very active imagination.
Big Dog
This is perhaps BD's most famous 'bot, designed to act something like a robotic packhorse to aid soldiers in the field of battle. Its four legs make it highly sure-footed, and it'll be smart enough to be able to maneuver semi-autonomously when it hits its final military specification levels.
Little Dog
Think Big Dog but more petite--and try to banish thoughts of tiny robots creeping into buildings via drains or under fences, all to assassinate bad guys.
RiSE
The last beastie is a six-legged smart climbing robot that has adhesive feet to let it scale even the most unforgiving and sheer building walls.
All that's left for us is to wonder what Petman and Big Dog and all the rest will evolve into over time: We're pretty sure we heard Petman muttering something about his plan to "be back."
Slumping sluggers could soon get help from MIT Media Lab researchers.
Larry Hardesty, MIT News Office
A player at the Boston Red Sox preseason training camp is wired with sensors developed by the MIT Media Lab, which gauge the forces he exerts when he swings the bat. Photos: Joseph Paradiso and Alexander Reben
On Wednesday, the Boston Red Sox reached Major League Baseball's postseason playoffs for the sixth time in seven years. But whether or not they go on to win another World Series, when the Sox report to spring training next year, they could be spending some time in the trainer's room with members of the MIT Media Lab.
For three of the last four years, Professor Joseph Paradiso and other members of the lab's Responsive Environments Group have been strapping sensors on players at the Red Sox preseason camp to gauge the physical forces they exert when they swing a bat or throw a ball. So far, the researchers have been working mainly with minor-league players, trying to determine what kind of useful information they can extract from the sensors. But next spring, Paradiso hopes to gather more data on more players engaged in a wider range of activities.
If trainers could wire up a hitter when he's on a hot streak, and then again when he's in a slump, they might be able to determine how the mechanics of his swing have changed and how he can fix them. "There's many areas where this technique will have a meaningful influence on how things are perceived and how data is interpreted," says Eric Berkson, one of the Sox' team physicians. As a doctor, Berkson is particularly interested in how the technology could be used to identify behaviors that can lead to injury. "And then, using the same technology, we can try to find better ways to figure out when someone's able to come back from an injury and make sure they don't injure themselves in that process," he says.
The Responsive Environments Group's work with the Sox grew out of a project to allow dancers' movements to control the music accompanying them — "a very Media Lab thing," Paradiso says. While exploring the work's implications for gait analysis with collaborators at Massachusetts General Hospital, Paradiso met some physicians who worked with the Red Sox and were intrigued by the technology. Paradiso and his graduate student Michael Lapinski did the initial work on sensors customized for baseball players, and they've recently been joined by Clemens Satzger of the Technical University of Munich, who's at MIT until January.
Baseball teams had been trying to collect data on the biomechanics of players' swings and pitching motions for years, but they'd relied on optical systems that worked only in the lab and produced data that could be difficult to analyze. "Trying to measure pitching with just an accelerometer has never been done effectively before," says Berkson. "The idea that we can potentially do this biomechanics evaluation during real activities is a huge step."
In fact, the MIT team's sensors use more than just an accelerometer; they also use gyroscopes, and recent versions include a magnetometer that measures joint angles. But the biggest difficulty in developing the sensor, Paradiso says, is that no single accelerometer can handle the range of accelerations — measured as multiples of Earth's gravity, or Gs — that a professional athlete can produce during a routine motion. The same goes for gyros and angular acceleration. "For dancers — even though they're pretty kinetic, too — the 10-G range was okay," says Paradiso. "But for athletes — especially for pitchers — you have to go up to 120, 130 Gs to get that full range of motion, and for angular rate, you have to go up to about 10,000 degrees per second." The beginning of a pitcher's windup, however, though no less crucial to his delivery, is so slow that it won't register very clearly on a high-G accelerometer or high-rate gyro. So each Media Lab sensor includes two sets of three-axis accelerometers and gyros that span different ranges, to capture an athlete's full range of activity.
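The dual-range idea described above can be sketched per axis: pair a sensitive low-G part with a coarse high-G part, and trust the low-G reading until it nears saturation. The specific ranges and the saturation margin here are illustrative, not the Media Lab's actual components.

```python
# Sketch of dual-range sensor fusion: a fine +/-10 G accelerometer
# captures the slow windup; a coarse +/-130 G one captures the peak
# of the delivery. Constants are illustrative assumptions.

LOW_RANGE_G = 10.0       # fine sensor saturates at +/-10 G
SATURATION_MARGIN = 0.95 # treat readings near full scale as clipped

def fuse(low_g_reading, high_g_reading):
    """Return the best available acceleration estimate in Gs."""
    if abs(low_g_reading) < LOW_RANGE_G * SATURATION_MARGIN:
        return low_g_reading   # fine sensor still in range: use it
    return high_g_reading      # fine sensor clipped: fall back to coarse

print(fuse(2.3, 2.1))     # slow windup: the fine sensor's reading wins
print(fuse(10.0, 118.0))  # peak of delivery: clipped fine sensor is ignored
```

A real implementation would also cross-calibrate the two parts in their overlapping range so the stitched signal has no step at the handoff.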
In the hope of further reducing the size of the sensors, Paradiso is talking with a Boston-area device manufacturer about developing accelerometers and gyros that can handle a wide range of accelerations and angular rates, avoiding the need for dual sets of devices. But in the near term, a greater concern is developing ways to get the sensors on and off athletes more efficiently. "In spring training, even with a minor-league player, they're so tightly scheduled that if you've got 20 minutes with this player, and you take a half-hour, that's going to throw their day off," Paradiso says. The lab is currently experimenting with a new method for mounting the sensors that should be more efficient but will also provide an attachment secure enough to withstand huge accelerations.
Ultimately, Paradiso believes, his group's work with the Sox will aid in the development of commercial devices for measuring biomechanics. Berkson finds that prospect exciting. "We can now look at what's causing injury in a shoulder through a pitcher's real activity, in real-world situations," he says, "and this will help us understand why kids get injured, and why Little League pitching is dangerous for some kids, and why there's an increased number of surgeries happening on these kids on a yearly basis." To some people outside New England, that could sound like an even more important application than helping the Sox to another World Series victory.
In the image on the bottom, the eye is in the foreground and the text is in the background — and both are blurry because the photographer has focused on a point between the two. A new MIT system instead captures multiple images at several focal depths and stitches them into a sharper composite (top).
Courtesy Sam Hasinoff
For photographers, it's sometimes difficult to keep both the foreground and background of an image in focus. Focusing somewhere between the two can ensure that neither is blurry; but neither will be particularly sharp, either. On Friday, at the IEEE International Conference on Computer Vision in Kyoto, Japan, members of the MIT Graphics Group will show that combining several low-quality exposures with different focal depths can yield a sharper photo than a single, higher-quality exposure.
Given enough time, a digital camera could take a dozen well-exposed photos, and software could stitch them into a perfectly focused composite. But if the scene is changing, or if the photographer is trying to hold the camera steady by hand, there may not be time for a dozen photos. When time is short, says postdoc Sam Hasinoff, lead author on the paper, "there's a trade-off between blur, on the one hand — not having an image which is in focus — and noise, on the other. If you take an image really fast, it's really dark; it's not going to be of high quality."
Hasinoff, MIT professors Fredo Durand and William Freeman, and Kiriakos Kutulakos of the University of Toronto devised a mathematical model that determines how many exposures will yield the sharpest image given a time limit, a focal distance, and a light-meter reading. Hasinoff says that experiments in the lab, where the number and duration of digital-camera exposures were controlled by laptop, bore out the model's predictions.
A digital camera could easily store a table that specifies the ideal number of exposures for any set of circumstances, Hasinoff says, and the camera could have a distinct operational setting that invokes the table. The multiple-exposure approach, he says, offers particular advantages in low light or when the scene covers a large range of distances.
For the time being, however, the technique is limited by the speed of camera sensors. Today's fastest consumer cameras can capture about 60 images in a second, Hasinoff says. If the MIT researchers' model determined that, under certain conditions, the ideal number of exposures in a tenth of a second would be eight, the fastest cameras could manage only six. "But there's still a big gain to be had," Hasinoff says.
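The stitching step can be illustrated with a toy focus measure: for each pixel, compare local contrast across the stack and keep the value from the sharpest exposure. This is a generic focus-stacking sketch on 1-D "scanlines", not the MIT researchers' actual algorithm, which also models sensor noise against blur.

```python
# Toy focus-stacking sketch: for each pixel, pick the exposure whose
# local contrast (absolute discrete Laplacian) is highest. Border
# pixels are copied from the first exposure for simplicity.

def laplacian(img, i):
    """Absolute second difference as a crude per-pixel sharpness measure."""
    return abs(img[i - 1] - 2 * img[i] + img[i + 1])

def composite(stack):
    """Merge a focal stack of equal-length scanlines pixel by pixel."""
    out = list(stack[0])
    for i in range(1, len(out) - 1):
        best = max(stack, key=lambda img: laplacian(img, i))
        out[i] = best[i]
    return out

near_focus = [0, 0, 9, 0, 0, 1, 1, 1]   # sharp edge on the left
far_focus  = [0, 1, 3, 3, 2, 0, 9, 0]   # sharp edge on the right
print(composite([near_focus, far_focus]))
```

The composite keeps the crisp edge from each exposure, which is the effect the top image in the figure demonstrates.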
The Graphics Group's work on multiple-exposure composites uses an analytical approach first presented at this summer's Siggraph — the major conference in the field of computer graphics. There, Anat Levin, who was a postdoc at the time, Durand, Freeman, and colleagues described their "lattice-focal lens," an ordinary lens filter with what look like 12 tiny boxes of different heights clustered at its center. Each box is in fact a lens with a different focal length, which projects an image onto a different part of the camera's sensor. The raw image would look like gobbledygook, but the same type of algorithm that can combine multiple exposures into a coherent composite can also recover a regular photo from the raw image.
"Only time will tell whether that new, proposed piece of hardware will be better than the others, but I think their way of analyzing the whole thing is brilliant," says Marc Levoy, a professor of computer science and electrical engineering at Stanford University. "There's been a lot of work on different ways of extending the depth of field, and what this paper did was, it tried to analyze all of them together. And I actually think that it's a seminal paper. I think it's a landmark paper."
Massachusetts Institute of Technology (MIT), the leading US science university, has been rickrolled on a grand scale.
The 'hack' was carried out by students at MIT. Photo: Greg Steinbrecher/The Tech
Students have plastered the first eight notes of Rick Astley's Never Gonna Give You Up on scaffolding surrounding the Boston research centre's Great Dome.
The pranksters dreamed up the sophisticated stunt after noticing that the horizontal lines of the scaffolding cover resembled unfilled sheet music, according to MIT's student newspaper The Tech.
Rickrolling began as an online trend which involved tricking people into watching YouTube videos of Astley's 1987 hit, but quickly entered the "real" world.
Never Gonna Give You Up was selected as the new anthem for the New York Mets baseball team, and Astley was named best act ever at last year's MTV Europe Music Awards after fans of the crooner hijacked online votes.
Astley himself made a surprise appearance at the Macy's Thanksgiving Day parade in New York in 2008, emerging from the back of a float to perform his most well-known song.
MIT students - who include some of the most promising young minds in the world - have developed a reputation for carrying out clever and ambitious pranks, known as "hacks".
On Sept 11 2006, professors and college officials woke up to find that a 25ft long fire engine had been placed on top of the Great Dome, apparently to mark five years since the 9/11 terrorist attacks.
The rickroll stunt was carried out under the cover of darkness in the early hours of Wednesday morning last week.
New robots mimic fish's swimming, could be used in underwater exploration
Anne Trafton, News Office August 24, 2009
Borrowing from Mother Nature, a team of MIT researchers has built a school of swimming robo-fish that slip through the water just as gracefully as the real thing, if not quite as fast.
Mechanical engineers Kamal Youcef-Toumi and Pablo Valdivia Y Alvarado have designed the sleek robotic fish to more easily maneuver into areas where traditional underwater autonomous vehicles can't go. Fleets of the new robots could be used to inspect submerged structures such as boats and oil and gas pipes; patrol ports, lakes and rivers; and help detect environmental pollutants.
"Given the (robotic) fish's robustness, it would be ideal as a long-term sensing and exploration unit. Several of these could be deployed, and even if only a small percentage make it back there wouldn't be a terrible capital loss due to their low cost," says Valdivia Y Alvarado, a recent MIT PhD recipient in mechanical engineering.
Robotic fish are not new: In 1994, MIT ocean engineers demonstrated Robotuna, a four-foot-long robotic fish. But while Robotuna had 2,843 parts controlled by six motors, the new robotic fish, each less than a foot long, are powered by a single motor and are made of fewer than 10 individual components, including a flexible, compliant body that houses all components and protects them from the environment. The motor, placed in the fish's midsection, initiates a wave that travels along the fish's flexible body, propelling it forward.
Video: Kamal Youcef-Toumi and Pablo Valdivia Y Alvarado/MIT News Office
The robofish bodies are continuous (i.e., not divided into different segments), flexible and made from soft polymers. This makes them more maneuverable and better able to mimic the swimming motion of real fish, which propel themselves by contracting muscles on either side of their bodies, generating a wave that travels from head to tail.
"Most swimming techniques can be copied by exploiting natural vibrations of soft structures," says Valdivia Y Alvarado.
As part of his doctoral thesis, Valdivia Y Alvarado created a model to calculate the optimal material property distributions along the robot's body to create a fish with the desired speed and swimming motion. The model, which the researchers initially proposed in the ASME Journal of Dynamic Systems, Measurement, and Control, also takes into account the robot's mass and volume. A more detailed model is described in Valdivia Y Alvarado's thesis and will soon be published along with new applications by the group.
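The body motion the flexible robots exploit can be sketched as a traveling wave, y(x, t) = A(x)·sin(kx − ωt), whose amplitude envelope A(x) is nearly zero at the head and grows toward the tail. The envelope shape and all constants below are illustrative assumptions, not the fitted values from the thesis.

```python
import math

# Traveling-wave sketch of a tuna-like prototype's body motion.
# All constants are illustrative, not the thesis's fitted parameters.

BODY_LENGTH = 0.2   # metres (roughly the 8-inch prototype)
WAVELENGTH = 0.2    # one full wave along the body
FREQ_HZ = 2.0       # assumed tail-beat frequency

k = 2 * math.pi / WAVELENGTH   # wavenumber
w = 2 * math.pi * FREQ_HZ      # angular frequency

def amplitude(x):
    """Envelope: nearly still head, large tail excursion (quadratic growth)."""
    return 0.002 + 0.1 * (x / BODY_LENGTH) ** 2

def lateral_displacement(x, t):
    """Sideways deflection of the body at position x and time t."""
    return amplitude(x) * math.sin(k * x - w * t)

# Tail moves far more than the head, as in carangiform/thunniform swimmers.
print(amplitude(0.0), amplitude(BODY_LENGTH))
```

Concentrating the envelope's growth near the peduncle, as in the tuna-like prototypes, is what the quadratic term loosely imitates.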
Other researchers, including a team at the University of Essex, have developed new generations of robotic fish using traditional assembly of rigid components to replicate the motions of fish, but the MIT team is the only one using controlled vibrations of flexible bodies to mimic biological locomotion.
"With these polymers, you can specify stiffness in different sections, rather than building a robot with discrete sections," says Youcef-Toumi. "This philosophy can be used for more than just fish" - for example, in robotic prosthetic limbs.
Mimicking fish
With motors in their bellies and power cords trailing as they swim, the robo-fish might not be mistaken for the real thing, but they do a pretty good fish impersonation.
The team's first prototypes, about five inches long, mimic the carangiform swimming technique used by bass and trout. Most of the movement takes place in the tail end of the body. Fish that use this type of motion are generally fast swimmers, with moderate maneuverability.
Later versions of the robo-fish, about eight inches long, swim like tuna, which are adapted for even higher swimming speeds and long distances. In tuna, motion is concentrated in the tail and the peduncle region (where the tail attaches to the body), and the amplitude of body motions in this region is greater than in carangiform fish.
Real fish are exquisitely adapted to moving through their watery environment, and can swim as fast as 10 times their body length per second. So far, the MIT researchers have gotten their prototypes close to one body length per second - much slower than their natural counterparts but faster than earlier generations of robotic fish.
The new robo-fish are also more durable than older models - with their seamless bodies, there is no chance of water leaking into the robots and damaging them. Several four-year-old prototypes are still functioning after countless runs through the testing tank, which is filled with tap water.
Current prototypes require 2.5 to 5 watts of power, depending on the robot's size. That electricity now comes from an external source, but in the future the researchers hope to power the robots with a small internal battery.
Later this fall, the researchers plan to expand their research to more complex locomotion and test some new prototype robotic salamanders and manta rays.
"The fish were a proof of concept application, but we are hoping to apply this idea to other forms of locomotion, so the methodology will be useful for mobile robotics research - land, air and underwater - as well," said Valdivia Y Alvarado.
The work was funded by the Singapore-MIT Alliance and Schlumberger Ltd.
by Kevin Dalias

Civil engineers at MIT are currently developing a new breed of concrete that will be able to last for 16,000 years. Concrete is one of the most frequently used and widely produced man-made building materials on Earth, with over 20 billion tons produced per year globally. The use of this new ultra-high-density concrete will have enormous environmental implications, given its ability to deliver lighter, stronger structures capable of lasting many civilizations, while drastically decreasing the carbon emissions sent into the atmosphere by its inferior predecessor.
One of the inventors of the new material, Franz-Josef Ulm, offers, “More durable concrete means that less building material and less frequent renovations will be required.” Ulm, alongside Georgios Constantinides, successfully designed this long-lasting concrete, with significantly reduced creep (the time-dependent deformation of structural concrete), by increasing its density, cutting the creep rate by a factor of 2.6. “The thinner the structure, the more sensitive it is to creep, so up until now, we have been unable to build large-scale lightweight, durable concrete structures,” said Ulm. “With this new understanding of concrete, we could produce filigree: light, elegant, strong structures that will require far less material.”

With regard to environmental impact, the annual worldwide production of concrete accounts for between 5 and 10 percent of global CO2 emissions. Ulm explains, “If concrete were to be produced with the same amount of initial material to be seven times normal strength, we could reduce the environmental impact by 1/7. Maybe we can use nanoengineering to create such a green high-performance concrete.” The ultra-high-density concrete could deliver dramatic gains in both strength and durability, and is poised to redefine architects’ relationship with man’s most reliable building material while literally changing the face of the earth.

Lead photo by Jeff Kubina
Eco Factor: Recycled washing machine works without electricity.
Recently we saw a team of students at MIT develop an energy-harvesting shock absorber system that could provide electricity for your next-gen electric cars, and today we have another green design from the same institute. “Bicilavadora” is a new pedal-powered washing machine that has been designed by a team of students at MIT using nothing more than an old oil drum, an old bicycle and some pieces of plastic joined together.
The gadget can look at an airplane ticket and let the user know whether the flight is on time, or recognize books in a book store and then project reviews or author information from the Internet onto blank pages.
Long Beach: US university researchers have created a portable “sixth sense” device powered by commercial products that can seamlessly channel Internet information into daily routines.
The device created by Massachusetts Institute of Technology (MIT) scientists can turn any surface into a touch-screen for computing, controlled by simple hand gestures.
The gadget can even take photographs if a user frames a scene with his or her hands, or project a watch face with the proper time on a wrist if the user makes a circle there with a finger.
The MIT wizards cobbled a Web camera, a battery-powered projector and a mobile telephone into a gizmo that can be worn like jewellery. Signals from the camera and projector are relayed to smart phones with Internet connections.
“Other than letting some of you live out your fantasy of looking as cool as Tom Cruise in ‘Minority Report’, it can really let you connect as a sixth sense device with whatever is in front of you,” said MIT researcher Pattie Maes.
Maes used a Technology, Entertainment, Design Conference stage in Southern California on Wednesday to unveil the futuristic gadget made from store-bought components costing about $300.
The device can recognize items on store shelves, retrieving and projecting information about products or even providing quick signals to let users know which choices suit their tastes.
The gizmo can recognize articles in newspapers, retrieve the latest related stories or video from the Internet and play them on pages.
“You can use any surface, including your hand if nothing else is available, and interact with the data,” Maes said.
“It is very much a work in progress. Maybe in ten years we will be here with the ultimate sixth-sense brain implant.”
The MIT Media Lab has announced its latest endeavor -- the creation of a Center for Future Storytelling. The center will use new technologies to make stories more interactive, improvisational and social, according to an official statement.
Graphic: Diego Aguirre
The center is being funded by a seven-year, US$25 million commitment from Plymouth Rock Studios, a major motion picture and television studio scheduled to open in 2010 south of Boston.
Three researchers from MIT's Media Lab will co-direct the center. They are V. Michael Bove Jr., who studies object-based media and interactive television, Cynthia Breazeal, who focuses on robotics, and Ramesh Raskar, who researches imaging, display and performance-capture technologies.
The goal is to create "a sort of living story that can continue to evolve and shape depending on who is listening to it and how they can derive meaning from it," Breazeal said in a taped interview.
The center already has more than a dozen research projects in the works. They include:
Everything Tells A Story: A project that will enable everyday objects to keep running "diaries" of what happened to them. The information could be used for "personal story creation" by individuals.
Tofu: A robot that uses cartoon-animation style movement to work with kids. The researchers describe it as "LEGO Mindstorms meets Muppets." Future versions of Tofu will allow children to design, program and remotely operate their own puppets to tell stories.
Nexi: A project to create a social robot, or a "synthetic performer." The project combines mobility, dexterity, and most remarkably, sociality. The robot's expressive face is capable of multiple human facial expressions. A video of Nexi can be viewed below.
Programmable Movies: A research project to turn movies into a customized experience based on certain parameters like emotions, place or time. The idea is to let users piece together different images using metadata encoded in the images.
MIT's Media Lab was started more than 20 years ago to develop innovative technologies for human expression and interactivity. Other projects at the Center for Future Storytelling are described on the center's website.
All you art collectors out there: here is a chance to get a giclée copy of some of Ian M. Sherwin's work. Ian is planning on doing a whole series of Marblehead, Massachusetts paintings. His work is amazing.