Friday, 5 October 2012

The New Crash Test That Will Change Your Next Car

Courtesy: Popular Mechanics

                                        

New cars already get smashed in the front and the back, sideways and from above, in a battery of tests that has made vehicles safer than ever. But people still die in crashes even in these very safe new cars, so the Insurance Institute for Highway Safety (IIHS) has cooked up another way to demolish cars to test an overlooked type of crash that catches just the front edge of a car.

The IIHS already employs an offset frontal crash test, a side-impact test, a rear-impact test, and a roof-crush test to determine its Top Safety Pick award. These are not the U.S. government’s official tests; the National Highway Traffic Safety Administration does those. But scoring an "acceptable" or a "good" on the IIHS tests is necessary to earn a recommendation from the influential Consumer Reports (and to boast in commercials of being a top safety pick), so carmakers take these tests very seriously.


The new test is called a small offset frontal crash test. Currently, NHTSA does a frontal crash test that effectively rams a car into a wall at 35 mph to test head-on hits. An offset front crash test is more realistic because few head-on crashes see the cars perfectly lined up with each other. When they are offset, which is what happens when drivers swerve to avoid the impact, the crash force gets concentrated in just the part of the car that’s hit.

The IIHS’s existing offset frontal crash test sees a car plow straight into a crushable barrier (designed to replicate a collision with another car) at 40 mph, with the force concentrated on 40 percent of the front end. For the new test, that impact strikes just the corner of the car, concentrating all the force on only 25 percent of the front end. That focuses all the energy in a part of the car where there is typically no frame structure to absorb the blow, making it a tougher safety challenge for new cars.
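
As a rough back-of-the-envelope illustration of why the smaller overlap is the harder test, the sketch below compares how much crash energy each unit of engaged front-end width must manage. The 40 mph speed and the 40 and 25 percent overlaps come from the tests described above; the vehicle mass is an assumed figure, and real crash loading is far more complex than this simple ratio.

# Illustrative arithmetic only, not the IIHS protocol: compare the kinetic
# energy delivered at 40 mph with how much of the front-end width is
# engaged in each test. The 1,500 kg vehicle mass is an assumed figure.
vehicle_mass_kg = 1500.0
speed_ms = 40 * 0.44704                     # 40 mph in meters per second
kinetic_energy_kj = 0.5 * vehicle_mass_kg * speed_ms ** 2 / 1000

for name, overlap in [("moderate overlap (40%)", 0.40), ("small overlap (25%)", 0.25)]:
    print(f"{name}: ~{kinetic_energy_kj / overlap:.0f} kJ per unit of engaged front-end width")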

Thursday, 13 September 2012

How to clean up oil spills

MIT researchers have developed a new technique for magnetically separating oil and water that could be used to clean up oil spills. They believe that, with their technique, the oil could be recovered for use, offsetting much of the cost of cleanup.

The researchers will present their work at the International Conference on Magnetic Fluids in January. Shahriar Khushrushahi, a postdoc in MIT’s Department of Electrical Engineering and Computer Science, is lead author on the paper, joined by Markus Zahn, the Thomas and Gerd Perkins Professor of Electrical Engineering, and T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering. The team has also filed two patents on its work.

In the MIT researchers’ scheme, water-repellent ferrous nanoparticles would be mixed with the oil, which could then be separated from the water using magnets. The researchers envision that the process would take place aboard an oil-recovery vessel, to prevent the nanoparticles from contaminating the environment. Afterward, the nanoparticles could be magnetically removed from the oil and reused.

According to Zahn, there’s a good deal of previous research on separating water and so-called ferrofluids — fluids with magnetic nanoparticles suspended in them. Typically, these involve pumping a water-and-ferrofluid mixture through a channel, while magnets outside the channel direct the flow of the ferrofluid, perhaps diverting it down a side channel or pulling it through a perforated wall.

This approach can work if the concentration of the ferrofluid is known in advance and remains constant. But in water contaminated by an oil spill, the concentration can vary widely. Suppose the separation system consists of a branching channel with magnets along one side. If the oil concentration is zero, the water naturally flows down both branches. By the same token, if the oil concentration is low, much of the water ends up flowing down the branch intended for the oil; if the oil concentration is high, much of the oil ends up flowing down the branch intended for the water.

Orthogonal thinking

The MIT researchers vary the conventional approach in two major ways: They orient their magnets perpendicularly to the flow of the stream, not parallel to it; and they immerse the magnets in the stream, rather than positioning them outside of it.

The magnets are permanent magnets, and they’re cylindrical. Because a magnet’s magnetic field is strongest at its edges, the tips of each cylinder attract the oil much more powerfully than its sides do. In experiments the MIT researchers conducted in the lab, the bottoms of the magnets were embedded in the base of a reservoir that contained a mixture of water and magnetic oil; consequently, oil couldn’t collect around them. The tops of the magnets were above water level, and the oil shot up the sides of the magnets, forming beaded spheres around the magnets’ ends.
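
To get a feel for why the oil collects at the magnets’ ends, here is a small sketch of the on-axis field of an axially magnetized cylinder, using the standard closed-form expression. The remanence and dimensions are assumed values, not the ones used in the MIT experiments; the magnetic pull on a ferrofluid scales with the field gradient, which this simplified model shows is steepest right at the pole face.

# A minimal sketch (not the MIT team's model): on-axis flux density of an
# axially magnetized cylindrical permanent magnet, to illustrate why the
# field, and hence its gradient, is strongest near the magnet's ends.
# The remanence Br and the magnet dimensions below are assumed values.
import math

Br = 1.2        # remanence in tesla (assumed, typical for NdFeB)
L = 0.05        # magnet length in meters (assumed)
R = 0.005       # magnet radius in meters (assumed)

def b_axial(z):
    """Flux density on the axis, z meters above one pole face."""
    return (Br / 2.0) * ((z + L) / math.sqrt((z + L) ** 2 + R ** 2)
                         - z / math.sqrt(z ** 2 + R ** 2))

# The magnetic body force on a ferrofluid scales with the field gradient,
# so a steep drop in B just above the pole face means a strong pull there.
for z_mm in (0.5, 2, 5, 10, 20):
    z = z_mm / 1000.0
    dBdz = (b_axial(z + 1e-4) - b_axial(z - 1e-4)) / 2e-4
    print(f"z = {z_mm:4.1f} mm: B = {b_axial(z):.3f} T, dB/dz = {dBdz:8.1f} T/m")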

The design is simple, but it provides excellent separation between oil and water. Moreover, Khushrushahi says, simplicity is an advantage in a system that needs to be manufactured on a large scale and deployed at sea for days or weeks, where electrical power is scarce and maintenance facilities limited. “The process may seem simple,” he says, “but it is, inherently, supposed to be simple.”

In their experiments, the MIT researchers used a special configuration of magnets, called a Halbach array, to extract the oil from the tops of the cylindrical magnets. When attached to the cylinders, the Halbach array looks kind of like a model-train boxcar mounted on pilings. The magnets in a Halbach array are arranged so that on one side of the array, the magnetic field is close to zero, but on the other side, it’s roughly doubled. In the researchers’ experiments, the oil in the reservoir wasn’t attracted to the bottom of the array, but the top of the array pulled the oil off of the cylindrical magnets.

Leaving the lab

Whether the Halbach array would be the most practical way to remove oil from the cylindrical magnets in an actual oil-recovery system remains to be seen. The researchers also need to determine how much water gets dissolved in the oil, and how it can best be removed. “To our eye, you don’t see much moisture in there, but I’m sure that there is some moisture that adheres to it,” Zahn says. “We might have to run it through multiple cycles.” On a commercial scale, it could make sense for an oil-recovery vessel to perform an initial separation of oil and water and then haul the oil ashore for further refinement.

“This oil-spill problem has not really been worked on intensively that I know of, and of course it’s a big problem,” says Ronald Rosensweig, a former Exxon researcher and a pioneer in the study of ferrofluids who wrote the field’s first textbook. “You could think of separating oil from water by centrifuging or something like that, but in a lot of cases, the fluids are pretty much equal in density: Some of the oil sinks, some of it floats, and a lot of it is in between. The magnetic hook could, hopefully, make separation faster and better.”

Adding nanoparticles to oil mixed with water to produce a ferrofluid aboard a ship should be “no problem,” Rosensweig says. And with a technique called high-gradient magnetic separation, “It’s known that the gradient can pull the particles out of suspension,” he says, so recovering both the nanoparticles and the oil is feasible. “It’s been done on a small scale,” Rosensweig says.


Thursday, 29 March 2012

Energy efficient buildings

Many workplaces feature major changes in occupancy over the course of a week. In academic buildings, hundreds of students may pour in for a lecture, then leave an hour or two later, while faculty, researchers and staff can enter and exit in irregular patterns. In commercial structures, workers may come and go en masse during short time periods during the day. As a result, energy use in virtually all workspaces can rapidly become inefficient — too large or too small — in relation to the number of people inside.

Now, a new study done on MIT buildings reveals some data that could help designers and building managers, on campuses or in the commercial sector, optimize energy usage — and suggests a template for conducting more research on the subject. 

The study, published in the April issue of the journal Energy and Buildings, examines data from MIT’s buildings M37 and E52, and finds that while electricity use corresponds to occupancy fairly well in those spaces, the activity of heating, ventilation and air conditioning (HVAC) systems in the buildings does not correlate closely to occupancy. 

“It can take a huge amount of energy to heat up buildings during the day, and then suddenly there may be nobody there,” says co-author Carlo Ratti, director of the Senseable City Laboratory and associate professor of the practice in MIT’s Department of Urban Studies and Planning (DUSP). “There is significant potential for improvement and savings, if you could get a more dynamic usage of energy in this area.” 

As Ratti and the co-authors note, the federal government estimates that commercial buildings account for about 20 percent of U.S. energy consumption, and 12 percent of U.S. contributions to greenhouse gas emissions; past studies have shown that commercial buildings could reduce their energy use 20 to 30 percent by implementing strategies that better match energy use and need.  

An energy disconnect

The new study uses data about Wi-Fi connections as a proxy for building occupancy, a method the researchers believe could be replicated elsewhere at low cost; while the data does not necessarily reveal an exact population count inside buildings, it does indicate relative occupancy levels over time. “It’s a way of trying to make use of information that already exists,” Ratti says. 

The researchers analyzed two very different kinds of buildings. Building M37 at MIT — the Ronald M. McNair Building, named for the former astronaut — houses researchers in astrophysics, aeronautics and astronautics in a combination of offices, classrooms and labs. By contrast, Building E52, the longtime home of MIT’s Department of Economics, is an Art Deco box standing alone on Memorial Drive by the Charles River, with a sizable entrance atrium and a large number of offices inside.

Analyzing data from all four seasons of 2006, the researchers found that Building E52 has lower levels of energy use, and that both buildings have a distinctive cyclical “signature” of electricity usage that rises and falls daily. Both buildings use more steam (for heat) in winter and spring, and more chilled water (for air conditioning) in summer and fall. But while about two-thirds of the variation in electricity levels can be accounted for by changing occupancy levels, the use of the HVAC systems correlated only weakly to occupancy.
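
For readers curious how such a correlation might be computed, here is a minimal sketch in Python. It uses synthetic numbers, not the MIT data, and simply regresses hourly electricity use against Wi-Fi connection counts to estimate the share of variance explained by the occupancy proxy.

# A minimal sketch of the kind of proxy analysis described above, using
# synthetic data (not the MIT measurements): regress hourly electricity
# use on Wi-Fi connection counts and report how much of the variance the
# occupancy proxy explains (R^2).
import numpy as np

rng = np.random.default_rng(0)

hours = np.arange(24 * 7)                              # one synthetic week
wifi_connections = 200 + 150 * np.clip(np.sin((hours % 24 - 6) / 12 * np.pi), 0, None)
electricity_kwh = 50 + 0.3 * wifi_connections + rng.normal(0, 15, hours.size)

# Ordinary least-squares fit: electricity ~ a * connections + b
a, b = np.polyfit(wifi_connections, electricity_kwh, 1)
predicted = a * wifi_connections + b
ss_res = np.sum((electricity_kwh - predicted) ** 2)
ss_tot = np.sum((electricity_kwh - electricity_kwh.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"slope = {a:.2f} kWh per connection, R^2 = {r_squared:.2f}")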

In short, these MIT buildings tended to be heated or cooled over extended periods of time according to season, but not in a way that optimized the use of energy. “We were not surprised to see this disconnect, but it was good to be able to quantify it,” says Prudence Robinson, a researcher in the Senseable City Lab and a co-author of the paper. 

In addition to Ratti, the co-authors of the new paper — titled “ENERNET: Studying the dynamic relationship between building occupancy and energy consumption” — are David Lee, an MIT PhD student in DUSP; Claudio Martani, a former visiting PhD student in DUSP from the Politecnico di Milano in Italy; Rex Britter, an engineer specializing in fluid dynamics who is currently a research scientist in the Senseable City Lab; and Robinson. 

Experts on energy use in the workplace say the paper deals with a significant issue in their field. “This paper is very much in harmony with our recent efforts to design and modify building infrastructure and systems to match varying occupancies,” says Jay Phillips, senior director of operations for Harvard University’s Faculty of Arts and Sciences, who was not involved in this study. He adds: “There is no question that matching building and equipment schedules with dynamic occupancy patterns is a significant conservation opportunity.”

‘You want to have a better match’

Part of the “disconnect” between occupancy and energy use, the researchers acknowledge, may derive from the special needs of academic buildings, which often house labs with around-the-clock heating or refrigeration demands. 

M37 has some spaces like this, although E52 does not. Either way, though, the nuances of academic buildings make them “an ideal test bed” for thinking about ways of saving energy, according to Britter. Suppose a building’s largest room is a lecture hall that is only used for an hour or two a day, but in winter demands a significant amount of energy in order to become warm. One possible way of using energy more efficiently would be to surround the lecture hall with heavily used offices, which can then receive the heat seeping out of the larger space.

In such cases, Britter says, “you’ve got to think through the best way of having a fairly complicated arrangement of offices, so that heat could be distributed from the lecture hall to more rooms.”

There are many possible architectural or engineering solutions to this issue, Ratti notes. The larger point, he says, is that “you want to have a better match. You can move the people to the energy, in which case the architecture can help a lot, or you can move the energy to the people, which is more futuristic.” New sensing-based thermostats, for instance, could regulate temperatures on a more granular basis within buildings, according to the distribution of people within them. 

Alternatively, Ratti adds, existing spaces can be used for new purposes — such as a once rarely used large common area in MIT’s Building 9, which now houses its Steam Café, and thus has chairs and tables occupied throughout the day. “That’s a much more intensive and better use of the space, with the same embedded energy cost,” Ratti says.

Ratti and the Senseable City researchers are currently pursuing several other studies that use IT data to track the flow of people in urban areas, and say they would like to continue to analyze energy use as part of their ongoing work.

Tuesday, 27 March 2012

A new dimension for solar energy

Two small-scale versions of three-dimensional photovoltaic arrays were among those tested by Jeffrey Grossman and his team on an MIT rooftop to measure their actual electrical output throughout the day.





Intensive research around the world has focused on improving the performance of solar photovoltaic cells and bringing down their cost. But very little attention has been paid to the best ways of arranging those cells, which are typically placed flat on a rooftop or other surface, or sometimes attached to motorized structures that keep the cells pointed toward the sun as it crosses the sky.

Now, a team of MIT researchers has come up with a very different approach: building cubes or towers that extend the solar cells upward in three-dimensional configurations. Amazingly, the results from the structures they’ve tested show power output ranging from double to more than 20 times that of fixed flat panels with the same base area. 

The biggest boosts in power were seen in the situations where improvements are most needed: in locations far from the equator, in winter months and on cloudier days. The new findings, based on both computer modeling and outdoor testing of real modules, have been published in the journal Energy and Environmental Science.

“I think this concept could become an important part of the future of photovoltaics,” says the paper’s senior author, Jeffrey Grossman, the Carl Richard Soderberg Career Development Associate Professor of Power Engineering at MIT.

The MIT team initially used a computer algorithm to explore an enormous variety of possible configurations, and developed analytic software that can test any given configuration under a whole range of latitudes, seasons and weather. Then, to confirm their model’s predictions, they built and tested three different arrangements of solar cells on the roof of an MIT laboratory building for several weeks.

While the cost of a given amount of energy generated by such 3-D modules exceeds that of ordinary flat panels, the expense is partially balanced by a much higher energy output for a given footprint, as well as much more uniform power output over the course of a day, over the seasons of the year, and in the face of blockage from clouds or shadows. These improvements make power output more predictable and uniform, which could make integration with the power grid easier than with conventional systems, the authors say.

The basic physical reason for the improvement in power output — and for the more uniform output over time — is that the 3-D structures’ vertical surfaces can collect much more sunlight during mornings, evenings and winters, when the sun is closer to the horizon, says co-author Marco Bernardi, a graduate student in MIT’s Department of Materials Science and Engineering (DMSE).
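
The geometric core of that argument can be sketched in a few lines of code. This is only an illustration of the sine/cosine effect for a sun-facing vertical surface under an assumed direct-beam irradiance; it is not the model used in the paper.

# A simple geometric illustration (not the paper's model) of why vertical
# surfaces gain at low sun angles: direct-beam irradiance intercepted by a
# horizontal panel scales with sin(elevation), while a vertical panel that
# faces the sun scales with cos(elevation). Atmospheric effects are ignored.
import math

direct_beam = 1000.0  # W/m^2, assumed direct-normal irradiance

print(f"{'sun elevation':>14} {'horizontal':>12} {'vertical':>10}")
for elevation_deg in (10, 20, 30, 45, 60, 75):
    e = math.radians(elevation_deg)
    horizontal = direct_beam * math.sin(e)   # flat rooftop panel
    vertical = direct_beam * math.cos(e)     # sun-facing vertical face
    print(f"{elevation_deg:>13}° {horizontal:>10.0f} W {vertical:>8.0f} W")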

The time is ripe for such an innovation, Grossman adds, because solar cells have become less expensive than accompanying support structures, wiring and installation. As the cost of the cells themselves continues to decline more quickly than these other costs, they say, the advantages of 3-D systems will grow accordingly.

“Even 10 years ago, this idea wouldn’t have been economically justified because the modules cost so much,” Grossman says. But now, he adds, “the cost for silicon cells is a fraction of the total cost, a trend that will continue downward in the near future.” Currently, up to 65 percent of the cost of photovoltaic (PV) energy is associated with installation, permission for use of land and other components besides the cells themselves.

Although computer modeling by Grossman and his colleagues showed that the biggest advantage would come from complex shapes — such as a cube where each face is dimpled inward — these would be difficult to manufacture, says co-author Nicola Ferralis, a research scientist in DMSE. The algorithms can also be used to optimize and simplify shapes with little loss of energy. It turns out the difference in power output between such optimized shapes and a simpler cube is only about 10 to 15 percent — a difference that is dwarfed by the greatly improved performance of 3-D shapes in general, he says. The team analyzed both simpler cubic and more complex accordion-like shapes in their rooftop experimental tests.

At first, the researchers were distressed when almost two weeks went by without a clear, sunny day for their tests. But then, looking at the data, they realized they had learned important lessons from the cloudy days, which showed a huge improvement in power output over conventional flat panels. 

For an accordion-like tower — the tallest structure the team tested — the idea was to simulate a tower that “you could ship flat, and then could unfold at the site,” Grossman says. Such a tower could be installed in a parking lot to provide a charging station for electric vehicles, he says.

So far, the team has modeled individual 3-D modules. A next step is to study a collection of such towers, accounting for the shadows that one tower would cast on others at different times of day. In general, 3-D shapes could have a big advantage in any location where space is limited, such as flat-rooftop installations or in urban environments, they say. Such shapes could also be used in larger-scale applications, such as solar farms, once shading effects between towers are carefully minimized.

A few other efforts — including even a middle-school science-fair project last year — have attempted 3-D arrangements of solar cells. But, Grossman says, “our study is different in nature, since it is the first to approach the problem with a systematic and predictive analysis.” 

David Gracias, an associate professor of chemical and biomolecular engineering at Johns Hopkins University who was not involved in this research, says that Grossman and his team “have demonstrated theoretical and proof-of-concept evidence that 3-D photovoltaic elements could provide significant benefits in terms of capturing light at different angles. The challenge, however, is to mass produce these elements in a cost-effective manner.”

Monday, 26 March 2012

A faster way to catch cells: New microfluidic device could be used to diagnose and monitor cancer and other diseases.



This microfluidic device can rapidly isolate target cells using a nanoporous membrane sandwiched between two channels.


Separating complex mixtures of cells, such as those found in a blood sample, can offer valuable information for diagnosing and treating disease. However, it may be necessary to search through billions of other cells to collect rare cells such as tumor cells, stem cells or fetal cells. “You’re basically looking for a needle in a haystack,” says Sukant Mittal, a graduate student in the Harvard-MIT Division of Health Sciences and Technology (HST). 

Mittal and his colleagues at MIT and Massachusetts General Hospital (MGH) have now demonstrated a new microfluidic device that can isolate target cells much faster than existing devices. Such technology could be used in applications such as point-of-care diagnostics and personalized medicine.

The researchers describe their results in the Feb. 21 issue of Biophysical Journal. Other authors of the paper are Ian Wong, a postdoc at MGH and Harvard Medical School (HMS), MIT chemical engineering professor William Deen, and Mehmet Toner, a professor of biomedical engineering at MGH, HMS and HST.

Researchers have used a number of techniques to sort cells based on differences in size, density or electrical properties. However, since the physical characteristics of cells can vary significantly, these techniques risk separating cells incorrectly, leading to an erroneous diagnosis. A more specific way to isolate cells is to use antibodies that latch on to distinctive molecules displayed on the surfaces of the target cells.

However, this selective approach only works if the target cells come into contact with the antibodies designed to capture them. This is unlikely to happen when the cells are moving at relatively high speeds.

“Imagine you’re standing on a bridge over a river, and you throw a message in a bottle out in the middle,” Wong says. “If the river is moving really slowly, you could imagine that eventually the bottle will drift over to the riverbank and somebody can grab it. But if the river is flowing too quickly, then the bottle is swept downstream without ever approaching the sides.”

That’s the problem the team needed to solve, Wong says: “Can we steer the bottle toward the riverbank so that it can get caught?” To achieve that, the MIT and MGH researchers designed their device to guide the fluid toward the bottom of the channel as it flows, bringing more of the cells in contact with the antibodies. Key to their new design is the use of a soft membrane with nanoscale pores, which separates two adjacent microchannels. 

Cells enter one channel only, and as they flow through the channel, the fluid is rapidly drawn to the porous divider, bringing the cells with it. Fluid can pass into the other channel, but the cells cannot. Once they reach the surface, they start rolling — slowly enough that target cells have time to attach to the antibodies and get captured, but fast enough to keep the other cells moving. Such rolling behavior is similar to how white blood cells or stem cells selectively “home in” to sites of infection and injury in the body. 
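
A rough way to see why the transverse flow matters is the back-of-the-envelope sketch below. The channel dimensions and flow speeds are assumed, illustrative values rather than figures from the paper; the point is only that drawing fluid through the membrane bounds how far downstream any cell can travel before it touches the antibody-coated surface.

# A back-of-envelope sketch (not the authors' model) of the "steer the
# bottle to the riverbank" idea: if fluid is drawn through the porous
# divider at a transverse speed v while it moves downstream at speed u in
# a channel of height h, a cell starting at the top reaches the membrane
# after travelling roughly u * h / v downstream. All numbers are assumed.

u = 5e-3      # axial flow speed, m/s (assumed)
v = 50e-6     # transverse speed toward the membrane, m/s (assumed)
h = 100e-6    # channel height, m (assumed)

time_to_membrane = h / v                    # seconds for the worst-placed cell
downstream_distance = u * time_to_membrane  # meters travelled in that time

print(f"time to reach membrane: {time_to_membrane:.1f} s")
print(f"downstream distance:    {downstream_distance * 1000:.1f} mm")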

Shashi Murthy, an associate professor of chemical engineering at Northeastern University, says the device is simple but very well-designed. “The field of microfluidics is very largely done by experimental trial and error,” says Murthy, who was not involved in this research. “One seldom sees as in-depth an analysis, and one so well-grounded in theory.”

One potential application for these devices is to isolate cancer cells from patient blood samples. Toner’s group has previously shown that the number of circulating tumor cells in the bloodstream correlates with the clinical response to treatment in a given patient, suggesting the potential for personalized medicine for cancer patients. 

“Considerable validation and testing will be necessary before this early-stage device can be deployed in the clinic,” Toner says. “Nevertheless, this novel approach may enable exciting diagnostic and therapeutic opportunities that are not feasible using existing technologies.” 

Sunday, 25 March 2012

A biplane to break the sound barrier: Cheaper, quieter and fuel-efficient biplanes could put supersonic travel on the horizon

For 27 years, the Concorde provided its passengers with a rare luxury: time saved. For a pricey fare, the sleek supersonic jet ferried its ticketholders from New York to Paris in a mere three-and-a-half hours — just enough time for a nap and an aperitif. Over the years, expensive tickets, high fuel costs, limited seating and noise disruption from the jet’s sonic boom slowed interest and ticket sales. On Nov. 26, 2003, the Concorde — and commercial supersonic travel — retired from service. 

Since then, a number of groups have been working on designs for the next generation of supersonic jets. Now an MIT researcher has come up with a concept that may solve many of the problems that grounded the Concorde. Qiqi Wang, an assistant professor of aeronautics and astronautics, says the solution, in principle, is simple: Instead of flying with one wing to a side, why not two? 

Wang and his colleagues Rui Hu, a postdoc in the Department of Aeronautics and Astronautics, and Antony Jameson, a professor of engineering at Stanford University, have shown through a computer model that a modified biplane can, in fact, produce significantly less drag than a conventional single-wing aircraft at supersonic cruise speeds. The group will publish their results in the Journal of Aircraft.

This decreased drag, according to Wang, means the plane would require less fuel to fly. It also means the plane would produce less of a sonic boom. 

“The sonic boom is really the shock waves created by the supersonic airplanes, propagated to the ground,” Wang says. “It’s like hearing gunfire. It’s so annoying that supersonic jets were not allowed to fly over land.” 

Double the wings, double the fun

With Wang’s design, a jet with two wings — one positioned above the other — would cancel out the shock waves produced from either wing alone. Wang credits German engineer Adolf Busemann for the original concept. In the 1950s, Busemann came up with a biplane design that essentially eliminates shock waves at supersonic speeds.

Normally, as a conventional jet nears the speed of sound, air starts to compress at the front and back of the jet. As the plane reaches and surpasses the speed of sound, or Mach 1, the sudden increase in air pressure creates two huge shock waves that radiate out at both ends of the plane, producing a sonic boom.  

Through calculations, Busemann found that a biplane design could essentially do away with shock waves. Each wing of the design, when seen from the side, is shaped like a flattened triangle, with the top and bottom wings pointing toward each other. The configuration, according to his calculations, cancels out shock waves produced by each wing alone. 

However, the design lacks lift: The two wings create a very narrow channel through which only a limited amount of air can flow. When transitioning to supersonic speeds, the channel, Wang says, could essentially “choke,” creating incredible drag. While the design could work beautifully at supersonic speeds, it can’t overcome the drag to reach those speeds. 

Giving lift to a grounded theory

To address the drag issue, Wang, Hu and Jameson designed a computer model to simulate the performance of Busemann’s biplane at various speeds. At a given speed, the model determined the optimal wing shape to minimize drag. The researchers then aggregated the results from a dozen different speeds and 700 wing configurations to come up with an optimal shape for each wing. 

They found that smoothing out the inner surface of each wing slightly created a wider channel through which air could flow. The researchers also found that by bumping out the top edge of the higher wing, and the bottom edge of the lower wing, the conceptual plane was able to fly at supersonic speeds, with half the drag of conventional supersonic jets such as the Concorde. Wang says this kind of performance could potentially cut the amount of fuel required to fly the plane by more than half.

“If you think about it, when you take off, not only do you have to carry the passengers, but also the fuel, and if you can reduce the fuel burn, you can reduce how much fuel you need to carry, which in turn reduces the size of the structure you need to carry the fuel,” Wang says. “It’s kind of a chain reaction.”
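
One way to see that chain reaction quantitatively is the classical Breguet range equation, sketched below with assumed numbers. This is not the authors' analysis; it simply shows how doubling the lift-to-drag ratio (the effect of halving drag) shrinks the fuel fraction needed to fly a fixed route.

# An illustrative use of the classical Breguet range equation (not the
# authors' analysis) to show how halving drag, i.e. doubling lift-to-drag
# ratio L/D, shrinks the fuel fraction needed for a fixed supersonic leg.
# Speed, specific fuel consumption, range and L/D values are all assumed.
import math

def fuel_fraction(range_m, speed_ms, tsfc_per_s, lift_to_drag):
    """Fraction of takeoff weight burned as fuel, from Breguet's equation."""
    weight_ratio = math.exp(range_m * tsfc_per_s / (speed_ms * lift_to_drag))
    return 1.0 - 1.0 / weight_ratio

range_m = 5.5e6        # roughly New York to Paris, meters
speed_ms = 590.0       # roughly Mach 2 cruise, m/s (assumed)
tsfc = 3.3e-4          # thrust-specific fuel consumption, 1/s (assumed)

for ld in (7.0, 14.0):  # baseline vs. drag halved (L/D doubled)
    print(f"L/D = {ld:4.1f}: fuel fraction = {fuel_fraction(range_m, speed_ms, tsfc, ld):.2f}")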

The team’s next step is to design a three-dimensional model to account for other factors affecting flight. While the MIT researchers are looking for a single optimal design for supersonic flight, Wang points out that a group in Japan has made progress in designing a Busemann-like biplane with moving parts: The wings would essentially change shape in mid-flight to attain supersonic speeds. 

“Now people are having more ideas on how to improve [Busemann’s] design,” Wang says. “This may lead to a dramatic improvement, and there may be a boom in the field in the coming years.”

“There are many challenges in designing realistic supersonic aircraft, such as high drag, efficient engines and low sonic-boom signature,” says Karthik Duraisamy, assistant professor of aeronautics and astronautics at Stanford University, who was not involved in the research. “Dr. Wang’s paper presents an important first step towards reducing drag, and there is also potential to address structural issues.” 

Thursday, 22 March 2012

Liquid-like Materials May Pave Way for New Thermoelectric Devices


In the continual quest for better thermoelectric materials—which convert heat into electricity and vice versa—researchers have identified a liquid-like compound whose properties give it the potential to be even more efficient than traditional thermoelectrics.
Thermoelectric materials have been used to power spacecraft ranging from Apollo to the Curiosity rover now headed for Mars. Recently, however, scientists and engineers have been turning to these materials to use wasted heat—released from automobiles or industrial machinery, for instance—as an efficient energy source. They have also proposed using these materials to create more efficient heating systems in electric cars or even as new ways to exploit solar power.
In identifying this new type of thermoelectric material, the researchers studied a material made from copper and selenium. Although it is physically a solid, it exhibits liquid-like behaviors due to the way its copper atoms flow through the selenium's crystal lattice.
"It's like a wet sponge," explains Jeff Snyder, a faculty associate in applied physics and materials science in the Division of Engineering and Applied Science at the California Institute of Technology (Caltech) and a member of the research team. "If you have a sponge with very fine pores in it, it looks and acts like a solid. But inside, the water molecules are diffusing just as fast as they would if they were a regular liquid. That's how I imagine this material works. It has a solid framework of selenium atoms, but the copper atoms are diffusing around as fast as they would in a liquid."
The research, led by scientists from the Chinese Academy of Science's Shanghai Institute of Ceramics in collaboration with researchers from Brookhaven National Laboratory and the University of Michigan, as well as from Caltech, is described in a paper recently published in the journal Nature Materials.
A thermoelectric material generates electricity when there is a temperature difference between one end of the material and the other. For example, if you place a thermoelectric device right next to a heat source—say a laptop battery—then the side closest to the battery will be hotter. The electrons in the hot end will diffuse to the cool end, producing an electric current.
A good thermoelectric material must be good at conducting electricity but bad at conducting heat. If it were good at conducting heat, the heat from the hot end would move to the cool end so fast that the whole material would rapidly reach the same temperature. When that happens, the electrons stop flowing.
One way to improve thermoelectric efficiency, then, is to decrease a material's ability to conduct heat. To that end, researchers have been developing thermoelectric materials with a mix of crystalline and amorphous properties, Snyder says. A crystalline atomic structure allows electrons to flow easily, while an amorphous material, such as glass, has a more irregular atomic structure that hinders heat-carrying vibrations from traveling.
These heat-carrying vibrations travel via two types of waves. The first type is a longitudinal or pressure wave, in which the direction of displacement—in this case, the jiggling of atoms—is the same as the direction of the wave. The second type is a transverse wave, in which the direction of displacement is perpendicular to the direction of the wave, like when you shake a jump rope up and down, resulting in waves that travel horizontally along the rope.
In a solid material, a transverse wave travels because there is friction between the atoms, meaning that when one atom vibrates up and down, an adjacent atom moves with it, and the wave propagates. But in a liquid, there is minimal friction between the atoms, and a vibrating atom just slides up and down next to its neighbor. As a result, transverse waves cannot travel inside a liquid. Ocean waves are different because they have an interface between the liquid and the air.
The team found that because heat-carrying vibrations in a liquid can travel only via longitudinal waves, a material with liquid-like properties is less thermally conductive. Therefore, a liquid-like material that's also good at conducting electrically should be more thermoelectrically efficient than traditional amorphous materials, Snyder says.
In the case of the copper-selenium material that the researchers studied, the crystal structure of the selenium helps conduct electricity, while the free-flowing copper atoms behave like a liquid, damping down thermal conductivity. The efficiency of a thermoelectric material is quantified using a number called a "thermoelectric figure of merit." The copper-selenium material has a thermoelectric figure of merit of 1.5 at 1,000 kelvins, one of the highest values in any bulk material, the researchers say.
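
To connect the figure of merit to conversion efficiency, the standard textbook expression for the maximum efficiency of a thermoelectric generator can be sketched as below. The operating temperatures are assumed values chosen for illustration, not conditions reported in the paper.

# A minimal sketch using the standard textbook expression for the maximum
# efficiency of a thermoelectric generator leg with figure of merit zT
# (taken here at the mean operating temperature). The hot- and cold-side
# temperatures are assumed values for illustration, not from the paper.
import math

def max_efficiency(zT, t_hot, t_cold):
    """Carnot factor times the zT-dependent device factor."""
    carnot = (t_hot - t_cold) / t_hot
    root = math.sqrt(1.0 + zT)
    return carnot * (root - 1.0) / (root + t_cold / t_hot)

t_hot, t_cold = 1000.0, 400.0   # kelvins (assumed operating temperatures)
for zT in (0.5, 1.0, 1.5):
    print(f"zT = {zT:.1f}: max conversion efficiency = {max_efficiency(zT, t_hot, t_cold):.1%}")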
NASA engineers first used this copper-selenium material roughly 40 years ago for spacecraft design, Snyder says. But its liquid-like properties—which were not understood at the time—made it difficult to work with. This new research, he says, has identified and explained why this copper-selenium material has such efficient thermoelectric properties, potentially opening up a whole new class of liquid-like thermoelectric materials for investigation.
"Hopefully, the scientific community now has another strategy to work with when looking for materials with a high thermoelectric figure of merit," Snyder says.

Sunday, 12 February 2012

Free Piston Stirling Engines


The Stirling engine is a heat engine that is vastly different from the internal-combustion engine in your car. Invented by Robert Stirling in 1816, the Stirling engine has the potential to be much more efficient than a gasoline or diesel engine. But today, Stirling engines are used only in some very specialized applications, such as in submarines or as auxiliary power generators for yachts, where quiet operation is important. Although there hasn't been a successful mass-market application for the Stirling engine, some high-powered inventors are working on it.
A Stirling engine uses the Stirling cycle, which is unlike the cycles used in internal-combustion engines.
  • The gases used inside a Stirling engine never leave the engine. There are no exhaust valves that vent high-pressure gases, as in a gasoline or diesel engine, and there are no explosions taking place. Because of this, Stirling engines are very quiet.
  • The Stirling cycle uses an external heat source, which could be anything from gasoline to solar energy to the heat produced by decaying plants. No combustion takes place inside the cylinders of the engine.

    Free Piston Stirling Engines:

    "Free piston" Stirling engines include those with liquid pistons and those with diaphragms as pistons. In a "free piston" device, energy may be added or removed by an electrical linear alternator, pump or other coaxial device. This avoids the need for a linkage, and reduces the number of moving parts. In some designs, friction and wear are nearly eliminated by the use of non-contact gas bearings or very precise suspension through planar springs.
    The cycle of a "free piston" Stirling engine has four basic steps (a sketch of the ideal cycle energetics follows the list):
    1. The power piston is pushed outward by the expanding gas, doing work. Gravity plays no role in the cycle.
    2. The gas volume in the engine increases, so the pressure drops; the resulting pressure difference across the displacer rod forces the displacer toward the hot end. While the displacer moves, the piston is almost stationary, so the gas volume is nearly constant. This step amounts to a constant-volume cooling process, which further reduces the pressure of the gas.
    3. The reduced pressure arrests the outward motion of the piston; it then accelerates back toward the hot end and, by its own inertia, compresses the now-cold gas, which sits mainly in the cold space.
    4. As the pressure increases, a point is reached where the pressure differential across the displacer rod becomes large enough to push the displacer rod (and therefore also the displacer) toward the piston, collapsing the cold space and transferring the cold, compressed gas toward the hot side in a nearly constant-volume process. As the gas arrives at the hot side, the pressure increases and begins to push the piston outward again, initiating the next expansion step.
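
    As a companion to the steps above, here is a hedged sketch of the energetics of the ideal Stirling cycle with perfect regeneration, using textbook ideal-gas relations. The temperatures, gas quantity and volume ratio are assumed values, and a real free-piston engine falls well short of this ideal.

# A hedged sketch of the *ideal* Stirling cycle energetics (perfect
# regeneration, ideal gas), not a model of any particular free-piston
# engine. Temperatures, gas amount and volume ratio are assumed values.
import math

R_GAS = 8.314          # J/(mol K), universal gas constant
n = 0.05               # moles of working gas (assumed)
t_hot, t_cold = 900.0, 330.0   # kelvins (assumed heater / cooler temperatures)
v_ratio = 2.0          # V_max / V_min (assumed compression ratio)

# Isothermal expansion at T_hot absorbs Q_hot; isothermal compression at
# T_cold rejects Q_cold; an ideal regenerator recycles the constant-volume heat.
q_hot = n * R_GAS * t_hot * math.log(v_ratio)
q_cold = n * R_GAS * t_cold * math.log(v_ratio)
work_per_cycle = q_hot - q_cold
efficiency = work_per_cycle / q_hot            # equals 1 - T_cold / T_hot

print(f"work per cycle: {work_per_cycle:.1f} J")
print(f"ideal efficiency: {efficiency:.1%} (Carnot limit for these temperatures)")
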
    In the early 1960s, W.T. Beale invented a free piston version of the Stirling engine in order to overcome the difficulty of lubricating the crank mechanism. While the invention of the basic free piston Stirling engine is generally attributed to Beale, independent inventions of similar types of engines were made by E.H. Cooke-Yarborough and C. West at the Harwell Laboratories of the UKAERE. G.M. Benson also made important early contributions and patented many novel free-piston configurations.
    The figure above shows various free-piston Stirling configurations: F. "free cylinder"; G. Fluidyne; H. "double-acting" Stirling (typically 4 cylinders).

    What appears to be the first mention of a Stirling cycle machine using freely moving components is a British patent disclosure in 1876. This machine was envisaged as a refrigerator (i.e., the reversed Stirling cycle). The first consumer product to utilize a free piston Stirling device was a portable refrigerator manufactured by Twinbird Corporation of Japan and offered in the US by Coleman in 2004.