Friday, February 25, 2011

Single Photon Management for Quantum Computers !!


The quantum computers of tomorrow might use photons, or particles of light, to move around the data they need to make calculations, but photons are tricky to work with. Two new papers* by researchers working at the National Institute of Standards and Technology (NIST) have brought science closer to creating reliable sources of photons for these long-heralded devices.

In principle, quantum computers can perform calculations that are impossible or impractical using conventional computers by taking advantage of the peculiar rules of quantum mechanics. To do this, they need to operate on things that can be manipulated into specific quantum states. Photons are among the leading contenders.

The new NIST papers address one of the many challenges to a practical quantum computer: the need for a device that produces photons in ready quantities, but only one at a time, and only when the computer's processor is ready to receive them. Just as garbled data will confuse a standard computer, an information-bearing photon that enters a quantum processor together with other particles -- or when the processor is not expecting it -- can ruin a calculation.

The single-photon source has been elusive for nearly two decades, in part because no method of producing these particles individually is ideal. "It's a bit like playing a game of whack-a-mole, where solving one problem creates others," says Alan Migdall of NIST's Optical Technology Division. "The best you can do is keep all the issues under control somewhat. You can never get rid of them."

The team's first paper addresses the need to be certain that a photon is indeed coming when the processor expects it, and that none show up unexpectedly. Many kinds of single-photon sources create a pair of photons and send one of them to a detector, which signals the processor that the second, information-bearing photon is on its way. But since detectors are not perfectly accurate, they sometimes miss the "herald" photon -- and its twin zips into the processor, gumming up the works.

The team, in collaboration with researchers from Italy's national metrology laboratory, the Istituto Nazionale di Ricerca Metrologica (INRIM), handled the issue by building a simple gate into the source. When a herald photon reaches the detector, the gate opens, allowing the second photon to pass. "You get a photon when you expect one, and you don't get one when you don't," Migdall says. "It was an obvious solution; others proposed it long ago, we were just the first ones to build it. It makes the single-photon source better."
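
To see why the gate matters, consider a minimal Monte Carlo sketch (our illustration, not the NIST team's code): with an imperfect herald detector, an ungated source occasionally delivers photons the processor never hears about, while a gated source blocks them. The efficiency figure below is an assumed parameter.

```python
import random

# Illustrative sketch (not the NIST team's code): a photon-pair source
# with an imperfect herald detector. Without a gate, a missed herald
# lets an unannounced photon reach the processor; with a gate that
# opens only on a herald click, that photon is blocked.
HERALD_EFFICIENCY = 0.6   # assumed detector efficiency
N_PAIRS = 100_000

heralded = 0              # photon arrives and was announced (both designs)
unannounced_ungated = 0   # ungated source: photon slips through unannounced
unannounced_gated = 0     # gated source: gate stays shut, so this stays 0

for _ in range(N_PAIRS):
    if random.random() < HERALD_EFFICIENCY:   # detector sees the herald
        heralded += 1                         # gate opens; photon expected
    else:                                     # herald missed
        unannounced_ungated += 1              # no gate: twin gums up the works
                                              # with gate: photon blocked

print(f"announced photons delivered: {heralded}")
print(f"unannounced, ungated source: {unannounced_ungated}")
print(f"unannounced, gated source:   {unannounced_gated}")
```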

Read More: Single Photon Management for Quantum Computers

Saturday, February 12, 2011

How Much Information Is There in the World !!


A study published in Science Express, an electronic journal that provides select Science articles ahead of print, calculates the world's total technological capacity -- how much information humankind is able to store, communicate and compute.

"We live in a world where economies, political freedom and cultural growth increasingly depend on our technological capabilities," said lead author Martin Hilbert of the USC Annenberg School for Communication & Journalism. "This is the first time-series study to quantify humankind's ability to handle information."

So how much information is there in the world? How much has it grown?

Prepare for some big numbers:

* Looking at both digital memory and analog devices, the researchers calculate that humankind is able to store at least 295 exabytes of information. (Yes, that's a number with 20 zeroes in it.)

Put another way, if a single star is a bit of information, that's a galaxy of information for every person in the world. That's 315 times the number of grains of sand in the world. But it's still less than one percent of the information that is stored in all the DNA molecules of a human being.
* 2002 could be considered the beginning of the digital age: it was the first year worldwide digital storage capacity overtook total analog capacity. As of 2007, almost 94 percent of our memory was in digital form.
* In 2007, humankind successfully sent 1.9 zettabytes of information through broadcast technology such as televisions and GPS. That's equivalent to every person in the world reading 174 newspapers every day.
* On two-way communications technology, such as cell phones, humankind shared 65 exabytes of information through telecommunications in 2007, the equivalent of every person in the world communicating the contents of six newspapers every day (a rough check of these equivalences appears after this list).
* In 2007, all the general-purpose computers in the world computed 6.4 x 10^18 instructions per second, in the same general order of magnitude as the number of nerve impulses executed by a single human brain per second. Carrying out these instructions by hand would take 2,200 times the period since the Big Bang.
* From 1986 to 2007, the period examined in the study, worldwide computing capacity grew 58 percent a year, ten times faster than the United States' GDP. Telecommunications grew 28 percent annually, and storage capacity grew 23 percent a year.
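
A quick back-of-the-envelope script shows how these equivalences hang together; the world-population figure and the implied newspaper size are our assumptions, derived from the article's numbers rather than taken from the study itself.

```python
# Back-of-the-envelope check of the equivalences above. The 2007 world
# population and the implied newspaper size are assumptions.
ZETTA, EXA = 1e21, 1e18
population = 6.6e9        # assumed world population in 2007
days = 365

broadcast_per_day = 1.9 * ZETTA / population / days   # bytes/person/day
telecom_per_day = 65 * EXA / population / days

print(f"broadcast: {broadcast_per_day / 1e6:.0f} MB/person/day, "
      f"i.e. 174 newspapers of ~{broadcast_per_day / 174 / 1e6:.1f} MB each")
print(f"telecom:   {telecom_per_day / 1e6:.1f} MB/person/day, "
      f"i.e. 6 newspapers of ~{telecom_per_day / 6 / 1e6:.1f} MB each")
# Both equivalences imply the same ~4.5 MB newspaper, so they are consistent.

years = 2007 - 1986
print(f"computing capacity at 58%/yr over {years} years: ~{1.58**years:,.0f}x growth")
```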

Source: How Much Information Is There in the World

Friday, December 31, 2010

Happy New Year 2011 !!

A Robot With Finger-Tip Sensitivity !!


Two arms, three cameras, finger-tip sensitivity and a variety of facial expressions -- these are the distinguishing features of the pi4-workerbot. Similar in size to a human being, it can be employed at any modern workstation in an industrial manufacturing environment. Its purpose is to help keep European production competitive.

Dr.-Ing. Dragoljub Surdilovic, head of the working group at the Fraunhofer Institute for Production Systems and Design Technology IPK in Berlin, says: "We developed the workerbot to be roughly the same size as a human being." This means it can be employed at any modern standing or sitting workstation in an industrial manufacturing environment.

The robot is equipped with three cameras. A state-of-the-art 3D camera in its forehead captures its general surroundings, while the two others are used for inspection purposes. The workerbot can perform a wide range of tasks. Matthias Krinke, Managing Director of pi4-Robotics, the company bringing the workerbot onto the market, explains: "It can measure objects or inspect a variety of surfaces." For example, the robot can determine whether the chromium coating on a workpiece has been applied perfectly by studying how light reflects off the material. Krinke adds: "If you use two different cameras, it can inspect one aspect with its left eye, and another with its right." The workerbot is also capable of inspecting components over a continuous 24-hour period -- an important advantage when precision is of the utmost importance, such as in the field of medical technology, where a defective part can, in the worst case, endanger human life.
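
As a rough illustration of the reflectance-inspection idea (a sketch of the general technique, not pi4's actual software): under fixed illumination a uniform coating reflects light evenly, so a simple check can flag pixels whose brightness deviates from the mean. The tolerance values here are assumed.

```python
import numpy as np

# Minimal reflectance-uniformity check (illustrative only). A uniform
# chromium coating reflects evenly, so pixels deviating strongly from
# the mean brightness are flagged as possible coating defects.
def inspect_coating(image: np.ndarray, rel_tolerance: float = 0.15) -> bool:
    """Return True if the coating looks uniform. `image` is a 2-D
    grayscale array; both thresholds are assumed values."""
    mean = image.mean()
    deviation = np.abs(image - mean) / mean
    defect_fraction = (deviation > rel_tolerance).mean()
    return defect_fraction < 0.01   # assumed: <1% outlier pixels passes

# Toy example: a bright uniform surface with one dull (poorly coated) patch.
surface = np.full((100, 100), 200.0)
surface[40:60, 40:60] = 120.0       # simulated defect
print("pass" if inspect_coating(surface) else "fail")  # -> fail
```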

Another distinctive feature of the pi4-workerbot is its two arms. "This allows it to carry out new kinds of operations," says Surdilovic. "These robots can transfer a workpiece from one hand to the other." This is useful, for instance, for observing complex components from all angles. The Fraunhofer researcher continues: "Conventional robotic arms generally only have one swivel joint at the shoulder; all their other joints are articulated. In other words, they have six degrees of freedom, not seven like a human arm." In addition to the swivel joint at its shoulder, however, the workerbot has an extra rotation facility corresponding to the wrist on a human body. Surdilovic's working group developed the control system for the workerbot. He recalls: "Programming the two arms to work together -- for example, to inspect a workpiece or assemble two components -- was a real challenge. It requires additional sensor systems."
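
The payoff of the seventh joint can be seen in a few lines of linear algebra: a 6-dimensional task (position plus orientation) leaves a 7-joint arm with a one-dimensional null space of "self-motions" that move the elbow without disturbing the workpiece. The sketch below uses generic random Jacobians purely for illustration, not the IPK control code.

```python
import numpy as np

# Illustrative sketch: the task space of a gripper pose has 6 dimensions
# (3 position + 3 orientation). The null space of the task Jacobian
# gives the redundant "self-motions" available at a configuration.
rng = np.random.default_rng(0)
J6 = rng.standard_normal((6, 6))   # generic 6-DOF arm Jacobian
J7 = rng.standard_normal((6, 7))   # generic 7-DOF arm Jacobian

def null_space_dim(J: np.ndarray) -> int:
    return J.shape[1] - np.linalg.matrix_rank(J)

print("6-DOF arm redundant motions:", null_space_dim(J6))  # -> 0
print("7-DOF arm redundant motions:", null_space_dim(J7))  # -> 1
```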

Source: A Robot With Finger-Tip Sensitivity

Saturday, November 20, 2010

Quantum Simulator and Supercomputer at the Crossroads !!


Scientists in an international collaboration have measured for the first time a many-body phase diagram with ultracold atoms in optical lattices at finite temperatures.

Transitions between different phases of matter are a phenomenon occurring in everyday life. For example, water -- depending on its temperature -- can take the form of a solid, a liquid or a gas. The circumstances that lead to the phase transition of a substance are of fundamental interest for understanding emergent quantum phenomena in a many-particle system. In this respect, the ability to study phase transitions between novel states of matter with ultracold atoms in optical lattices has raised hopes of answering open questions in condensed matter physics. MPQ-LMU scientists led by Prof. Immanuel Bloch, in collaboration with physicists in Switzerland, France, the United States and Russia, have now for the first time determined the phase diagram of an interacting many-particle system at finite temperatures.

Employing state-of-the-art numerical quantum Monte Carlo methods implemented on a supercomputer, it was possible to validate the measurements and the strategies used to extract the relevant information from them. This exemplary benchmarking provides an important milestone on the way towards quantum simulations with ultracold atoms in optical lattices beyond the reach of numerical methods and present-day supercomputers.

In the experiments, a sample of up to 300,000 "bosonic" rubidium atoms was cooled down to a temperature close to absolute zero -- approximately minus 273°C. At such low temperatures, all atoms in the ultracold gas tend to behave exactly the same, forming a new state of matter known as a Bose-Einstein condensate (BEC). Once this state is reached, the researchers "shake" the atoms to intentionally heat them up again, thereby controlling the temperature of the gas to better than one hundredth of a millionth of a degree. The gas prepared in this way -- still ultracold, though not as cold as before -- is then loaded into a three-dimensional optical lattice. Such a lattice is created by three mutually orthogonal standing waves of laser light, forming "a crystal of light" in which the atoms are trapped. Much like electrons in a real solid, the atoms can move within the lattice and interact with each other repulsively. It is this analogy that has sparked vast interest in the field, since it allows for the study of complex condensed matter phenomena in a tunable system without defects.

When loaded into the optical lattice, the atoms can arrange themselves in three different phases depending on their temperature, their mobility and the strength of the repulsion between them. If the strength of the repulsion between the atoms is much larger than their mobility, a so-called Mott insulator will form at zero temperature, in which the atoms are pinned to their lattice sites. As the mobility increases, a quantum phase transition is crossed towards a superfluid phase in which the wave functions of the atoms are delocalized over the whole lattice. The superfluid phase exists up to a transition temperature above which a normal gas forms. This temperature tends to absolute zero as the phase transition between the superfluid and the Mott insulator is approached -- a feature typical of the vicinity of a quantum phase transition.
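
The three regimes can be summarized in a toy classifier. This is purely illustrative: the critical ratio is the commonly quoted theoretical value for the 3-D Bose-Hubbard model at unit filling, while the temperature scales are assumed shapes, not the measured phase diagram.

```python
# Toy classifier for the three regimes described above. U is the
# on-site repulsion, J the hopping (mobility), T the temperature in
# units of J. All numerical scales are assumptions for illustration.
U_OVER_J_CRITICAL = 29.3   # commonly quoted 3-D Bose-Hubbard critical point

def phase(U_over_J: float, T_over_J: float) -> str:
    if U_over_J > U_OVER_J_CRITICAL:
        # assumed melting scale for the Mott insulator
        return "Mott insulator" if T_over_J < 1.0 else "normal gas"
    # superfluid Tc shrinks to zero near the quantum critical point
    Tc = 6.0 * (1 - U_over_J / U_OVER_J_CRITICAL) ** 0.5   # assumed toy form
    return "superfluid" if T_over_J < Tc else "normal gas"

for U, T in [(5, 1.0), (5, 8.0), (40, 0.5)]:
    print(f"U/J={U}, T/J={T}: {phase(U, T)}")
```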

In order to determine the phase of the atoms in the experiments, the atoms are released instantaneously from the optical lattice. According to the laws of quantum mechanics, a matter wave then expands from each of the lattice sites, much like electromagnetic waves expanding from an array of light sources. And as in the latter case, an interference pattern emerges that reflects the coherence properties of the array of sources.

It is this coherence information that the scientists examine in order to read out the many-body phase of the atoms in the artificial crystal: the normal gas in the lattice shows little coherence, so almost no interference pattern is visible after the atoms are released. The superfluid, however, exhibits long-range phase coherence, which results in sharp interference peaks. By determining the temperature at which these defined structures first appear for various ratios of interaction strength and mobility, the researchers could map out the complete phase boundary between the superfluid and the normal gas.
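
A short numerical sketch captures the readout principle (our illustration, with an assumed site count and wavenumber): summing matter waves that share a common phase yields sharp peaks, while random site phases wash the pattern out.

```python
import numpy as np

# Sketch of the time-of-flight readout idea: matter waves from N lattice
# sites interfere on a detector. A common phase (superfluid-like) gives
# sharp peaks; random site phases (normal-gas-like) give low contrast.
N_SITES, k = 20, 2 * np.pi          # assumed site count and wavenumber
x = np.linspace(-3, 3, 1000)        # detector coordinate, arbitrary units
sites = np.arange(N_SITES)

def pattern(phases: np.ndarray) -> np.ndarray:
    # sum the wave amplitude from every site, then take the intensity
    amp = np.exp(1j * (k * np.outer(x, sites) + phases)).sum(axis=1)
    return np.abs(amp) ** 2

coherent = pattern(np.zeros(N_SITES))                                 # superfluid-like
incoherent = pattern(np.random.default_rng(1).uniform(0, 2 * np.pi, N_SITES))

print(f"peak/mean contrast, coherent:   {coherent.max() / coherent.mean():.1f}")
print(f"peak/mean contrast, incoherent: {incoherent.max() / incoherent.mean():.1f}")
```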

Read more: Quantum Simulator and Supercomputer at the Crossroads

Tuesday, November 16, 2010

'Racetrack' Magnetic Memory Could Make Computer Memory 100,000 Times Faster !!


Imagine a computer equipped with shock-proof memory that's 100,000 times faster and consumes less power than current hard disks. EPFL Professor Mathias Kläui is working on a new kind of "Racetrack" memory, a high-volume, ultra-rapid non-volatile read-write magnetic memory that may soon make such a device possible.

Annoyed by how long it took his computer to boot up, Kläui began to think about an alternative. Hard disks are cheap and can store enormous quantities of data, but they are slow; every time a computer boots up, 2-3 minutes are lost while information is transferred from the hard disk into RAM (random access memory). The global cost in terms of lost productivity and energy consumption runs into the hundreds of millions of dollars a day.

Like the tried-and-true VHS videocassette, the proposed solution involves data recorded on magnetic tape. But the similarity ends there: in this system the tape would be a nickel-iron nanowire, a million times smaller than classic tape. And unlike a magnetic videotape, nothing in this system moves mechanically. The bits of information stored in the wire are simply pushed around inside it by a spin-polarized current, attaining breakneck speeds of several hundred meters per second in the process. It's like reading an entire VHS cassette in less than a second.

In order for the idea to be feasible, each bit of information must be clearly separated from the next so that the data can be read reliably. This is achieved by using domain walls with magnetic vortices to delineate two adjacent bits. To estimate the maximum velocity at which the bits can be moved, Kläui and his colleagues* carried out measurements on vortices and found that the physical mechanism could allow higher access speeds than expected.
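
Some rough arithmetic shows why such domain-wall velocities translate into "a VHS cassette in under a second". The bit spacing, tape payload and wire count here are assumed values, not figures from the paper.

```python
# Rough arithmetic behind the readout-speed claim. The domain-wall
# velocity is the article's "several hundred meters per second"; the
# bit spacing, payload and parallelism are assumed for illustration.
wall_velocity = 300.0      # m/s
bit_spacing = 100e-9       # m, assumed distance between adjacent domain walls
rate = wall_velocity / bit_spacing
print(f"per-wire readout: {rate / 1e9:.0f} Gbit/s")

vhs_payload_bits = 3e9 * 8          # assumed ~3 GB equivalent for a 2-hour tape
wires_in_parallel = 100             # assumed; a chip would embed millions
t = vhs_payload_bits / rate / wires_in_parallel
print(f"VHS-sized payload over {wires_in_parallel} parallel wires: {t * 1000:.0f} ms")
```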

Their results were published online October 25, 2010, in the journal Physical Review Letters. Scientists at the Zurich Research Center of IBM (which is developing a racetrack memory) have confirmed the importance of the results in a Viewpoint article. Millions or even billions of nanowires would be embedded in a chip, providing enormous capacity on a shock-proof platform. A market-ready device could be available in as little as 5-7 years.

Racetrack memory promises to be a real breakthrough in data storage and retrieval. Racetrack-equipped computers would boot up instantly, and their information could be accessed 100,000 times more rapidly than with a traditional hard disk. They would also save energy. RAM needs to be powered every millionth of a second, so an idle computer consumes up to 300 mW just maintaining data in RAM.

Because Racetrack memory doesn't have this constraint, energy consumption could be slashed by nearly a factor of 300, to a few mW while the memory is idle. It's an important consideration: computing and electronics currently consume 6% of worldwide electricity, a share forecast to rise to 15% by 2025.
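
The idle-power claim is easy to check with the article's own figures; the yearly-energy comparison for a single always-idle machine is our illustration.

```python
# Idle-power arithmetic from the figures above. The 300 mW figure and
# the factor-of-300 reduction come from the article; the yearly totals
# are an illustrative extrapolation.
ram_idle = 0.300                    # W, maintaining data in idle RAM
racetrack_idle = ram_idle / 300     # "slashed by nearly a factor of 300"
print(f"racetrack idle power: {racetrack_idle * 1000:.0f} mW")

hours_per_year = 24 * 365
print(f"RAM idle energy:       {ram_idle * hours_per_year / 1000:.1f} kWh/yr")
print(f"racetrack idle energy: {racetrack_idle * hours_per_year:.1f} Wh/yr")
```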

Read more: 'Racetrack' Magnetic Memory Could Make Computer Memory 100,000 Times Faster !!