Sunday, July 31, 2011

Google Ups Ante with 1,000 Patents from IBM


Google has acquired more than 1,000 patents from IBM in order to pad its portfolio. Patent litigation is a theater of the absurd in most cases, but it has evolved into a standard business practice among tech companies, and Google needs more fodder to defend itself.

Once upon a time, a patent had a purpose. Someone who creates a unique process or an innovative product should be rewarded for his or her efforts, and that accomplishment should be safeguarded from simply being copied or stolen by rivals.

When it comes to patents today, though, is any of it really unique or innovative anymore? Tech patents seem to be predominantly vague and over-reaching. The intent is to be ambiguous enough about what exactly the patent covers that it can be applied to virtually anything, whether you choose to instigate a patent infringement lawsuit or end up needing to defend yourself against one.

Google's general counsel, Kent Walker, recognizes that patent litigation is absurd. He recently stated that the plague of patent infringement lawsuits is stifling innovation, and that companies are using their patent portfolios to bully rivals and prevent products or services from competing.

Of course, Walker finished that argument by using it as a justification for why Google has to get in on the patent portfolio game. Google is a relative new kid on the block compared with its major rivals, and it doesn't have the depth of patents necessary to adequately defend itself from companies like Apple or Microsoft.

Florian Mueller, a technology patent and intellectual property analyst, shared with me via email, "In my opinion the root cause of the problem is that politicians believe larger numbers of patents granted by a patent office correspond to more innovation. If the economy or even just the tech sector had grown at a rate anywhere near the rate at which the numbers of patent applications and grants increased over the last 10 to 15 years, we'd be living in a period of unprecedented growth."

Mueller explains that it is a complex catch-22. The patent system is broken to some extent, but drafting a solution that can somehow differentiate between desirable and undesirable patents in a way that patent examiners, judges, or juries can easily understand and apply consistently is virtually impossible. He sums up with, "Any major change would inevitably come with substantial collateral damage and screaming protest from those who see themselves affected by any such proposal."
Read more: http://goo.gl/4MPKD

Tuesday, May 31, 2011

Nanowire Measurements Could Improve Computer Memory

The nascent memory technology, under study by researchers at the National Institute of Standards and Technology (NIST) and George Mason University (GMU), is based on silicon formed into tiny wires, approximately 20 nanometers in diameter. These "nanowires" form the basis of memory that is non-volatile, holding its contents even while the power is off -- just like the flash memory in USB thumb drives and many MP3 players. Such nanowire devices are being studied extensively as a possible basis for next-generation computer memory because they promise to store information faster and at lower voltage.

Nanowire memory devices also hold an additional advantage over flash memory, which despite its uses is unsuitable for one of the most crucial memory banks in a computer: the local cache memory in the central processor.

"Cache memory stores the information a microprocessor is using for the task immediately at hand," says NIST physicist Curt Richter. "It has to operate very quickly, and flash memory just isn't fast enough. If we can find a fast, non-volatile form of memory to replace what chips currently use as cache memory, computing devices could gain even more freedom from power outlets -- and we think we've found the best way to help silicon nanowires do the job."

While the research team is by no means the only group in the world working on nanowires, they took advantage of NIST's measurement expertise to determine the best way to design charge-trapping memory devices based on nanowires, which must be surrounded by thin layers of material, called dielectrics, that store electrical charge. By using a combination of software modeling and electrical device characterization, the NIST and GMU team explored a wide range of structures for the dielectrics. Based on the understanding they gained, Richter says, an optimal device can be designed.
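The article doesn't say how the NIST and GMU researchers modeled the dielectric stack, but a rough way to see why the layer geometry around the wire matters is to treat each dielectric layer as a cylindrical shell and add the shells as coaxial capacitors in series. The sketch below is purely illustrative: the layer thicknesses and materials are my own assumptions, not values from the study.

```python
# Rough coaxial-capacitor estimate, NOT the NIST/GMU model: each dielectric layer
# wrapped around the nanowire is treated as a cylindrical shell, and the shells
# add in series. Layer thicknesses and permittivities are illustrative guesses.
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def gate_capacitance_per_length(r_inner, layers):
    """layers: list of (thickness_m, relative_permittivity), from the wire outward."""
    inv_c = 0.0
    r = r_inner
    for thickness, eps_r in layers:
        r_out = r + thickness
        inv_c += math.log(r_out / r) / (2 * math.pi * EPS0 * eps_r)
        r = r_out
    return 1.0 / inv_c  # farads per metre of wire length

# 20 nm diameter wire (10 nm radius) with a hypothetical tunnel-oxide /
# charge-trapping-nitride / blocking-oxide stack.
stack = [(3e-9, 3.9),   # SiO2 tunnel layer
         (5e-9, 7.5),   # Si3N4 charge-trapping layer
         (6e-9, 3.9)]   # SiO2 blocking layer
c = gate_capacitance_per_length(10e-9, stack)
print(f"gate capacitance ~ {c * 1e9:.2f} nF per metre of wire")
```

Varying the assumed thicknesses and permittivities in a loop is the back-of-the-envelope version of the design-space exploration described above.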

Read more: http://goo.gl/iOlnA

Friday, February 25, 2011

Single Photon Management for Quantum Computers !!


The quantum computers of tomorrow might use photons, or particles of light, to move around the data they need to make calculations, but photons are tricky to work with. Two new papers by researchers working at the National Institute of Standards and Technology (NIST) have brought science closer to creating reliable sources of photons for these long-heralded devices.

In principle, quantum computers can perform calculations that are impossible or impractical using conventional computers by taking advantage of the peculiar rules of quantum mechanics. To do this, they need to operate on things that can be manipulated into specific quantum states. Photons are among the leading contenders.

The new NIST papers address one of the many challenges to a practical quantum computer: the need for a device that produces photons in ready quantities, but only one at a time, and only when the computer's processor is ready to receive them. Just as garbled data will confuse a standard computer, an information-bearing photon that enters a quantum processor together with other particles -- or when the processor is not expecting it -- can ruin a calculation.

The single-photon source has been elusive for nearly two decades, in part because no method of producing these particles individually is ideal. "It's a bit like playing a game of whack-a-mole, where solving one problem creates others," says Alan Migdall of NIST's Optical Technology Division. "The best you can do is keep all the issues under control somewhat. You can never get rid of them."

The team's first paper addresses the need to be certain that a photon is indeed coming when the processor is expecting it, and that none show up unexpectedly. Many kinds of single-photon sources create a pair of photons and send one of them to a detector, which tips off the processor to the fact that the second, information-bearing photon is on its way. But since detectors are not completely accurate, sometimes they miss the "herald" photon -- and its twin zips into the processor, gumming up the works.

The team, in collaboration with researchers from the Italian metrology laboratory L'Istituto Nazionale di Ricerca Metrologica (INRIM), handled the issue by building a simple gate into the source. When a herald photon reaches the detector, the gate opens, allowing the second photon to pass. "You get a photon when you expect one, and you don't get one when you don't," Migdall says. "It was an obvious solution; others proposed it long ago, we were just the first ones to build it. It makes the single photon source better."
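To get a feel for why the gate matters, here is a toy simulation of a heralded pair source (my own sketch, not the NIST/INRIM setup): the herald detector only catches a fraction of the herald photons, and without a gate every missed herald means an unannounced photon reaching the processor. The 60 percent detector efficiency is an arbitrary assumption.

```python
# Toy Monte Carlo of a heralded pair source, NOT the NIST/INRIM analysis.
# Each trial creates a photon pair; the herald detector fires with some
# efficiency. Without a gate, a missed herald lets the twin photon reach the
# processor unannounced; with the gate, it is simply blocked.
import random

random.seed(1)

def run(n_pairs, herald_efficiency, gated):
    announced, unannounced = 0, 0
    for _ in range(n_pairs):
        herald_detected = random.random() < herald_efficiency
        if herald_detected:
            announced += 1        # processor is warned and the gate opens
        elif not gated:
            unannounced += 1      # twin photon slips in with no warning
        # gated and herald missed: the gate stays shut, the photon is discarded
    return announced, unannounced

for gated in (False, True):
    ann, unann = run(100_000, herald_efficiency=0.6, gated=gated)
    print(f"gated={gated}: announced photons={ann}, unannounced leak-through={unann}")
```

The gated source delivers fewer photons overall, but every photon that does arrive is one the processor was told to expect -- which is exactly the trade-off Migdall describes.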

Read More: Single Photon Management for Quantum Computers

Saturday, February 12, 2011

How Much Information Is There in the World !!


A study reported in Science Express, an electronic journal that provides select Science articles ahead of print, calculates the world's total technological capacity -- how much information humankind is able to store, communicate and compute.

"We live in a world where economies, political freedom and cultural growth increasingly depend on our technological capabilities," said lead author Martin Hilbert of the USC Annenberg School for Communication & Journalism. "This is the first time-series study to quantify humankind's ability to handle information."

So how much information is there in the world? How much has it grown?

Prepare for some big numbers:

* Looking at both digital memory and analog devices, the researchers calculate that humankind is able to store at least 295 exabytes of information. (Yes, that's 295 followed by 18 zeroes.)

Put another way, if a single star is a bit of information, that's a galaxy of information for every person in the world. That's 315 times the number of grains of sand in the world. But it's still less than one percent of the information that is stored in all the DNA molecules of a human being.
* 2002 could be considered the beginning of the digital age, the first year worldwide digital storage capacity overtook total analog capacity. As of 2007, almost 94 percent of our memory is in digital form.
* In 2007, humankind successfully sent 1.9 zettabytes of information through broadcast technology such as televisions and GPS. That's equivalent to every person in the world reading 174 newspapers every day.
* On two-way communications technology, such as cell phones, humankind shared 65 exabytes of information through telecommunications in 2007, the equivalent of every person in the world communicating the contents of six newspapers every day.
* In 2007, all the general-purpose computers in the world computed 6.4 x 10^18 instructions per second, in the same general order of magnitude as the number of nerve impulses executed by a single human brain. Doing these instructions by hand would take 2,200 times the period since the Big Bang.
* From 1986 to 2007, the period of time examined in the study, worldwide computing capacity grew 58 percent a year, ten times faster than the United States' GDP.

Telecommunications grew 28 percent annually, and storage capacity grew 23 percent a year.
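For readers who like to check the arithmetic, here is a quick back-of-the-envelope pass over the figures above. The roughly 6.6 billion world population for 2007 is my own assumption, not a number from the study.

```python
# Quick arithmetic on the figures quoted above. The ~6.6 billion world
# population for 2007 is an assumption, not a number from the study.
EXA, ZETTA = 10**18, 10**21

total_storage_bytes = 295 * EXA      # "at least 295 exabytes" of storage
broadcast_bytes_2007 = 1.9 * ZETTA   # broadcast information in 2007
population_2007 = 6.6e9              # assumed

print(f"storage per person ~ {total_storage_bytes / population_2007 / 1e9:.0f} GB")
print(f"broadcast per person per day ~ {broadcast_bytes_2007 / population_2007 / 365 / 1e6:.0f} MB")

# Compound growth check: 58 percent per year over 1986-2007 (21 years)
print(f"computing capacity multiplied roughly {1.58 ** 21:,.0f}-fold over the period")
```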

Source: How Much Information Is There in the World

Friday, December 31, 2010

Happy New Year 2011 !!

A Robot With Finger-Tip Sensitivity !!


Two arms, three cameras, finger-tip sensitivity and a variety of facial expressions -- these are the distinguishing features of the pi4-workerbot. Similar in size to a human being, it can be employed at any modern workstation in an industrial manufacturing environment. Its purpose is to help keep European production competitive.

Dr.-Ing. Dragoljub Surdilovic, head of the working group at the Fraunhofer Institute for Production Systems and Design Technology IPK in Berlin, says: "We developed the workerbot to be roughly the same size as a human being." This means it can be employed at any modern standing or sitting workstation in an industrial manufacturing environment.

The robot is equipped with three cameras. A state-of-the-art 3D camera in its forehead captures its general surroundings, while the two others are used for inspection purposes. The workerbot can perform a wide range of tasks. Matthias Krinke, Managing Director of pi4-Robotics, the company that is bringing the workerbot onto the market, explains: "It can measure objects or inspect a variety of surfaces." To give an example, the robot can identify whether or not the chromium coating on a workpiece has been applied perfectly by studying how light reflects off the material. Krinke adds: "If you use two different cameras, it can inspect one aspect with its left eye, and another with its right." The workerbot is also capable of inspecting components over a continuous 24-hour period -- an important advantage in fields where precision is paramount, such as medical technology, where a defective part can, in the worst case, endanger human life.

Another distinctive feature of the pi4-workerbot is that it has two arms. "This allows it to carry out new kinds of operations," says Surdilovic. "These robots can transfer a workpiece from one hand to the other." Useful, for instance, for observing complex components from all angles. The Fraunhofer researcher continues: "Conventional robotic arms generally only have one swivel joint at the shoulder; all their other joints are articulated. In other words, they have six degrees of freedom, not seven like a human arm." However, as well as the swivel joint at its shoulder, the workerbot has an additional rotation facility which corresponds to the wrist on a human body. Surdilovic's working group developed the control system for the workerbot. He recalls: "Programming the two arms to work together -- for example, to inspect a workpiece or assemble two components -- was a real challenge. It requires additional sensor systems."
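One way to see what the seventh joint buys you: a hand pose is described by six numbers, so a seven-joint arm has one "spare" degree of freedom, and the elbow can keep moving while the hand stays put. The sketch below illustrates this with a stand-in Jacobian matrix; it is not the Fraunhofer IPK control system.

```python
# Illustration of kinematic redundancy, NOT the Fraunhofer IPK controller. An
# end-effector pose has 6 degrees of freedom, so the 6x7 Jacobian of a 7-joint
# arm has a one-dimensional null space: joint velocities along that direction
# re-orient the "elbow" while leaving the hand velocity (to first order) at zero.
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((6, 7))   # stand-in for a 7-joint arm's Jacobian

_, s, vt = np.linalg.svd(J)       # last row of vt spans the null space
null_dir = vt[-1]

print("singular values:", np.round(s, 3))
print("hand velocity for null-space joint motion:", np.round(J @ null_dir, 12))
```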

Sources: A Robot With Finger-Tip Sensitivity

Saturday, November 20, 2010

Quantum Simulator and Supercomputer at the Crossroads !!


Scientists in an international collaboration have measured, for the first time, a many-body phase diagram with ultracold atoms in optical lattices at finite temperature.

Transitions between different phases of matter are a phenomenon occurring in everyday life. For example, water -- depending on its temperature -- can take the form of a solid, a liquid or a gas. The circumstances that lead to the phase transition of a substance are of fundamental interest for understanding the emergent quantum phenomena of a many-particle system. In this respect, the ability to study phase transitions between novel states of matter with ultracold atoms in optical lattices has raised hopes of answering open questions in condensed matter physics. MPQ-LMU scientists led by Prof. Immanuel Bloch, in collaboration with physicists in Switzerland, France, the United States and Russia, have now for the first time determined the phase diagram of an interacting many-particle system at finite temperatures.

Employing state-of-the-art numerical quantum "Monte Carlo" methods implemented on a supercomputer, it was possible to validate the measurements and the strategies used to extract the relevant information from them. This exemplary benchmarking provides an important milestone on the way towards quantum simulations with ultracold atoms in optical lattices beyond the reach of numerical methods and present-day supercomputers.

In the experiments, a sample of up to 300,000 "bosonic" rubidium atoms was cooled down to a temperature close to absolute zero -- approximately minus 273°C. At such low temperatures, all atoms in the ultracold gas tend to behave exactly the same, forming a new state of matter known as a Bose-Einstein condensate (BEC). Once this state is reached, the researchers "shake" the atoms to intentionally heat them up again, thereby controlling the temperature of the gas to better than one hundredth of a millionth of a degree. The gas prepared this way -- still ultracold, though not as cold as before -- is then loaded into a three-dimensional optical lattice. Such a lattice is created by three mutually orthogonal standing waves of laser light, forming "a crystal of light" in which the atoms are trapped. Much like electrons in a real solid, the atoms can move within the lattice and interact with each other repulsively. It is this analogy that has sparked a vast interest in this field, since it allows for the study of complex condensed matter phenomena in a tunable system without defects.

When loaded into the optical lattice, the atoms can arrange themselves in three different phases depending on their temperature, their mobility and the strength of the repulsion between them. If the strength of the repulsion between the atoms is much larger than their mobility, a so-called Mott insulator forms at zero temperature, in which the atoms are pinned to their lattice sites. If the mobility increases, a quantum phase transition is crossed towards a superfluid phase in which the wave functions of the atoms are delocalized over the whole lattice. The superfluid phase exists up to a transition temperature above which a normal gas is formed. This temperature tends to absolute zero as the phase transition between the superfluid and the Mott insulator is approached -- a feature which is typical in the vicinity of a quantum phase transition.
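For reference, the textbook zero-temperature mean-field (decoupling) approximation already captures the lobe-shaped boundary between the Mott insulator and the superfluid as a function of mobility and repulsion. The sketch below evaluates that standard formula; it is not the quantum Monte Carlo method used in the study.

```python
# Textbook zero-temperature mean-field (decoupling) result for the Bose-Hubbard
# model, NOT the quantum Monte Carlo method of the study. With x = zJ/U (mobility
# over repulsion, z = number of neighbours), the n-th Mott lobe is bounded by
#   mu(+/-)/U = [ (2n - 1) - x +/- sqrt(x^2 - 2(2n + 1)x + 1) ] / 2
# and closes at x_c = 2n + 1 - 2*sqrt(n*(n + 1)); beyond that the atoms are superfluid.
import math

def mott_lobe_boundary(x, n=1):
    disc = x * x - 2 * (2 * n + 1) * x + 1
    if disc < 0:
        return None               # inside the superfluid region
    root = math.sqrt(disc)
    return ((2 * n - 1) - x - root) / 2, ((2 * n - 1) - x + root) / 2

print(f"tip of the n=1 Mott lobe at zJ/U = {3 - 2 * math.sqrt(2):.3f}")
for x in (0.0, 0.05, 0.10, 0.15):
    lo_mu, hi_mu = mott_lobe_boundary(x)
    print(f"zJ/U = {x:.2f}: Mott insulator for {lo_mu:.3f} < mu/U < {hi_mu:.3f}")
```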

In order to determine the phase of the atoms in the experiments, they are instantaneously released from the optical lattice. Now, according to the laws of quantum mechanics, a matter wave expands from each of the lattice sites, much like electromagnetic waves expanding from an array of light sources. And as in the latter case, an interference pattern emerges that reflects the coherence properties of the array of sources.

It is this coherence information that the scientists examine in order to read out the many-body phase of the atoms in the artificial crystal: the normal gas in the lattice shows little coherence, and almost no interference pattern is visible after the atoms are released. The superfluid, however, exhibits long-range phase coherence, which results in sharp interference peaks. By determining the temperature at which these defined structures first appear for various ratios of interaction strength and mobility, the researchers could map out the complete phase boundary between the superfluid and the normal gas.
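The logic of the measurement can be mimicked with a few lines of code: sum the matter waves from an array of sources that either share a common phase (superfluid-like) or carry random phases (normal-gas-like) and compare the resulting interference patterns. This is my own illustration, not the MPQ analysis.

```python
# My own illustration, NOT the MPQ analysis: matter waves released from a 1D
# array of lattice sites. With a common phase (superfluid-like) the fields add
# up into sharp peaks; with random phases (normal-gas-like) the pattern washes out.
import numpy as np

rng = np.random.default_rng(42)
n_sites, spacing = 30, 1.0
positions = spacing * np.arange(n_sites)
k = np.linspace(-3 * np.pi, 3 * np.pi, 1000)   # observation wavevectors

def intensity(phases):
    field = np.exp(1j * (np.outer(k, positions) + phases)).sum(axis=1)
    return np.abs(field) ** 2 / n_sites

coherent = intensity(np.zeros(n_sites))                      # long-range coherence
incoherent = intensity(rng.uniform(0, 2 * np.pi, n_sites))   # no coherence

print(f"coherent peak-to-average ratio   ~ {coherent.max() / coherent.mean():.1f}")
print(f"incoherent peak-to-average ratio ~ {incoherent.max() / incoherent.mean():.1f}")
```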

Read more: Quantum Simulator and Supercomputer at the Crossroads