29.6.07

> PC Magazine details five ideas that will change computing.

1 comment:

Andrés Hax said...

Five Ideas That Will Reinvent Modern Computing
ARTICLE DATE: 06.20.07

By Cade Metz and Jamie Bsales
What's in the works at the leading high-tech research labs? Some awfully cool stuff—to say the least. This spring, we checked in on five of our favorites—Bell Labs, HP Labs, IBM Research, Microsoft Research, and the granddaddy of them all: the Palo Alto Research Center (PARC), the former Xerox facility that spawned Ethernet, laser printing , the GUI operating system, and so much more.

These research powerhouses have gone through a fair number of changes in recent years—PARC is now a completely independent operation—but they continue to push the outside of the high-tech envelope. Here, we profile a particularly clever project from each one, showcasing five ideas that reinvent everything from pointing devices to artificial intelligence. Some could bear fruit in a matter of months. Others might need years. But all will pique your interest.


IMAX at Home
You thought LAN parties were fun? Get ready for the projector party. At HP Labs, Nelson Chang and Niranjan Damera-Venkata have spent the past few years developing a technology that reinvents the notion of a home theater. With Pluribus, you can build a cineplex-quality image using a handful of ordinary, $1,000 PC projectors—in less time than it takes to pop the popcorn.

For a mere $12,000, you could build a home theater that stands up to the $100,000 image at local movie houses. Better yet, you could throw a projector party. Twelve friends show up with 12 off-the-shelf projectors, and suddenly you've got a wall-size image none of you could hope to produce on your own. And this mega-display is good for more than just movies. It might be even better for 3D games.

Chang and Damera-Venkata describe their project as "cluster computing for projectors." In much the same way a cluster pools the resources of multiple PCs, duplicating the effect of a supercomputer, Pluribus pools the resources of multiple projectors. "We can take several less-expensive projectors and create a super-projector," says Chang. "Automatically accounting for differences between each device, it builds a single, stable image."


Pluribus can seamlessly "tile" images from multiple projectors, fitting them together like pieces of a jigsaw puzzle. Or it can superimpose images from multiple projectors, putting one atop the other. This vastly improves resolution, sharpness, brightness, contrast, and more, but it also gives you redundancy. If one projector breaks, you still have your full image. The real trick with Pluribus, however, is that it builds these über-images so quickly. You needn't spend hours adjusting the physical position of your projectors. You simply plop them down, plug them in, and point them in the general direction you'd like them to point. Pluribus does the rest, in minutes.

The system consists of an ordinary PC workstation, a camera, and some ingenious C code. In essence, the camera grabs a snapshot of the many images streaming from your projectors and feeds that snapshot back to the software. The software then adjusts each image so they all fit together, using Chang and Damera-Venkata's algorithms—mathematical models that stretch the limits of modern computer science. "People didn't think it could be done," Damera-Venkata says.
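HP hasn't published the Pluribus algorithms, but the feedback loop described above can be sketched for a single projector: have the projector display a known test pattern, let the camera find its corners on the wall, fit a homography between projector pixels and wall coordinates, and pre-warp every subsequent frame with the inverse mapping. The function and variable names below are hypothetical; this is a rough Python/OpenCV sketch of the idea, not HP's code.

```python
# Single-projector calibration sketch (illustrative only, not Pluribus).
import numpy as np
import cv2

def calibrate_projector(pattern_corners_px, detected_corners_wall):
    """Fit a homography mapping projector pixels to wall (camera) coordinates.

    pattern_corners_px:    corners of a test pattern, in projector pixels.
    detected_corners_wall: where the camera saw those corners on the wall.
    """
    H, _ = cv2.findHomography(np.float32(pattern_corners_px),
                              np.float32(detected_corners_wall))
    return H

def prewarp_frame(wall_frame, H, projector_size):
    """Warp wall-space content into projector pixel space so that, once
    projected, it lands on its assigned patch of the wall and lines up
    with the images from neighboring projectors."""
    return cv2.warpPerspective(wall_frame, np.linalg.inv(H), projector_size)
```

With one homography per projector, tiling and superimposing both reduce to choosing which region of the wall-space frame each unit is asked to cover.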

A gaming PC with dueling graphics cards can line up 12 projectors in as little as 5 minutes, producing a 16-by-9-foot image with 4,096-by-2,304 resolution. But the system can scale up to even larger images. As you add more PCs, you could, in theory, add as many projectors as you like. A true home theater is closer than you think.
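Those figures are consistent with a 4-by-3 tile of twelve ordinary XGA projectors: 4 projectors across at 1,024 pixels each gives 4,096, and 3 down at 768 each gives 2,304, over a wall with the same 16:9 shape. The article doesn't spell out the per-projector resolution, so treat that as one plausible reading of the numbers rather than HP's stated configuration.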

The Midair Mouse
Your brand-new wireless mouse? That solves only half the problem. Sure, you're untethered, free to drive your PC from afar. But you still need a flat surface. You may be camped out on the couch or curled up in bed, but you're never more than half an arm's length from an end table or a lap desk.

Soap goes one step further: It works in midair. With this new-age pointing device, now under development at Microsoft Research, you can navigate your PC using nothing but a bare hand. You can lose the end table and the lap desk. You can even lose the couch and the bed, driving your machine while walking across the room. It's a bit like the Wii remote—only more accurate and far easier to use.

Dreamed up by Patrick Baudisch, part of Microsoft's adaptive systems and interaction research group and an affiliate professor of computer science at the University of Washington, Soap is essentially a wireless optical mouse surrounded by a fabric hull. Think of it as a beanbag with some hardware inside. As you push the fabric back and forth, across the face of the mouse, the cursor moves on your PC display.

It's called Soap because it spins in your hand like a wet bar of soap in the shower. "Basically, it's a mouse and a mouse pad in the same device," Baudisch says. "But instead of moving your mouse over your mouse pad, you move your mouse pad over your mouse."

Baudisch built his original prototype using an optical mouse he found lying around the lab and a few household items he picked up from a local RiteAid. He simply pried open his pointing device, pulled out the innards, and remounted them inside an empty bottle of hand sanitizer. Yes, an empty bottle of hand sanitizer—something that was transparent and would easily rotate inside his fabric hull. Once he pulled the hull around the bottle—and slipped some lubricant between the two—he had a mouse that worked in midair.

Because the bottle moves independently of the hull, the mouse can sense relative motion, much as it does when dragged across a tabletop. "The optical sensor looks outward, so it can see the fabric moving," Baudisch continues, "and that's all the input you need." It's such a simple idea, but the results are astounding.
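Microsoft hasn't released Soap's internals, but the behavior described above is ordinary relative-motion mouse handling: the sensor reports how far the fabric slid since the last poll, and those deltas drive the cursor. A minimal sketch, with read_sensor_delta() standing in for a hypothetical function that polls the repackaged optical sensor:

```python
# Illustrative cursor update from relative motion (not Microsoft's code).
def update_cursor(x, y, read_sensor_delta, gain=1.5, screen_w=1920, screen_h=1080):
    dx, dy = read_sensor_delta()                   # fabric motion since last poll
    x = min(max(x + gain * dx, 0), screen_w - 1)   # clamp cursor to the display
    y = min(max(y + gain * dy, 0), screen_h - 1)
    return x, y
```

From the PC's point of view, the device looks like any other mouse; the novelty is in the mechanical packaging.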

Soap is so accurate that you can use it to play a high-speed first-person shooter. At this year's Computer/Human Interaction (CHI) conference in San Jose, California, Baudisch played his favorite games standing up, in open space, without a tabletop in sight. The only restriction was that he couldn't rotate upward (that is, he couldn't make his 3D character do a back flip).

Baudisch plans to add that extra degree of movement, and he hopes to eliminate the lubricant inside the hull—a feature less than conducive to mass production. But his Soap prototype works today, and it has the potential to give PC users a whole new level of physical freedom.

"You get the same functionality of a mouse," Baudisch says. "And it works with any PC or display—whether it's a pocket display or a wall display. The difference is that you can use it in the living room. Or in the classroom. Or even on the subway." — next: The Perfect Machine

The Perfect Machine
Yes, we're still waiting for a full-fledged quantum computer—a machine that uses the mind-bending principles of quantum physics to achieve processing speeds even today's supercomputers have no hope of reaching. But at Bell Labs, there's a new quantum computing project in the works, a method that could finally make this long-held dream a reality. "We're still 10 to 20 years away from a quantum computer," says Bell Labs' Steven Simon. "But we're getting closer and closer."

First proposed in the early eighties by the Nobel Prize-winning physicist Richard Feynman, the quantum computer is an idea that defies common sense. Whereas today's computers obey the classical physics that govern our everyday lives, a quantum computer relies on the seemingly magical physics associated with very small particles, such as atoms.

With a classical computer, transistors store bits of information, and each bit has a value of either 1 or 0. Turn a transistor on, for instance, and it represents a 1. Turn it off and it holds a 0. With a quantum computer, the classical bit gives way to something called a quantum bit, or qubit. A qubit is stored not in a transistor or some other classical system, but in a quantum system, such as the spin of an atom's nucleus. An "up" spin might indicate a 1, and a "down" spin might indicate a 0.

The trick is that, thanks to the superposition principle of quantum mechanics, a quantum system can exist in multiple states at the same time. At any given moment, the spin of a nucleus can be both up and down, holding values of both a 1 and a 0. Put two qubits together and they can hold four values simultaneously (00, 01, 10, and 11). That makes a quantum computer exponentially faster than the classical model—fast enough, for instance, to crack today's most secure encryption algorithms.
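None of the labs' code is public, but the counting argument is easy to restate numerically: a two-qubit register is described by four complex amplitudes, one per basis state, and reading it yields just one of those states. A toy classical simulation in Python:

```python
# Toy illustration of 2-qubit superposition and measurement (classical simulation).
import numpy as np

amps = np.ones(4, dtype=complex) / 2       # equal superposition over 00, 01, 10, 11
probs = np.abs(amps) ** 2                  # measurement probabilities: 0.25 each

# "Reading" the register collapses it to a single classical outcome.
outcome = np.random.choice(["00", "01", "10", "11"], p=probs)
print(outcome)
```

Simulating n qubits this way takes 2^n amplitudes, which is exactly why a genuine quantum machine would outrun the classical model so dramatically.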

But there's a problem. When a quantum system interacts with the classical world, it decoheres: It loses its ability to exist simultaneously in multiple states, collapsing into a single state. This means that when you read a group of qubits, they become classical bits, capable of holding only a single value.

Over the past decade, researchers have proposed several ways of getting around this problem, and some have actually built quantum computers—on a very small scale. But we've yet to see one that does more than basic calculations. Bell Labs has long been a leader in quantum computing research, and today, along with Microsoft Research and a few other labs across the country, it's working on a brand-new method that could finally make the breakthrough. Bell calls it "topological quantum computing."

In the simplest of terms, Bell researchers are tying quantum systems into knots. "In very exotic circumstances, such as very low temperature and high magnetic field, we're essentially grabbing onto the particles and moving them around each other, forming knots in what we call the space-time path," Simon explains. "If you can form the right knot, you can do the right quantum computation."

In short, these knots are a great way of solving the decoherence problem. With a topological quantum computer, your information corresponds to the topology of the knot you make, and these topologies aren't as easily disturbed as ordinary quantum systems. Will this method work? That's yet to be seen. But it certainly has potential. "Even though this approach is well behind the others—no one has even built a single qubit—many believe that it will eventually come out in front. Unlike the others, it avoids decoherence."

Extreme Peer-to-Peer
In 1543, Nicolaus Copernicus forever changed the way we view the cosmos. He put the Sun at the center of things—not the Earth. Today, at the famed Palo Alto Research Center, Van Jacobson hopes to lead a similar revolution, one that forever changes the way we view PC networking. He aims to put the data at the center of things—not the server.

Jacobson likes to tell a story about a video that NBC posted to the Web during the 2006 Winter Olympics. The video showed U.S. skier Bode Miller as he was famously disqualified during the Alpine combined event, and within seconds of its posting, there was severe congestion on the router downstream from NBC's servers. At that moment, the router held 6,000 copies of the same video. Six thousand people had made 6,000 individual TCP/IP connections to the same server, and the network had no way of knowing that most of those connections were redundant. It couldn't understand that the 6,000 videos were identical. It couldn't do anything to relieve the congestion.

The classic point-to-point networking model is fundamentally flawed. Today, if you want a piece of data from the network, you almost always need a direct connection to the data's original source—the server. That's true even if the data has already been downloaded to a device that's much closer. So often, tapping into a distant server wastes time. And if the server is unavailable, you're out of luck entirely.

With a project called Content-Centric Networking, or CCN, Jacobson and his team of PARC networking gurus are turning this model on its head. They're building a networking system that revolves around the data itself, a system in which a router can actually identify that Bode Miller video and act accordingly. Under the CCN model, you don't tell the network that you're interested in connecting to a server. You tell it that you want a particular piece of data. You broadcast a request to all the machines on the network, and if one of them has what you're looking for, it responds. "You can authenticate and validate information using the information itself—independent of whom you got it from," says Jacobson. "So if you want The New York Times, you can pick it up from any machine that has a copy."
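PARC's wire protocol isn't spelled out in the article, but the quote above can be sketched abstractly: a request names a piece of content rather than a server, any node holding a signed copy may answer, and the signature is verified against the data itself. The class, methods, and verify() hook below are illustrative assumptions, not PARC's API:

```python
# Content-centric fetch sketch (illustrative only, not PARC's CCN).
class Node:
    def __init__(self, name):
        self.name = name
        self.store = {}                        # content name -> (data, signature)

    def publish(self, content_name, data, signature):
        self.store[content_name] = (data, signature)

    def handle_interest(self, content_name):
        return self.store.get(content_name)    # answer only if we hold a copy

def fetch(content_name, nodes, verify):
    """Broadcast an interest to every reachable node and accept the first copy
    whose signature checks out, regardless of which machine supplied it."""
    for node in nodes:
        hit = node.handle_interest(content_name)
        if hit is not None:
            data, signature = hit
            if verify(content_name, data, signature):
                return data
    return None
```

The same loop covers the calendar example below: each device is just another node that can publish, and answer interests for, the latest calendar update.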

It's a bit like BitTorrent, but on a grander scale. CCN can improve everything from the public Internet to your private LAN. It can get you that Bode Miller video even if NBC's servers go down. But it's also an efficient means of keeping your calendar synchronized. You needn't set up three separate links between your PC, laptop, and a handheld. Each device simply broadcasts a request for calendar updates to all the others—wirelessly, say. Initially, Jacobson plans to roll out CCN on top of today's networking infrastructure—in much the same way BitTorrent was deployed across the existing Internet. But eventually, he wants to push these new ideas down to the Net's grass roots, to change in a fundamental way how machines speak to one another at the packet level. Yes, he's facing a mammoth task. But so did Copernicus.

The Man-Made Brain
It could be the most ambitious computer science project of all time. At IBM's Almaden Research Center, just south of San Francisco, Dharmendra Modha and his team are chasing the holy grail of artificial intelligence. They aren't looking for ways of mimicking the human brain; they're looking to build one—neuron by neuron, synapse by synapse.

"We're trying to take the entire range of qualitative neuroscientific data and integrate it into a single unified computing platform," says Modha. "The idea is to re-create the ‘wetware' brain using hardware and software."

The project is particularly daunting when you consider that modern neurology has yet to explain how the brain actually works. Yes, we know the fundamentals. But we can't be sure of every biological transaction, all the way down to the cellular level. Three years into this Cognitive Computing project, Modha's team isn't just building a brain from an existing blueprint. They're helping to create the blueprint as they build. It's reverse engineering of the highest order.

Their first goal is to build a "massively parallel cortical simulator" that re-creates the brain of a mouse, an organ 3,500 times less complex than a human brain (if you count each individual neuron and synapse). But even this is an undertaking of epic proportions. A mouse brain houses over 16 million neurons, with more than 128 billion synapses running between them. Even a partial simulation stretches the boundaries of modern hardware. No, we don't mean desktop hardware. We're talkin' supercomputers.

So far, the team has been able to fashion a kind of digital mouse brain that needs about 6 seconds to simulate 1 second of real thinking time. That's still a long way from a true mouse-size simulation, and it runs on a Blue Gene/L supercomputer with 8,192 processors, four terabytes of memory, and 1 Gbps of bandwidth running to and from each chip. "Even a mouse-scale cortical simulation places an extremely heavy load on a supercomputer," Modha explains. "We're leveraging IBM's technological resources to the limit."
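IBM's simulator isn't public, but the shape of the computation is a time-stepped loop over neurons and synapses, which is why every simulated second costs several wall-clock seconds even on a supercomputer. The toy leaky integrate-and-fire sketch below uses arbitrary sizes and constants, nowhere near mouse scale:

```python
# Toy time-stepped spiking-network loop (illustrative only, not IBM's simulator).
import numpy as np

N, STEPS = 1000, 1000                        # neurons and timesteps (toy scale)
rng = np.random.default_rng(0)
weights = rng.normal(0, 0.02, (N, N))        # dense toy synaptic weight matrix
v = np.zeros(N)                              # membrane potentials
threshold, decay = 1.0, 0.95

for _ in range(STEPS):
    spikes = (v >= threshold).astype(float)  # which neurons fire this step
    v[spikes > 0] = 0.0                      # reset the neurons that fired
    # leak, recurrent input from spiking neighbors, plus random external drive
    v = decay * v + weights @ spikes + rng.normal(0, 0.05, N)
```

At mouse scale the synaptic weights alone would dwarf any desktop machine's memory, which is where the Blue Gene/L's 8,192 processors and four terabytes of RAM come in.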

Written with ordinary C code, this initial simulation is a remarkable proof of concept. As neuroscience and computing power continue to advance, Modha and his team are confident they can build cortical simulators of even greater complexity. And as they do, they hope to advance neuroscience even further, learning more and more about the inner workings of the brain and getting closer and closer to their ultimate goal.

Once they've simulated a mouse brain in real time, the team plans on tackling a rat cortex, which is about three and a half times larger. And then a cat brain, which is ten times larger than that. And so on, until they've built a cortical simulator on a human scale.

What's that good for? Anything and everything. "What we're seeking with cognitive computing is a universal cognitive mechanism, something that can give rise to the entire range of mental phenomena exhibited by humans," says Modha. "That is the ultimate goal."

Milestones of the Future
Want a list of all the groundbreaking technologies due over the next decade? Tough luck. We've got neither the time nor the space. But we can give you the milestones—the 13 technologies guaranteed to change the world between now and 2020.

Summer 2007
The Real Quad-Core
AMD releases the first single-chip quad-core CPU. Code-named Barcelona, it promises 20 to 50 percent better performance than the competing multichip design from you-know-who.

Late 2007
Hello, OLED
Sony introduces the first OLED (organic light-emitting diode) television. It's too small and too expensive for mass consumption, but early adopters love its 3mm profile and 1,000,000-to-1 contrast ratio.

2008
Like Wi-Fi—but Everywhere
Carriers launch the first WiMAX services in the U.S., giving major metro areas wireless access that rivals the speeds of Wi-Fi. The difference? No more hot spots. It's everywhere you go.

2008
Eight-Core and More
Intel unveils an eight-core processor and completely revamps its Core architecture, moving the memory controller and graphics circuitry from distinct chipsets onto the CPU itself.

2010
So Long, Laser Printer
The first Memjet ink-based printers hit the market, delivering 60 pages per minute at a reasonable cost per page. The trick: multiple print heads that span the entire width of the paper you're printing on.

2010
The High-Def DVR
Seagate releases a 3.5-inch hard drive that stores 3 terabytes of data. That's 3,000 gigabytes. We're talking about a digital video recorder that records nothing but high-def video.

2011
Can You Say 4G?
Fourth-generation cellular networks debut in the United States. The LTE (Long Term Evolution) standard doubles the throughput of 3G networks, offering 3 to 4 Mbps to real-world users.

2011
Chips Go Optical
IBM perfects a chip for mainframes and other high-end machines that uses optical connections instead of copper. Moving photons instead of electrons improves data transfer speeds eightfold.

2015
A Cure for Jersey Drivers
The first cars equipped with Motorola's MotoDrive technology roll off the assembly line. Able to calculate their speed and position relative to other vehicles, these cars can automatically avoid accidents.

2016
HDTV Is Obsolete
Ultra High Definition Television (UHDTV) debuts with a resolution of 7,680-by-4,320 and 22 speakers of surround sound, dwarfing today's HDTVs, which top out at 1,920-by-1,080.

2016
Power Off, Memory On
Manufacturers use carbon nanotubes to offer NRAM (nonvolatile random-access memory). Unlike today's SDRAM, it can hold information even when you lose power, and unlike flash memory, it promises to do so at much higher speeds.

2019
Wash 'N' Wear iPods
Flexible, washable OLED screens hit the market. That means laptops that roll up like place mats—not to mention smartphone and music-player displays built right into your clothing.

2020
Offices Everywhere
Wall-sized displays made of low-power polymers and improved video-conferencing technologies let groups of home-based workers interact as if they were sitting face to face.

Copyright (c) 2007 Ziff Davis Media Inc. All Rights Reserved.