Originally published March 6, 2012 at 8:06 PM | Page modified March 7, 2012 at 11:31 AM

Microsoft's TechFest trots out 'what is now possible' for computers

On Tuesday, Microsoft held its annual TechFest, where researchers from the company's labs demonstrate the projects they've been working on. This year's TechFest focused on merging the virtual and natural worlds, and on big data.

By Janet I. Tu, Seattle Times technology reporter


From technology that allows people to use the palms of their hands as telephone touch screens to avatars that can look and sound like you while talking in languages you never knew, TechFest 2012 showcased some of the leading advances in computer science that Microsoft researchers are working on.

Microsoft holds TechFest each year — a sort of high-level science fair in which researchers from the company's far-flung labs display some of their projects.

Microsoft spends about $9 billion a year on research and development and has 850 Ph.D. researchers in labs worldwide. That's only 1 percent of the company's employees, but it's the largest computer-science research organization in the world, according to Microsoft.

On Tuesday, during the public preview of TechFest, 20 of the 150 demos that will be available for viewing by Microsoft employees later this week were on display. Up to 7,000 employees are expected to attend.

The projects shown Tuesday were in various stages, from early development to beta. Some will find their way into commercial Microsoft products, said Kevin Schofield, general manager and chief operations officer of Microsoft Research.

"We think of TechFest as a conversation starter," Schofield said. "We don't think of it as a shopping mall. It's a set of ideas — a conversation about what is now possible."

The projects this year fell largely into two themes: merging the virtual and natural worlds, and big data — meaning systems that can collect huge amounts of data and make sense of it in useful ways.

Introducing the topic of merging the virtual and natural worlds, Rick Rashid, chief research officer at Microsoft, said, "We're giving computers the same senses that we have" — the ability to see, hear, understand, speak, touch, know where they are and sense emotions.

"As that happens, we're changing the applications computers can be used for and the way we interact with them," he said.

Some of the highlights of TechFest 2012 included:

• Wearable Multitouch Projector: This technology turns various surfaces — say, a hand, notebook or wall — into a multitouch screen. Users simply wear a depth-sensing and projection system on their shoulder. The system then projects onto the surface, allowing that surface to act as a touch screen or mouse. (A rough sketch of the underlying touch detection appears after this list.)

• Holoflector: This interactive, augmented-reality mirror allows users to see their reflections while graphics are superimposed onto those life-size reflections. Made possible by an LCD panel placed three feet from a translucent (one-way) mirror, plus a Kinect motion sensor, the Holoflector can superimpose an image of, say, a ball that interacts with your reflection, making it look like you're bouncing or throwing the ball.

• IllumiShare: It looks like a desk lamp, but it allows people in different places to collaborate on the same surface. Each person must have an IllumiShare "lamp" (which includes a camera and a projector) that lights up the surface at which it is pointed. Anything on that surface — whether it's drawn or written, or a physical object laid on the surface — can be seen by the other person. So people in different parts of the world can draw together, write on a whiteboard together or even play with real toys together.

• Turn a Monolingual TTS into Mixed Language: After recording 20 minutes of audio and video of a person speaking, this technology can create an avatar that speaks either the person's own language or a different one. The audio of a person speaking in, say, English can be broken down into fragments that are then used as the basis for creating a voice that will speak text — in the same voice — in, say, Mandarin Chinese. (A toy version of that reassembly step appears after this list.)

• Among the natural user interface (NUI) demonstrations was a project using the Kinect to allow surgeons in an operating room to use gestures — a hand waving or finger pointing — to manipulate 3-D images. Starting this spring, vascular surgeons at Guy's & St. Thomas' Hospital in London will be using this technology. (A simple sketch of such gesture detection follows this list.)

• An example of a ready-to-deploy big-data project is ChronoZoom, which brings together vast amounts of scientific and humanities data, ranging from 13.7 billion years ago to now. Users can zoom in on big historical or scientific events, finding everything from scientists' data to lectures to personal tours on the subjects. Researchers from several universities and other institutions are working on the content. The beta of ChronoZoom is scheduled to go live March 14 at www.chronozoomproject.org. (A sketch of the zoom math such a timeline requires also follows below.)
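To make the Wearable Multitouch idea concrete, here is a minimal sketch, not Microsoft's implementation, of how a shoulder-worn depth camera could register fingertip contact with a calibrated surface. The thresholds, frame format and function names are all invented for illustration.

    import numpy as np
    from scipy import ndimage

    TOUCH_MIN_MM, TOUCH_MAX_MM = 3, 12   # fingertip height band that counts as contact
    MIN_BLOB_PX = 30                     # ignore specks of sensor noise

    def calibrate_surface(empty_frames):
        """Average a few frames of the bare surface to get its depth per pixel."""
        return np.stack(empty_frames).mean(axis=0)

    def touch_points(depth_mm, surface_mm):
        """Return (row, col) centers of fingertip-sized blobs resting on the surface."""
        height = surface_mm - depth_mm                    # millimeters above the surface
        contact = (height > TOUCH_MIN_MM) & (height < TOUCH_MAX_MM)
        labels, n = ndimage.label(contact)                # group contact pixels into blobs
        points = []
        for i in range(1, n + 1):
            blob = labels == i
            if blob.sum() >= MIN_BLOB_PX:
                r, c = np.argwhere(blob).mean(axis=0)     # blob centroid
                points.append((int(r), int(c)))
        return points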
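The mixed-language speech demo rests on a concatenative idea: recorded speech is cut into labeled fragments, and a target phoneme sequence, possibly in another language, is reassembled from the closest fragments in the speaker's own voice. The toy version below assumes that approach; the inventory, phone labels and fallback table are invented, and a real system would add prosody modeling and join-cost scoring.

    from typing import Dict, List

    # Invented fallback table for target phonemes absent from the recordings.
    NEAREST = {"y": "i", "x": "h"}

    def synthesize(target_phones: List[str],
                   inventory: Dict[str, List[bytes]]) -> List[bytes]:
        """Greedily pick one recorded audio fragment per target phoneme."""
        output = []
        for phone in target_phones:
            candidates = inventory.get(phone)
            if not candidates:
                # Cross-lingual case: substitute the nearest recorded sound.
                candidates = inventory.get(NEAREST.get(phone, ""), [])
            if candidates:
                output.append(candidates[0])  # a real system would score join cost
        return output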
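The operating-room demo is, at bottom, gesture recognition over skeleton-tracking data. A minimal sketch of one such gesture, a lateral hand swipe that could advance a 3-D image, might look like the following; the travel and timing thresholds are assumptions, not the project's actual values.

    from collections import deque

    SWIPE_METERS = 0.35    # hand must travel at least this far...
    SWIPE_SECONDS = 0.5    # ...within this window to count as a swipe

    class SwipeDetector:
        def __init__(self):
            self.history = deque()             # recent (timestamp, hand_x) samples

        def update(self, timestamp, hand_x):
            """Feed one skeleton frame; return 'left' or 'right' when a swipe fires."""
            self.history.append((timestamp, hand_x))
            while self.history and timestamp - self.history[0][0] > SWIPE_SECONDS:
                self.history.popleft()         # drop samples older than the window
            dx = hand_x - self.history[0][1]
            if abs(dx) >= SWIPE_METERS:
                self.history.clear()           # debounce until the hand settles
                return "right" if dx > 0 else "left"
            return None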
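Finally, the hard part of a timeline spanning 13.7 billion years is the zoom math: on a linear axis, all of recorded human history would occupy less than a pixel. A logarithmic mapping, sketched below on the assumption that a ChronoZoom-style viewer does something broadly similar, keeps every era navigable; the constants are illustrative only.

    import math

    BIG_BANG_YEARS_AGO = 13.7e9

    def to_axis(years_ago, min_years=1.0):
        """Map 'years ago' onto [0, 1]: 0 is the Big Bang, 1 is roughly the present."""
        years_ago = max(years_ago, min_years)      # clamp to avoid log(0)
        span = math.log10(BIG_BANG_YEARS_AGO / min_years)
        return 1.0 - math.log10(years_ago / min_years) / span

    print(to_axis(13.7e9))   # 0.0   (Big Bang)
    print(to_axis(65e6))     # ~0.23 (end of the dinosaurs)
    print(to_axis(100))      # ~0.80 (a century ago)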

Janet I. Tu: 206-464-2272 or jtu@seattletimes.com. On Twitter @janettu.
