Do you ever wonder what the future of technology might look like? See for yourself!
Microsoft’s Envisioning team puts out a video every few years sharing its vision for the future of technology. These high-quality concept videos are not only exciting, but also give us a glimpse into what teams like Microsoft Research are conceptualizing for the future of computer interaction. Here’s their latest video, and it’s the most intriguing of all. Have a look, and read on for my in-depth look at the concepts it shows.
Nowadays, text, audio and pictures can be translated, but there’s no real-time translation. Data is captured at the source, sent to the cloud for recognition and processing, and the processed, translated data is sent back to the device. All of this takes at least a few seconds. The first scene shows a lady in a foreign country translating speech in real time, with no delay whatsoever. This could be made possible by major upcoming advancements in microphones, processors, servers, data speeds and more.
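To see why today’s round trip isn’t real time, here’s a minimal sketch of the latency budget described above. All the numbers are assumptions for illustration, not measurements of any real service:

```python
# Illustrative latency budget for today's cloud translation round trip.
# Every figure here is an assumption, purely for the sake of the sketch.
LATENCY_BUDGET = {
    "upload (device -> cloud)": 0.2,   # seconds
    "speech recognition": 0.5,
    "machine translation": 0.3,
    "download (cloud -> device)": 0.2,
}

def total_round_trip_seconds(budget: dict) -> float:
    """Sum each stage of the pipeline into one end-to-end delay."""
    return sum(budget.values())

print(f"estimated delay: {total_round_trip_seconds(LATENCY_BUDGET):.1f}s")
```

Even with generous assumptions, the stages add up to over a second per utterance, which is why truly conversational translation needs the kind of hardware and network leaps the video implies.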
Next up is the future of mobile. We see what appears to be a semi-transparent device with a fully touch-enabled surface. The screen is edge-to-edge with no bezel at all. In the scenario shown, the lady draws a heart in front of the display (not on it), and it is recognized and processed in real time. The device could have a Kinect-esque 3D camera sensor on the front to recognize gestures, just like Xbox and Kinect do on TVs today. The lady sends the heart and a small note straight to her daughter via the residence’s kitchen wall, which suggests every surface will become an interactive touch surface.
Also very prevalent is the insanely eye-pleasing evolution of Microsoft’s minimalistic Metro design language. Everything’s authentically digital, with no sign of skeuomorphism. There doesn’t appear to be a single element of chrome (decoration and ornamentation added through fake textures, depth, reflections and shadows). Metro is spreading widely across all Microsoft products, from Windows Phone to the Xbox dashboard to Windows 8 to their web services. It’s a different take on user interface design that sets Microsoft apart from other brands.
Next up is the car’s glass panel. On the move, a location appears which is, supposedly, the lady’s meeting location for the next day. The car recognizes it and displays a reminder on the digital window. The window also shows the destination the car is heading towards and the estimated arrival time. All of this would be enabled by advancements in GPS systems and more.
Next up is Travel Hub, which looks like a unification of all location-based services. What’s interesting is the digital room key, which could work via touch or NFC. More digital, less physical!
Microsoft has shown that it’s working on an interface that lets you interact with a touch display using gestures not just on the front of the display, but also on the back. This lets you control software elements without covering anything up with your finger. It’s an interesting concept.
The next thing to observe is a biggie. The lady reads an email from a co-worker asking her to respond to something project-related. She hits the reply button and dictates the message to the device: “Hi Qin, I’ll review it first thing tomorrow.” The software is smart enough to understand that “first thing tomorrow” implies a task for the next day, so it suggests adding “Review Proposal” to the calendar. The whole job of reading an email, replying to it and creating a calendar appointment is reduced to a few taps in a few seconds, completely seamlessly. This could, perhaps, be one of the most important productivity boosters for society. Right now, we have to manually mark emails as unread so that we don’t miss acting on them later, and manually create calendar appointments by switching between multiple apps and interfaces. This software implementation is just… perfect.
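The core trick in that scene is resolving a relative phrase like “first thing tomorrow” into a concrete date and a suggested calendar entry. Here’s a heavily simplified sketch of that idea; the phrase table and the `suggest_calendar_entry` helper are my own invention, not how Microsoft’s software actually works:

```python
from datetime import date, timedelta

# Hypothetical mapping of relative phrases to day offsets (illustrative only).
PHRASES = {
    "first thing tomorrow": 1,  # one day from today
    "next week": 7,
}

def suggest_calendar_entry(message: str, title: str, today: date):
    """Scan a dictated message for a known relative-date phrase and,
    if found, propose a calendar entry on the resolved date."""
    for phrase, offset in PHRASES.items():
        if phrase in message.lower():
            return {"title": title, "date": today + timedelta(days=offset)}
    return None  # nothing actionable found

entry = suggest_calendar_entry(
    "Hi Qin, I'll review it first thing tomorrow.",
    "Review Proposal",
    date(2012, 1, 1),
)
print(entry)  # {'title': 'Review Proposal', 'date': datetime.date(2012, 1, 2)}
```

A real assistant would use full natural-language understanding rather than a lookup table, but the pipeline is the same: detect the temporal phrase, resolve it against today’s date, then surface the suggestion for a one-tap confirmation.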
The next scene shows a similar mobile device, perhaps a future Windows Phone. The glanceability of data has been greatly increased. Travel recommendations, weather, news headlines, project pictures, health tracker notifications and more all give glanceable information without having to dig into any experience. Another thing to note is the “5 minute focus” area on the Start screen. The scenario suggests the person is waiting for a train that is going to arrive in 5 minutes. The device knows this and suggests tasks that could be completed within those 5 minutes; it suggests answering a one-line question instead of tackling a big review. The device is aware of both the time each task requires and the time available. It also displays a birthday reminder and suggests directions to a nearby café based on geo-location, which is a nice touch. The scene also shows the person donating money to charity straight from the device via sensors such as the camera and advancements in NFC technologies.
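The “5 minute focus” idea boils down to filtering your task list by the free time the device knows you have. A minimal sketch, with made-up task names and duration estimates:

```python
# Hypothetical task list with estimated durations in minutes
# (names and numbers are invented for illustration).
tasks = [
    ("Answer one-line question", 3),
    ("Review full project proposal", 45),
    ("Confirm dinner reservation", 4),
]

def focus_suggestions(tasks, free_minutes):
    """Surface only the tasks that fit within the available window."""
    return [name for name, minutes in tasks if minutes <= free_minutes]

# Waiting for a train arriving in 5 minutes:
print(focus_suggestions(tasks, 5))
# ['Answer one-line question', 'Confirm dinner reservation']
```

The hard part in practice isn’t the filter, it’s what feeds it: estimating how long each task takes and knowing how much free time you actually have, which is where the context awareness shown in the video comes in.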
The next scene shows the lady in her hotel room. Apparently, the TV has a proximity sensor (or, probably, Kinect built in) and starts playing a video of her as soon as she enters the room. A digital table (something like Microsoft PixelSense), a tablet and a phone are shown. Data across all these devices is synced and updated in real time. The tablet appears to be foldable like a book (Courier, anyone?).
Next up is the future of desktop computers. The display seems to be a glasses-free 3D one. A Kinect-like sensor appears to be built into this computer as well, allowing manipulation of data with hand gestures. Data is shared between multiple devices with flick gestures, akin to Xbox SmartGlass.
The last scene is a home-focused scenario. A little girl is learning on a touch-enabled tablet. The entire kitchen wall is digital, and the note the lady sent from the car is displayed on it. The refrigerator’s door is also a touch-enabled interactive surface. It displays information about the products inside, reminds you about product expirations and even shows ordered foodstuffs along with their estimated delivery dates. It integrates with the calendar as well and shows that a particular item is reserved for an upcoming dinner.
The level of interaction between multiple devices is unparalleled. Apparently, every device would have a Kinect-like sensor, natural gestures are recognized everywhere and everything’s delightfully seamless. The girl drags (with air gestures, not touch gestures) the recipe file her mom sent during the video chat and drops it onto the digital kitchen surface. The surface recognizes the file format and organizes itself to help with the recipe, showing the required ingredients as well as the step-by-step procedure.
So, overall, according to this video, the future of computing looks very intriguing: interactive touch displays would be ubiquitous, Kinect-like depth sensors would be built into everyday devices, data and tasks would be surfaced context-aware, and the whole experience would be more “glance and go”, requiring far less manual effort than ever before. Natural User Interface (NUI) seems to be a huge part of everyday computing. According to Microsoft, we’ll probably be experiencing all of this in 5 to 10 years. Exciting times!