Friday, January 30, 2015

The Technology that Unmasks Your Hidden Emotions


Paul Ekman, perhaps the world’s most famous face reader, fears he has created a monster.
The 80-year-old psychologist pioneered the study of facial expressions in the 1970s, creating a catalog of more than 5,000 muscle movements to show how the subtlest wrinkling of the nose or lift of an eyebrow reveals hidden emotions.
Now, a group of young companies with names like Emotient Inc., Affectiva Inc. and Eyeris are using Dr. Ekman’s research as the backbone of a technology that relies on algorithms to analyze people’s faces and potentially discover their deepest feelings. Collectively, they are amassing an enormous visual database of human emotions, seeking patterns that can predict emotional reactions and behavior on a massive scale.
Dr. Ekman, who agreed to become an adviser to Emotient, says he is torn between the potential power of all this data and the need to ensure it is used responsibly, without infringing on personal privacy.
So far, the technology has been used mostly for market research. Emotient, a San Diego startup whose software can recognize emotions from a database of microexpressions that happen in a fraction of a second, has worked with Honda Motor Co. and Procter & Gamble Co. to gauge people’s emotions as they try out products. Affectiva, an emotion-detection software maker based in Waltham, Mass., has used webcams to monitor consumers as they watch ads for companies like Coca-Cola Co. and Unilever PLC.
The evolving technology has the potential to help people or even save lives. Cameras that could sense when a trucker is exhausted might prevent him from falling asleep at the wheel. Putting cameras embedded with emotion-sensing software in the classroom could help teachers determine whether they were holding their students’ attention.
But other applications are likely to breed privacy concerns. One retailer, for instance, is starting to test software embedded in security cameras that can scan people’s faces and divine their emotions as they walk in and out of its stores. Eyeris, based in Mountain View, Calif., says it has sold its software to federal law-enforcement agencies for use in interrogations.
The danger, Dr. Ekman and privacy advocates say, is that the technology could reveal people’s emotions without their consent, and their feelings could be misinterpreted. People might try to use the software to determine whether their spouse was lying, police might read the emotions of crowds or employers might use it to secretly monitor workers or job applicants.
“I can’t control usage,” Dr. Ekman says of his catalog, called the Facial Action Coding System. “I can only be certain that what I’m providing is at least an accurate depiction of when someone is concealing emotion.”
In Dr. Ekman’s analysis, there is no such thing as a simple smile or a frown. Facial movements are broken down into more-nuanced expressions; there are seven ways a forehead can furrow.
Psychologist Paul Ekman’s research on emotions and their relation to facial expressions is the basis for the software being used by advertisers and retailers to study customers. (Photo: Ramin Rahimian for The Wall Street Journal)
Dr. Ekman’s atlas has been used extensively by psychologists and by law-enforcement and military personnel—including interrogators at the Abu Ghraib prison in Iraq—and was the inspiration for the TV drama “Lie to Me.”
To train its software’s algorithm, Emotient has recorded the facial reactions of an ethnically diverse group of hundreds of thousands of people participating in marketing research for its clients via video chat. The software extracts at least 90,000 data points from each frame, everything from abstract patterns of light to tiny muscular movements, which are sorted by emotional categories, such as anger, disgust, joy, surprise or boredom.
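The pipeline described here (pull a large set of measurements out of every frame, then map them to an emotion category) is, at its core, a classification problem. The toy sketch below illustrates only that general idea: the features and labels are randomly generated stand-ins, a couple hundred features stand in for the roughly 90,000 data points mentioned above, and nothing about it reflects Emotient's actual software.

```python
# Illustrative sketch only: a generic per-frame emotion classifier of the kind
# described above, not Emotient's pipeline. Real features would come from face
# detection and facial-action measurements; here they are random placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

EMOTIONS = ["anger", "disgust", "joy", "surprise", "boredom"]
N_FEATURES = 200  # stand-in for the ~90,000 data points per frame
rng = np.random.default_rng(0)

# Hypothetical training set: feature vectors from frames labeled by emotion.
X_train = rng.normal(size=(1000, N_FEATURES))
y_train = rng.integers(0, len(EMOTIONS), size=1000)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score a new frame and report a probability for each emotion category.
frame_features = rng.normal(size=(1, N_FEATURES))
probs = clf.predict_proba(frame_features)[0]
print({EMOTIONS[i]: round(float(p), 3) for i, p in enumerate(probs)})
```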
Rival Affectiva says it has measured seven billion emotional reactions from 2.4 million face videos in 80 countries. The company says the sheer scope of its data has allowed it to draw conclusions about people across cultures and in different settings. For instance, it says it has learned that women smile more than men, and that Indonesians and South Africans are the world’s least and most expressive people, respectively.
The startups share the goal of embedding their software in the tiniest of cameras. Affectiva is teaming up with OoVoo LLC, a video-chat service for smartphones that has 100 million users, to build an app that could reveal people’s emotions during mobile video chats.
Its peers, too, are expanding their reach. A pediatrics researcher at the University of San Diego is testing a version of Emotient software on children who have had appendix surgery, to see whether it can signal their level of pain. An unidentified retailer is using Emotient’s software in its security cameras to gauge whether shoppers are pleased when looking at products and leaving the store.
Eyeris says it envisions therapeutic apps that could detect when a person feels stress. The company said it has struck deals with federal law-enforcement authorities, but declined to identify them.
Emotient says it prefers not to have its software used for police work or federal security matters. Affectiva says it has turned down funding offers from federal intelligence agencies.
As with many other technologies, emotion-detection software raises all sorts of privacy questions. “I can see few things more invasive than trying to record someone’s emotions in a database,” said Ginger McCall, a privacy advocate.
In the mid-2000s, former detective Charles Lieberman trained detectives in the New York Police Department’s counterterrorism unit in Dr. Ekman’s facial-coding system. He said the technology could help interrogators if they could identify inconsistencies between a suspect’s story and emotions revealed on his or her face. But, he cautioned, it is important to “recognize its limitations—it can lead you in the right direction but is not definitive.”
Problems could also arise if the software isn't perfectly accurate. Emotions, such as sadness or frustration, could be wrongly interpreted. People could be wrongly pegged as liars. Dr. Ekman says Emotient's software is highly accurate, but the accuracy of the system hasn't been independently tested.
With no regulation, the companies are writing the privacy rules as they go.
Ken Denman, CEO of Emotient, says his company makes a point of discarding the images of individual faces within seconds after it has logged the sentiment they express. “There’s very little value in the facial expression of any individual,” he said.
Affectiva says it stores videos of faces only if the person involved consents. On mobile phones, the work of converting microexpressions to data points for later analysis takes place on the phone itself; no images are sent back to the company.
Both Affectiva and Emotient acknowledge they have no control over how third parties using their software might store or use images of people’s faces and emotions.
Dr. Ekman says he hopes the government will step in and write rules to protect people. He says that in public spaces, such as shopping malls, consumers should at least be informed if their emotions are captured.
Dr. Ekman says he believes that, on balance, his tools have done more good than harm. But the new technology, with its ability to instantaneously scan the emotions of crowds of people, would be much easier to abuse.
“People don’t even know that that’s possible,” he adds.

Friday, January 23, 2015

Project HoloLens: Our Exclusive Hands-On With Microsoft’s Holographic Goggles

It’s the end of October, when the days have already grown short in Redmond, Washington, and gray sheets of rain are just beginning to let up. In several months, Microsoft will unveil its most ambitious undertaking in years, a head-mounted holographic computer called Project HoloLens. But at this point, even most people at Microsoft have never heard of it. I walk through the large atrium of Microsoft’s Studio C to meet its chief inventor, Alex Kipman.
The headset is still a prototype being developed under the codename Project Baraboo, or sometimes just “B.” Kipman, with shoulder-length hair and severely cropped bangs, is a nervous inventor, shifting from one red Converse All-Star to the other. Nervous, because he’s been working on this pair of holographic goggles for five years. No, even longer. Seven years, if you go back to the idea he first pitched to Microsoft, which became Kinect. When the motion-sensing Xbox accessory was released, just in time for the 2010 holidays, it became the fastest-selling consumer gaming device of all time.
Right from the start, he makes it clear that Baraboo will make Kinect seem minor league.
Kipman leads me into a briefing room with a drop-down screen, plush couches, and a corner bar stocked with wine and soda (we abstain). He sits beside me, then stands, paces a bit, then sits down again. His wind-up is long. He gives me an abbreviated history of computing, speaking in complete paragraphs, with bushy, expressive eyebrows and saucer eyes that expand as he talks. The next era of computing, he explains, won’t be about that original digital universe. “It’s about the analog universe,” he says. “And the analog universe has a fundamentally different rule set.”
Translation: you used to compute on a screen, entering commands on a keyboard. Cyberspace was somewhere else. Computers responded to programs that detailed explicit commands. In the very near future, you’ll compute in the physical world, using voice and gesture to summon data and layer it atop physical objects. Computer programs will be able to digest so much data that they’ll be able to handle far more complex and nuanced situations. Cyberspace will be all around you.
What will this look like? Well, holograms.



First Impressions

That’s when I get my first look at Baraboo. Kipman cues a concept video in which a young woman wearing the slate gray headset moves through a series of scenarios, from collaborating with coworkers on a conference call to soaring, Oculus-style, over the Golden Gate Bridge. I watch the video, while Kipman watches me watch the video, while Microsoft’s public relations executives watch Kipman watch me watch the video. And the video is cool, but I’ve seen too much sci-fi for any of it to feel believable yet. I want to get my hands on the actual device. So Kipman pulls a box onto the couch. Gingerly, he lifts out a headset. “First toy of the day to show you,” he says, passing it to me to hold. “This is the actual industrial design.”
Oh Baraboo! It’s bigger and more substantial than Google Glass, but far less boxy than the Oculus Rift. If I were a betting woman, I’d say it probably looks something like the goggles made by Magic Leap, the mysterious Google-backed augmented reality startup that has $592 million in funding. But Magic Leap is not yet ready to unveil its device. Microsoft, on the other hand, plans to get Project HoloLens into the hands of developers by the spring. (For more about Microsoft and CEO Satya Nadella’s plans for Project HoloLens, read WIRED’s February cover story.)
Kipman’s prototype is amazing. It amplifies the special powers that Kinect introduced, using a small fraction of the energy. The depth camera has a field of vision that spans 120 by 120 degrees—far more than the original Kinect—so it can sense what your hands are doing even when they are nearly outstretched. Sensors flood the device with terabytes of data every second, all managed with an onboard CPU, GPU and first-of-its-kind HPU (holographic processing unit). Yet, Kipman points out, the computer doesn’t grow hot on your head, because the warm air is vented out through the sides. On the right side, buttons allow you to adjust the volume and to control the contrast of the hologram.

A Quick Trip to Mars

The first demo is deceptively simple. I enter a makeshift living room, where wires jut from a hole in the wall where there should be a light switch. Tools are strewn on the West Elm sideboard just below it. Kipman hands me a HoloLens prototype and tells me to install the switch. After I put on the headset, an electrician pops up on a screen that floats directly in front of me. With a quick hand gesture I'm able to anchor the screen just to the left of the wires. The electrician is able to see exactly what I'm seeing. He draws a holographic circle around the voltage tester on the sideboard and instructs me to use it to check whether the wires are live. Once we establish that they aren't, he walks me through the process of installing the switch, coaching me by sketching holographic arrows and diagrams on the wall in front of me. Five minutes later, I flip the switch, and the living room light turns on.
Another scenario lands me on a virtual Mars-scape. Kipman developed it in close collaboration with NASA rocket scientist Jeff Norris, who spent much of the first half of 2014 flying back and forth between Seattle and his Southern California home to help develop the scenario. With a quick upward gesture, I toggle from computer screens that monitor the Curiosity rover’s progress across the planet’s surface to the virtual experience of being on the planet. The ground is a parched, dusty sandstone, and so realistic that as I take a step, my legs begin to quiver. They don’t trust what my eyes are showing them. Behind me, the rover towers seven feet tall, its metal arm reaching out from its body like a tentacle. The sun shines brightly over the rover, creating short black shadows on the ground beneath its legs.
Norris joins me virtually, appearing as a three-dimensional human-shaped golden orb in the Mars-scape. (In reality, he’s in the room next door.) A dotted line extends from his eyes toward what he is looking at. “Check that out,” he says, and I squat down to see a rock shard up close. With an upward right-hand gesture, I bring up a series of controls. I choose the middle of three options, which drops a flag there, theoretically a signal to the rover to collect sediment.
After exploring Mars, I don’t want to remove the headset, which has provided a glimpse of a combination of computing tools that make the unimaginable feel real. NASA felt the same way. Norris will roll out Project HoloLens this summer so that agency scientists can use it to collaborate on a mission.

Friday, January 16, 2015

NASA and Nissan Chase Self-Driving Car Technology




Google’s self-driving cars won’t be the only robotic vehicles roaming NASA’s Ames Research Center at Moffett Field in California. The U.S. space agency has teamed up with automaker Nissan to test autonomous driving technologies that could find their way into future vehicles both on the road and in space exploration missions.
NASA hopes the five-year partnership can help improve the autonomous vehicle technologies available for its robotic rovers during Mars missions and other future space exploration. On Earth, Nissan has set a 2020 goal for the market debut of cars that can navigate without human intervention under most driving conditions. Researchers from both organizations aim to begin testing the first of a fleet of self-driving vehicles before the end of 2015.
“The work of NASA and Nissan—with one directed to space and the other directed to earth—is connected by similar challenges,” said Carlos Ghosn, president and CEO of Nissan Motor Co., in an 8 January press release. “The partnership will accelerate Nissan's development of safe, secure and reliable autonomous drive technology that we will progressively introduce to consumers beginning in 2016 up to 2020.”
The two organizations have cooperated on technological development in the past. For instance, Nissan used NASA’s research on neutral body posture in low-gravity conditions to develop more comfortable car seats. But hardware and software for self-driving cars could prove to be some of the most transformative technologies to reach mainstream acceptance in the coming years.
Ghosn has suggested that Nissan’s introduction of a commercially available self-driving car could even take place as soon as 2018. He mentioned legal considerations rather than technological roadblocks as the biggest potential stumbling block along any timeline. On the other hand, Nissan engineers have emphasized a less firm deadline in order to leave themselves more wiggle room.
Other observers say that, Ghosn’s reassurances notwithstanding, there remains a list of technical and regulatory hurdles that must be cleared before self-driving cars can be expected to make the world’s roads at least as safe as they are with humans in control. The toughest part of the challenge for robotic cars will be dealing with a mix of automated vehicles and ordinary vehicles driven by humans.
As I noted earlier, the “zero-emission,” self-driving vehicles to be tested by Nissan won’t have the run of the place. They’ll share the NASA testing grounds with potential competitors such as Google. Google has already been making use of the NASA Ames Research Center to test its own self-driving vehicle—a two-seat, all-electric prototype that dispenses with the traditional steering wheel and accelerator and brake pedals in favor of just a start and stop button. The Silicon Valley giant hopes to begin tests of its unoccupied self-driving cars on the NASA research campus sometime this year.
Other carmakers are also racing to develop self-driving vehicles. Mercedes-Benz has begun testing its own robocars at an abandoned naval base in Concord, Calif. Meanwhile, Elon Musk has promised that his Tesla electric cars will be able to operate without human assistance for 90 percent of miles driven starting this year.

Faster Airplane WiFi Is Coming Now That Gogo's Technology Was Approved By the FCC




You'll soon be able to surf the web faster on airplanes now that Gogo's next-generation in-flight WiFi technology was approved by the FCC on Thursday.
Itasca-based Gogo got a blanket approval from the FCC for its 2Ku antenna technology, which is expected to deliver 70 Mbps speeds to aircraft, outperforming other global connectivity solutions on the market, Gogo said. Gogo plans to install the 2Ku system on 1,000 aircraft.
"Clearing the necessary regulatory hurdles to provide this service to an aircraft flying anywhere around the globe is no small feat," Gogo's president and CEO Michael Small said in a news release. "Gogo has proven it is a leader at navigating these environments for all aircraft types no matter where they fly. We are happy that the launch of 2Ku is proceeding as planned and are continuing to work with the FAA on approval for installation."
2Ku can produce more bandwidth at less cost than competitive solutions, Gogo said. The antenna is only 4.5 inches tall, resulting in little incremental drag on the aircraft, and Gogo expects peak speeds for the service in excess of 100 Mbps once future satellite technologies become available.
Several airlines, including Chicago-based United, have agreed to use Gogo's 2Ku. Gogo expects the technology to be available in the second half of 2015.

Monday, January 12, 2015

Police body cameras: Five facts about the technology


Police body cameras used to be viewed as a novelty, an extra technology that police departments experimented with or used to provide another piece of evidence in court.
But interest has skyrocketed and the technology is being viewed as more essential since a police officer shot and killed Michael Brown, an unarmed black teenager, in August in Ferguson, Mo. The officer didn't have a body camera and his version of the events leading up to the shooting differed from those of some eyewitnesses.
Michael White, a professor at Arizona State University's School of Criminology and Criminal Justice, has researched body cameras and predicts they will one day become as commonly used by police as Tasers. More than 17,000 U.S. law-enforcement agencies use the electrical weapons, according to Taser International.
He said the two common questions he gets asked about the cameras are: "How much do they cost?" and "Do they record everything?"
Here are five facts about the technology:
1. Body cams likely will become the norm within a decade
White estimates 25 percent or more of the nation's police departments are either using body cameras or getting ready to start implementing the technology. He predicts the number will jump to one-third or more within the next year. In addition, the Border Patrol currently is testing the cameras for use by its agents and officers. In September, the Department of Justice issued guidelines for law-enforcement agencies on how to use the technology, including how and when to record and store the data.
Las Vegas police, in league with a university researcher, also are studying the use of cameras.
Last month, President Barack Obama said he wants to see more police wearing body cameras as a way to build trust between the public and police. The same month, Los Angeles Mayor Eric Garcetti announced plans to equip 7,000 police officers with body cameras by next summer.
Body cameras are useful because they create a real-time, permanent record of what happens during encounters between police and civilians, Garcetti said.
For many police chiefs, that by itself is justification to get the cameras because they saw what happened in Ferguson, White said.
2. The technology isn't cheap
Equipping a large police force with body cameras takes an enormous amount of resources, White said. Equipping even a small department with cameras can cost several thousand dollars. And there are additional costs in training, video storage and transfer, and so on.
A 2014 study by the Police Executive Research Forum, a research and policy organization, found agencies spent from $120 to nearly $2,000 for each camera.
The Mesa Police Department, for example, spent around $67,000 to make an initial purchase of 50 cameras, according to a 2013 study.
In Los Angeles, $1.5 million in privately raised funds will purchase more than 800 cameras for patrol officers, and the mayor plans to include additional funds in his fiscal 2015-16 city budget to equip all patrol officers.
The camera typically attaches to the chest or the officer's collar, hat, eyeglasses or helmet. Video from the cameras is downloaded after each officer's shift. The data is stored for a period of time.
The real cost comes on the back end. Video data captured by the cameras has to be stored somewhere secure. This can be done using cloud-based services where police departments pay a monthly fee. Other departments set up their own servers.
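To see why the back end dominates, a rough back-of-envelope estimate helps. Every figure in the sketch below (hours recorded per shift, bitrate, retention period, cloud price) is an assumption chosen for illustration, not a number from any vendor or department.

```python
# Hypothetical storage-cost estimate for a body-camera program.
# All inputs are assumptions for illustration only.
officers = 100
hours_recorded_per_shift = 4      # assumed recording time per officer per shift
shifts_per_month = 20             # assumed
gb_per_hour = 1.0                 # assumed video bitrate (~2.2 Mbps)
retention_months = 6              # assumed retention policy
cloud_price_per_gb_month = 0.03   # assumed cloud storage price, USD

gb_per_month = officers * hours_recorded_per_shift * shifts_per_month * gb_per_hour
stored_gb = gb_per_month * retention_months          # steady-state volume on hand
monthly_cost = stored_gb * cloud_price_per_gb_month

print(f"New video per month: {gb_per_month:,.0f} GB")
print(f"Steady-state stored: {stored_gb:,.0f} GB")
print(f"Estimated monthly storage bill: ${monthly_cost:,.2f}")
```

Under those assumptions, a 100-officer department generates roughly 8,000 GB of video a month and pays on the order of $1,400 a month for storage alone, before any staff time for review, redaction and disclosure.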
Considerable work also goes into laying the groundwork for the technology, White said. Departments have to select a vendor for the cameras, develop policies for when the cameras will be turned on and overcome any police union objections, White said.
3. The use of police body cameras hasn't been widely researched
White published a review of existing research on police body cameras for the U.S. Department of Justice in 2014.
He found only five empirical studies on the use of body cameras as of September 2013. Many of the studies had significant research limitations because they didn't include a comparison group or were carried out internally by the law-enforcement agency adopting the cameras, he said.
We don't know, for example, whether the use of body cameras is more likely to result in guilty pleas in criminal cases.
More independent studies are needed to provide a better understanding of the impact and consequences of wearing body cameras, White said.
4. Body cameras may cause better behavior
The limited research that exists indicates the presence of body cameras may cause better behavior among police officers and citizens.
That's because people behave better when they know they are being recorded, White said.
In Southern California, the Rialto Police Department saw a more than 50-percent reduction in police use-of-force incidents after officers began using body cameras. Citizen complaints against police also dropped.
Mesa Police Chief Frank Milstead said last year that the department's year-long experiment produced a 40-percent drop in complaints filed by the public about the behavior of officers using the camera and a 75-percent drop in use-of-force complaints.
A 2007 study in England and a 2011 study in Scotland indicated the presence of body cameras may reduce the likelihood that citizens will file frivolous or false complaints against police.
5. Police disagree over when cameras should be used
There isn't universal consensus over when cameras should be turned on. Some departments require officers to record video anytime they have contact with citizens.
Other departments use cameras less often, such as when an officer believes he or she will issue a citation or is likely to make an arrest.
White favors keeping cameras on whenever police interact with citizens. It's difficult to predict when an encounter with a citizen could turn on a dime, he said.
The American Civil Liberties Union has also advocated recording all encounters, maintaining this approach benefits citizens and also protects an officer from allegations of discretionary recording or tampering.
But the Police Executive Research Forum believes recording every encounter would sometimes undermine citizens' privacy rights. The organization favors policies that outline when cameras should be turned on but also gives officers some discretion.
The organization also maintains that using cameras at all times could damage police-citizen relationships. Residents could find it off-putting, for instance, if a police officer on foot or bike stops to chat with them and then turns on a video camera.

Translation Technology Starts to Prove Itself


The tech industry is doing its best to topple the Tower of Babel.
Last month, Skype, Microsoft's video calling service, initiated simultaneous translation between English and Spanish speakers. Not to be outdone, Google will soon announce updates to its translation app for phones.
Google Translate now offers written translation of 90 languages and the ability to hear spoken translations of a few popular languages. In the update, the app will automatically recognize if someone is speaking a popular language and automatically turn it into written text.
Certainly, the technology of translating one tongue into another can still be downright terrible - or "downright herbal," as I purportedly said on a test of Skype. The service also required a headset and worked best if a speaker paused to hear what the other person had said. The experience was a little like two telemarketers talking over walkie-talkies.
But those complaints are churlish compared with what also seemed like a fundamental miracle: Within minutes, I was used to the process and talking freely with a Colombian man about his wife, children and life in Medellín (or "Made A," as Skype first heard it, but it later got it right). The single biggest thing that separates us - our language - had started to disappear.
Those language mistakes are a critical part of how online products get better. The services improve with use, as machine learning by computers examines outcomes and adjusts performance. It is how the online spell check feature became dependable, and how search, map directions and many other online services progress.
"The program learns as you using the conversations," is how Sebastian Cuberos, my new friend from Colombia, put it during our Skype call. "At this time, is pretty good." The grammar isn't perfect, but you know what he means.
Just a few thousand people are using the service on Skype. As it learns from them, it will bring in more of the nearly 40,000 people waiting to try the Spanish-English service. Even in these early days, it raises the possibility of social studies classes with children in the United States and Mexico, or journalism where you can live chat with a family in Syria.
Google says its Translate app has been installed more than 100 million times on Android phones, most of which could receive the upgrade. "We have 500 million active users of Translate every month, across all our platforms," said Macduff Hughes, the engineering director of Google Translate. With 80 to 90 percent of the Web in just 10 languages, he added, translation becomes a critical part of learning for many people.
Automatic translation of Web pages into some major languages is a feature on Google's Chrome browser. People using the browser can render a page that is in English into, say, Korean. There are also 140 languages in which it is possible to change things like Gmail settings.
It is possible to set your email to languages like Klingon, Pirate and Elmer Fudd. Other options, like Cherokee, are more serious, and Google aspires to eventually have these as full translation languages. Google will also soon announce a service that enables you to hold your phone up to a foreign street sign and create an automatic translation on the screen.
Microsoft's Bing Translation engine is used on Twitter and Facebook. Facebook, which also features communication across the borders of language by operating the world's largest photo sharing service, has its own translation efforts. Microsoft has also signed up thousands of people to a waiting list for Skype to offer other simultaneously translated languages, like Chinese and Russian.
Feeding the "corpus," as linguistics engineers call their database of language, has become critical for some countries as well as for the sake of machine learning. Google, which uses human translation to start its service, recently added Kazakh after a government official went on television to ask people to help. "People can ask very, very strongly that we put their language on the service," Hughes said.
Still, some experts worry as machines look more deeply at individual expressions of meaning, through things like intonation and humor. What will it mean if, as with our search terms and our Facebook "likes," these become fodder for advertisers and law enforcement?
"The technology is potentially magical, but the threats are real too," said Kelly Fitzsimmons, co-founder of the Hypervoice Consortium, which researches the future of communication. "What would it mean to have a corpus of conversations after there is regime change, and a new government doesn't like what you said?"
Currently, Fitzsimmons said, just 1 percent of consumers consent to having their data recorded overtly. That is what people do when they help machine learning of translation, however, or when they use voice-based assistants like Siri. She thinks individuals will become better at managing their own privacy, and not outsourcing it to the providers of services. But for now, all kinds of information is surrendered for convenience.
Olivier Fontana, director of product marketing for the Skype project, says conversations are broken into separate files before people check a translation for quality. "There is no way to know who said what," he said. "The NSA couldn't make sense of this."
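A minimal sketch of the kind of de-identification Fontana describes: break a conversation into separate snippets, strip out speaker and session identifiers, and shuffle the snippets before anyone reviews translation quality. The data and steps below are invented for illustration and are not Microsoft's actual process.

```python
# Illustrative de-identification: reviewers see shuffled, unattributed snippets,
# so no one can tell who said what or reassemble the conversation.
import random
import uuid

conversation = [
    ("session-42", "Alice", "How is the weather in Medellín today?"),
    ("session-42", "Bob",   "Sunny, around 25 degrees."),
    ("session-42", "Alice", "Perfect for a walk."),
]

# Keep only the text; give each snippet an unrelated random ID.
snippets = [(uuid.uuid4().hex, text) for _, _, text in conversation]
random.shuffle(snippets)

for snippet_id, text in snippets:
    print(snippet_id[:8], text)
```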
Hughes said Google was also careful about what it did with voice, in part because of potential issues around biometric security in case voice recognition replaced passwords. Besides, he said, "there is something to be said for having your translator be different - if I speak Chinese, I'd have a woman's voice, so people know it's a translation."
© 2015 New York Times News Service

Tuesday, January 6, 2015

ThinkPad Yoga contorts into new screen sizes, adds new CPUs


While people certainly like Lenovo's Yoga line of hybrids, with its 360-degree hinge that folds back into a kiosk or tablet, there's always been one major complaint. In the tablet mode, you've still got the laptop's keyboard and touchpad exposed, usually right under your fingers while holding the tablet. The keyboard and touchpad are automatically deactivated, but it's still awkward to hold.
Lenovo partially solved this problem back in 2014 with the ThinkPad Yoga, a variant with a keyboard that vanished thanks to a clever bit of mechanical sleight of hand. The keyboard didn't actually retract, but the shell of the keyboard tray rose up and locked into place, making the keyboard reasonably flush with the rest of the interior surface.
We liked that workaround in the original ThinkPad Yoga, although we felt the laptop was missing some of the high-end features found in the standard Yoga models.
The ThinkPad Yoga line is now expanding, with new second-generation models. Besides a 12-inch model that matches the screen size of the original ThinkPad Yoga, the series now includes 14-inch and 15-inch versions, making the 15-inch the biggest Yoga display to date.
The big 15-inch ThinkPad Yoga. (Photo: Lenovo)
The overall look and feel is similar to the first-generation ThinkPad Yoga, but there are some notable internal upgrades, and each screen size has its own selling points. The 12-inch model has an especially bright screen, at 400 nits, and is a reasonably slim 19mm thick.
The 14-inch and 15-inch models will offer optional Nvidia graphics, and a new ActivePen stylus that Lenovo says is a step up from the standard pen/digitizer in the 12-inch model. All are moving to Intel's new fifth-generation Core i-series CPUs, also known by the codename "Broadwell."
Beyond that, the 15-inch ThinkPad Yoga also offers an option to upgrade to Intel's new RealSense camera, which uses a depth-sensing camera system to track objects in 3D space, allowing you to use hand gestures or even scan objects into 3D modeling programs just by holding them up to the webcam. It's been spotted in a handful of models, including Dell's Venue 8 7000, but we haven't extensively tested its claims.
The 12-inch ThinkPad Yoga starts at $999, while the 14- and 15-inch models start at $1,199. The 14-inch will only be available directly through Lenovo or at Best Buy, and all three should ship in February in the US, with international prices and dates still to come. The US prices convert to £650 or AU$1,235, and £780 or AU$1,480, respectively.