Friday, January 30, 2015

The Technology that Unmasks Your Hidden Emotions


Paul Ekman, perhaps the world’s most famous face reader, fears he has created a monster.
The 80-year-old psychologist pioneered the study of facial expressions in the 1970s, creating a catalog of more than 5,000 muscle movements to show how the subtlest wrinkling of the nose or lift of an eyebrow reveals hidden emotions.
Now, a group of young companies with names like Emotient Inc., Affectiva Inc. and Eyeris are using Dr. Ekman’s research as the backbone of a technology that relies on algorithms to analyze people’s faces and potentially discover their deepest feelings. Collectively, they are amassing an enormous visual database of human emotions, seeking patterns that can predict emotional reactions and behavior on a massive scale.
Dr. Ekman, who agreed to become an adviser to Emotient, says he is torn between the potential power of all this data and the need to ensure it is used responsibly, without infringing on personal privacy.
So far, the technology has been used mostly for market research. Emotient, a San Diego startup whose software can recognize emotions from a database of microexpressions that happen in a fraction of a second, has worked with Honda Motor Co. and Procter & Gamble Co. to gauge people’s emotions as they try out products. Affectiva, an emotion-detection software maker based in Waltham, Mass., has used webcams to monitor consumers as they watch ads for companies like Coca-Cola Co. and Unilever PLC.
The evolving technology has the potential to help people or even save lives. Cameras that could sense when a trucker is exhausted might prevent him from falling asleep at the wheel. Classroom cameras embedded with emotion-sensing software could help teachers determine whether they are holding their students’ attention.
But other applications are likely to breed privacy concerns. One retailer, for instance, is starting to test software embedded in security cameras that can scan people’s faces and divine their emotions as they walk in and out of its stores. Eyeris, based in Mountain View, Calif., says it has sold its software to federal law-enforcement agencies for use in interrogations.
The danger, Dr. Ekman and privacy advocates say, is that the technology could reveal people’s emotions without their consent, and their feelings could be misinterpreted. People might try to use the software to determine whether their spouse was lying, police might read the emotions of crowds or employers might use it to secretly monitor workers or job applicants.
“I can’t control usage,” Dr. Ekman says of his catalog, called the Facial Action Coding System. “I can only be certain that what I’m providing is at least an accurate depiction of when someone is concealing emotion.”
In Dr. Ekman’s analysis, there is no such thing as a simple smile or a frown. Facial movements are broken down into more-nuanced expressions; there are seven ways a forehead can furrow.
Psychologist Paul Ekman’s research on emotions and their relation to facial expressions is the basis for the software being used by advertisers and retailers to study customers. PHOTO: RAMIN RAHIMIAN FOR THE WALL STREET JOURNAL
Dr. Ekman’s atlas has been used extensively by psychologists and by law-enforcement and military personnel—including interrogators at the Abu Ghraib prison in Iraq—and was the inspiration for the TV drama “Lie to Me.”
To train its software’s algorithm, Emotient has recorded the facial reactions of an ethnically diverse group of hundreds of thousands of people participating in marketing research for its clients via video chat. The software extracts at least 90,000 data points from each frame, everything from abstract patterns of light to tiny muscular movements, which are sorted by emotional categories, such as anger, disgust, joy, surprise or boredom.
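The pipeline described here, per-frame measurements sorted into emotion categories, can be caricatured with a toy scorer. Everything below (the action-unit names, the weights, the categories) is invented for illustration; the real system extracts tens of thousands of features per frame and learns its weights from enormous training sets.

```python
# Toy illustration of sorting facial-feature measurements into emotion
# categories. Feature names and weights are invented for this sketch;
# a production system learns them from hundreds of thousands of faces.

# Each emotion is scored as a weighted sum of action-unit intensities.
EMOTION_WEIGHTS = {
    "joy":      {"lip_corner_pull": 1.0, "cheek_raise": 0.8},
    "surprise": {"brow_raise": 1.0, "jaw_drop": 0.9},
    "anger":    {"brow_lower": 1.0, "lid_tighten": 0.7},
}

def classify_frame(features):
    """Return the best-scoring emotion for one frame of measurements.

    `features` maps action-unit names to intensities in [0, 1].
    """
    scores = {
        emotion: sum(weights.get(au, 0.0) * value
                     for au, value in features.items())
        for emotion, weights in EMOTION_WEIGHTS.items()
    }
    return max(scores, key=scores.get)

frame = {"lip_corner_pull": 0.9, "cheek_raise": 0.6, "brow_raise": 0.1}
print(classify_frame(frame))  # joy
```

A real classifier would be trained rather than hand-weighted, and would run on every frame of video to catch microexpressions lasting a fraction of a second.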
Rival Affectiva says it has measured seven billion emotional reactions from 2.4 million face videos in 80 countries. The company says the sheer scope of its data has allowed it to draw conclusions about people across cultures and in different settings. For instance, it says it has learned that women smile more than men, and that Indonesians and South Africans are the world’s least and most expressive people, respectively.
The startups share the goal of embedding their software in the tiniest of cameras. Affectiva is teaming up with OoVoo LLC, a video-chat service for smartphones that has 100 million users, to build an app that could reveal people’s emotions during mobile video chats.
Its peers, too, are expanding their reach. A pediatrics researcher at the University of San Diego is testing a version of Emotient software on children who have had appendix surgery, to see whether it can signal their level of pain. An unidentified retailer is using Emotient’s software in its security cameras to gauge whether shoppers are pleased when looking at products and leaving the store.
Eyeris says it envisions therapeutic apps that could detect when a person feels stress. The company said it has struck deals with federal law-enforcement authorities, but declined to identify them.
Emotient says it prefers not to have its software used for police work or federal security matters. Affectiva says it has turned down funding offers from federal intelligence agencies.
As with many other technologies, emotion-detection software raises all sorts of privacy questions. “I can see few things more invasive than trying to record someone’s emotions in a database,” said Ginger McCall, a privacy advocate.
In the mid-2000s, former detective Charles Lieberman trained detectives in the New York Police Department’s counterterrorism unit in Dr. Ekman’s facial-coding system. He said the technology could help interrogators if they could identify inconsistencies between a suspect’s story and emotions revealed on his or her face. But, he cautioned, it is important to “recognize its limitations—it can lead you in the right direction but is not definitive.”
Problems could also arise if the software isn’t perfectly accurate. Emotions, such as sadness or frustration, could be wrongly interpreted. People could be wrongly pegged as liars. Dr. Ekman says Emotient’s software is highly accurate, but the accuracy of the system hasn’t been independently tested.
With no regulation, the companies are writing the privacy rules as they go.
Ken Denman, CEO of Emotient, says his company makes a point of discarding the images of individual faces within seconds after it has logged the sentiment they express. “There’s very little value in the facial expression of any individual,” he said.
Affectiva says it stores videos of faces only if the person involved consents. On mobile phones, the work of converting microexpressions to data points for later analysis takes place on the phone itself. No images are sent back to the company.
Both Affectiva and Emotient acknowledge they have no control over how third parties using their software might store or use images of people’s faces and emotions.
Dr. Ekman says he hopes the government will step in and write rules to protect people. He says that in public spaces, such as shopping malls, consumers should at least be informed if their emotions are captured.
Dr. Ekman says he believes that, on balance, his tools have done more good than harm. But the new technology’s ability to instantaneously scan the emotions of crowds of people would be much easier to abuse.
“People don’t even know that that’s possible,” he adds.

Friday, January 23, 2015

Project HoloLens: Our Exclusive Hands-On With Microsoft’s Holographic Goggles

It’s the end of October, when the days have already grown short in Redmond, Washington, and gray sheets of rain are just beginning to let up. In several months, Microsoft will unveil its most ambitious undertaking in years, a head-mounted holographic computer called Project HoloLens. But at this point, even most people at Microsoft have never heard of it. I walk through the large atrium of Microsoft’s Studio C to meet its chief inventor, Alex Kipman.
The headset is still a prototype being developed under the codename Project Baraboo, or sometimes just “B.” Kipman, with shoulder-length hair and severely cropped bangs, is a nervous inventor, shifting from one red Converse All-Star to the other. Nervous, because he’s been working on this pair of holographic goggles for five years. No, even longer. Seven years, if you go back to the idea he first pitched to Microsoft, which became Kinect. When the motion-sensing Xbox accessory was released, just in time for the 2010 holidays, it became the fastest-selling consumer gaming device of all time.
Right from the start, he makes it clear that Baraboo will make Kinect seem minor league.
Kipman leads me into a briefing room with a drop-down screen, plush couches, and a corner bar stocked with wine and soda (we abstain). He sits beside me, then stands, paces a bit, then sits down again. His wind-up is long. He gives me an abbreviated history of computing, speaking in complete paragraphs, with bushy, expressive eyebrows and saucer eyes that expand as he talks. The next era of computing, he explains, won’t be about that original digital universe. “It’s about the analog universe,” he says. “And the analog universe has a fundamentally different rule set.”
Translation: you used to compute on a screen, entering commands on a keyboard. Cyberspace was somewhere else. Computers responded to programs that detailed explicit commands. In the very near future, you’ll compute in the physical world, using voice and gesture to summon data and layer it atop physical objects. Computer programs will be able to digest so much data that they’ll be able to handle far more complex and nuanced situations. Cyberspace will be all around you.
What will this look like? Well, holograms.



First Impressions

That’s when I get my first look at Baraboo. Kipman cues a concept video in which a young woman wearing the slate gray headset moves through a series of scenarios, from collaborating with coworkers on a conference call to soaring, Oculus-style, over the Golden Gate Bridge. I watch the video, while Kipman watches me watch the video, while Microsoft’s public relations executives watch Kipman watch me watch the video. And the video is cool, but I’ve seen too much sci-fi for any of it to feel believable yet. I want to get my hands on the actual device. So Kipman pulls a box onto the couch. Gingerly, he lifts out a headset. “First toy of the day to show you,” he says, passing it to me to hold. “This is the actual industrial design.”
Oh Baraboo! It’s bigger and more substantial than Google Glass, but far less boxy than the Oculus Rift. If I were a betting woman, I’d say it probably looks something like the goggles made by Magic Leap, the mysterious Google-backed augmented reality startup that has $592 million in funding. But Magic Leap is not yet ready to unveil its device. Microsoft, on the other hand, plans to get Project HoloLens into the hands of developers by the spring. (For more about Microsoft and CEO Satya Nadella’s plans for Project HoloLens, read WIRED’s February cover story.)
Kipman’s prototype is amazing. It amplifies the special powers that Kinect introduced, using a small fraction of the energy. The depth camera has a field of vision that spans 120 by 120 degrees—far more than the original Kinect—so it can sense what your hands are doing even when they are nearly outstretched. Sensors flood the device with terabytes of data every second, all managed with an onboard CPU, GPU and first-of-its-kind HPU (holographic processing unit). Yet, Kipman points out, the computer doesn’t grow hot on your head, because the warm air is vented out through the sides. On the right side, buttons allow you to adjust the volume and to control the contrast of the hologram.

A Quick Trip to Mars

The first demo is deceptively simple. I enter a makeshift living room, where wires jut from a hole in the wall where there should be a light switch. Tools are strewn on the West Elm sideboard just below it. Kipman hands me a HoloLens prototype and tells me to install the switch. After I put on the headset, an electrician pops up on a screen that floats directly in front of me. With a quick hand gesture I’m able to anchor the screen just to the left of the wires. The electrician is able to see exactly what I’m seeing. He draws a holographic circle around the voltage tester on the sideboard and instructs me to use it to check whether the wires are live. Once we establish that they aren’t, he walks me through the process of installing the switch, coaching me by sketching holographic arrows and diagrams on the wall in front of me. Five minutes later, I flip a switch, and the living room light turns on.
Another scenario lands me on a virtual Mars-scape. Kipman developed it in close collaboration with NASA rocket scientist Jeff Norris, who spent much of the first half of 2014 flying back and forth between Seattle and his Southern California home to help develop the scenario. With a quick upward gesture, I toggle from computer screens that monitor the Curiosity rover’s progress across the planet’s surface to the virtual experience of being on the planet. The ground is a parched, dusty sandstone, and so realistic that as I take a step, my legs begin to quiver. They don’t trust what my eyes are showing them. Behind me, the rover towers seven feet tall, its metal arm reaching out from its body like a tentacle. The sun shines brightly over the rover, creating short black shadows on the ground beneath its legs.
Norris joins me virtually, appearing as a three-dimensional human-shaped golden orb in the Mars-scape. (In reality, he’s in the room next door.) A dotted line extends from his eyes toward what he is looking at. “Check that out,” he says, and I squat down to see a rock shard up close. With an upward right-hand gesture, I bring up a series of controls. I choose the middle of three options, which drops a flag there, theoretically a signal to the rover to collect sediment.
After exploring Mars, I don’t want to remove the headset, which has provided a glimpse of a combination of computing tools that make the unimaginable feel real. NASA felt the same way. Norris will roll out Project HoloLens this summer so that agency scientists can use it to collaborate on a mission.

Friday, January 16, 2015

NASA and Nissan Chase Self-Driving Car Technology




Google’s self-driving cars won’t be the only robotic vehicles roaming NASA’s Ames Research Center at Moffett Field in California. The U.S. space agency has teamed up with automaker Nissan to test autonomous driving technologies that could find their way into future vehicles both on the road and in space exploration missions.
NASA hopes the five-year partnership can help improve the autonomous vehicle technologies available for its robotic rovers during Mars missions and other future space exploration. On Earth, Nissan has set a 2020 goal for the market debut of cars that can navigate without human intervention under most driving conditions. Researchers from both organizations aim to begin testing the first of a fleet of self-driving vehicles before the end of 2015.
“The work of NASA and Nissan—with one directed to space and the other directed to Earth—is connected by similar challenges,” said Carlos Ghosn, president and CEO of Nissan Motor Co., in an 8 January press release. “The partnership will accelerate Nissan's development of safe, secure and reliable autonomous drive technology that we will progressively introduce to consumers beginning in 2016 up to 2020.”
The two organizations have cooperated on technological development in the past. For instance, Nissan used NASA’s research on neutral body posture in low-gravity conditions to develop more comfortable car seats. But hardware and software for self-driving cars could prove to be some of the most transformative technologies to reach mainstream acceptance in the coming years.
Ghosn has suggested that Nissan’s introduction of a commercially available self-driving car could even take place as soon as 2018. He mentioned legal considerations rather than technological roadblocks as the biggest potential stumbling block along any timeline. On the other hand, Nissan engineers have emphasized a less firm deadline in order to leave themselves more wiggle room.
Other observers say that, Ghosn’s reassurances notwithstanding, there remains a list of technical and regulatory hurdles that must be cleared before self-driving cars can be expected to make the world’s roads at least as safe as they are with humans in control. The toughest part of the challenge for robotic cars will be dealing with a mix of automated vehicles and ordinary vehicles driven by humans.
As I noted earlier, the “zero-emission,” self-driving vehicles to be tested by Nissan won’t have the run of the place. They’ll share the NASA testing grounds with potential competitors such as Google. Google has already been making use of the NASA Ames Research Center to test its own self-driving vehicle—a two-seat, all-electric prototype that dispenses with the traditional steering wheel and accelerator and brake pedals in favor of just a start and stop button. The Silicon Valley giant hopes to begin tests of its unoccupied self-driving cars on the NASA research campus sometime this year.
Other carmakers are also racing to develop self-driving vehicles. Mercedes-Benz has begun testing its own robocars at an abandoned naval base in Concord, Calif. Meanwhile, Elon Musk has promised that his Tesla electric cars will be able to operate without human assistance for 90 percent of miles driven starting this year.

Faster Airplane WiFi Is Coming Now That Gogo's Technology Was Approved By the FCC




You'll soon be able to surf the web faster on airplanes now that Gogo's next-generation in-flight WiFi technology was approved by the FCC on Thursday.
Itasca-based Gogo got a blanket approval from the FCC for its 2Ku antenna technology, which is expected to deliver 70 Mbps speeds to aircraft, outperforming other global connectivity solutions on the market, Gogo said. Gogo plans to install the 2Ku system on 1,000 aircraft.
"Clearing the necessary regulatory hurdles to provide this service to an aircraft flying anywhere around the globe is no small feat," Gogo's president and CEO Michael Small said in a news release. "Gogo has proven it is a leader at navigating these environments for all aircraft types no matter where they fly. We are happy that the launch of 2Ku is proceeding as planned and are continuing to work with the FAA on approval for installation."
2Ku can produce more bandwidth at less cost than competing solutions, Gogo said. The antenna is only 4.5 inches tall, resulting in little incremental drag on the aircraft, and Gogo expects peak speeds for the service in excess of 100 Mbps once future satellite technologies become available.
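To put the 70 Mbps figure in perspective, here is a quick back-of-envelope split across a single cabin. The passenger count and the share of passengers online at once are illustrative assumptions, not Gogo's numbers:

```python
# Back-of-envelope: how far does 70 Mbps stretch across one aircraft?
# Passenger count and take-rate below are illustrative assumptions.
link_mbps = 70          # Gogo's expected 2Ku throughput per aircraft
passengers = 150        # typical narrow-body cabin, assumed
take_rate = 0.30        # share of passengers online at once, assumed

active_users = passengers * take_rate               # 45 users
per_user_mbps = link_mbps / active_users
print(f"{per_user_mbps:.2f} Mbps per active user")  # ~1.56 Mbps
```

Even under these rough assumptions, each active user gets enough for ordinary browsing, which is a substantial step up from earlier air-to-ground systems that shared far less total bandwidth.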
Several airlines, including Chicago-based United, have agreed to use Gogo's 2Ku. Gogo expects the technology to be available in the second half of 2015.

Monday, January 12, 2015

Police body cameras: Five facts about the technology


Police body cameras used to be viewed as a novelty, an extra technology that police departments experimented with or used to provide another piece of evidence in court.
But interest has skyrocketed and the technology is being viewed as more essential since a police officer shot and killed Michael Brown, an unarmed black teenager, in August in Ferguson, Mo. The officer didn't have a body camera and his version of the events leading up to the shooting differed from those of some eyewitnesses.
Michael White, a professor at Arizona State University's School of Criminology and Criminal Justice, has researched body cameras and predicts they will one day become as commonly used by police as Tasers. More than 17,000 U.S. law-enforcement agencies use the electrical weapons, according to Taser International.
He said the two common questions he gets asked about the cameras are: "How much do they cost?" and "Do they record everything?"
Here are five facts about the technology:
1. Body cams likely will become the norm within a decade
White estimates 25 percent or more of the nation's police departments are either using body cameras or getting ready to start implementing the technology. He predicts the number will jump to one-third or more within the next year. In addition, the Border Patrol currently is testing the cameras for use by its agents and officers. In September, the Department of Justice issued guidelines for law-enforcement agencies on how to use the technology, including how and when to record and store the data.
Las Vegas police, in league with a university researcher, also are studying the use of cameras.
Last month, President Barack Obama said he wants to see more police wearing body cameras as a way to build trust between the public and police. The same month, Los Angeles Mayor Eric Garcetti announced plans to equip 7,000 police officers with body cameras by next summer.
Body cameras are useful because they create a real-time, permanent record of what happens during encounters between police and civilians, Garcetti said.
For many police chiefs, that by itself is justification to get the cameras because they saw what happened in Ferguson, White said.
2. The technology isn't cheap
Equipping a large police force with body cameras takes an enormous amount of resources, White said. Equipping even a small department with cameras can cost several thousand dollars. And there are additional costs in training, video storage and transfer, and so on.
A 2014 study by the Police Executive Research Forum, a research and policy organization, found agencies spent from $120 to nearly $2,000 for each camera.
The Mesa Police Department, for example, spent around $67,000 to make an initial purchase of 50 cameras, according to a 2013 study.
In Los Angeles, $1.5 million in privately raised funds will purchase more than 800 cameras for patrol officers; the mayor plans to include additional funds in his fiscal 2015-16 city budget to equip all patrol officers.
The camera typically attaches to the chest or the officer's collar, hat, eyeglasses or helmet. Video from the cameras is downloaded after each officer's shift. The data is stored for a period of time.
The real cost comes on the back end. Video data captured by the cameras has to be stored somewhere secure. This can be done using cloud-based services where police departments pay a monthly fee. Other departments set up their own servers.
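A rough estimate shows why the back-end storage bill dominates. Every figure below (recorded hours per shift, bitrate, cloud price) is an illustrative assumption, not a quote from any department or vendor:

```python
# Rough body-camera storage cost estimate for one department.
# Every figure below is an illustrative assumption.
officers = 100
hours_per_shift = 2          # hours actually recorded per shift, assumed
shifts_per_month = 20
bitrate_mbps = 4             # roughly 720p video, assumed
cloud_price_per_gb = 0.03    # $/GB-month of cloud storage, assumed

gb_per_hour = bitrate_mbps * 3600 / 8 / 1000        # Mb/s -> GB per hour
monthly_gb = officers * hours_per_shift * shifts_per_month * gb_per_hour
monthly_cost = monthly_gb * cloud_price_per_gb
print(f"{monthly_gb:.0f} GB/month, about ${monthly_cost:.0f}/month")
```

Note that this prices only one month of new footage; because departments retain video for months or years, the stored volume, and the bill, accumulates month over month.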
Considerable work also goes into laying the groundwork for the technology, White said. Departments have to select a vendor for the cameras, develop policies for when the cameras will be turned on and overcome any police union objections, White said.
3. The use of police body cameras hasn't been widely researched
White published a review of existing research on police body cameras for the U.S. Department of Justice in 2014.
He found only five empirical studies on the use of body cameras as of September 2013. Many of the studies had significant research limitations because they didn't include a comparison group or were carried out internally by the law-enforcement agency adopting the cameras, he said.
We don't know, for example, whether the use of body cameras is more likely to result in guilty pleas in criminal cases.
More independent studies are needed to provide a better understanding of the impact and consequences of wearing body cameras, White said.
4. Body cameras may cause better behavior
The limited research that exists indicates the presence of body cameras may cause better behavior among police officers and citizens.
That's because people behave better when they know they are being recorded, White said.
In southern California, the Rialto Police Department saw a more than 50-percent reduction in police use-of-force incidents after officers began using body cameras. Citizen complaints against police also dropped.
Mesa Police Chief Frank Milstead said last year that the department's year-long experiment produced a 40-percent drop in complaints filed by the public about the behavior of officers using the camera and a 75-percent drop in use-of-force complaints.
A 2007 study in England and a 2011 study in Scotland indicated the presence of body cameras may reduce the likelihood that citizens will file frivolous or false complaints against police.
5. Police disagree over when cameras should be used
There isn't universal consensus over when cameras should be turned on. Some departments require officers to record video anytime they have contact with citizens.
Other departments use cameras less often, such as when an officer believes he or she will issue a citation or is likely to make an arrest.
White favors keeping cameras on whenever police interact with citizens. It's difficult to predict when an encounter with a citizen could turn on a dime, he said.
The American Civil Liberties Union has also advocated recording all encounters, maintaining this approach benefits citizens and also protects an officer from allegations of discretionary recording or tampering.
But the Police Executive Research Forum believes recording every encounter would sometimes undermine citizens' privacy rights. The organization favors policies that outline when cameras should be turned on but also gives officers some discretion.
The organization also maintains that using cameras at all times could damage police-citizen relationships. Residents could find it off-putting, for instance, if a police officer on foot or bike stops to chat with them and then turns on a video camera.

Translation Technology Starts to Prove Itself


The tech industry is doing its best to topple the Tower of Babel.
Last month, Skype, Microsoft's video calling service, initiated simultaneous translation between English and Spanish speakers. Not to be outdone, Google will soon announce updates to its translation app for phones.
Google Translate now offers written translation of 90 languages and the ability to hear spoken translations of a few popular languages. In the update, the app will automatically recognize if someone is speaking a popular language and automatically turn it into written text.
Certainly, the technology of translating one tongue into another can still be downright terrible - or "downright herbal," as I purportedly said on a test of Skype. The service also required a headset and worked best if a speaker paused to hear what the other person had said. The experience was a little as if two telemarketers were using walkie-talkies.
But those complaints are churlish compared with what also seemed like a fundamental miracle: Within minutes, I was used to the process and talking freely with a Colombian man about his wife, children and life in Medellin (or "Made A," as Skype first heard it, though it later got it right). The single biggest thing that separates us - our language - had started to disappear.
Those language mistakes are a critical part of how online products get better. The services improve with use, as machine learning by computers examines outcomes and adjusts performance. It is how the online spell check feature became dependable, and how search, map directions and many other online services progress.
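The feedback loop described above, where mistakes from real use feed back into the system, can be caricatured with a tiny correction memory. Real services retrain statistical models on millions of outcomes; this sketch only shows the shape of the loop, and every phrase in it is invented:

```python
# Toy feedback loop: a translator that remembers user corrections.
# Real systems retrain statistical models; this only shows the shape.

class LearningTranslator:
    def __init__(self, base_table):
        self.base = base_table       # initial phrase table
        self.corrections = {}        # overrides learned from feedback

    def translate(self, phrase):
        # Corrections learned from use take priority over the base table;
        # unknown phrases pass through unchanged.
        return self.corrections.get(phrase, self.base.get(phrase, phrase))

    def correct(self, phrase, better):
        self.corrections[phrase] = better

t = LearningTranslator({"me llamo": "I call me"})   # invented bad entry
print(t.translate("me llamo"))   # I call me  (the initial mistake)
t.correct("me llamo", "my name is")
print(t.translate("me llamo"))   # my name is (improved after feedback)
```

The production version of this idea is statistical rather than a lookup table: outcomes from millions of conversations shift the probabilities the model assigns, which is why the services improve with use.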
"The program learns as you using the conversations," is how Sebastian Cuberos, my new friend from Colombia, put it during our Skype call. "At this time, is pretty good." The grammar isn't perfect, but you know what he means.
Just a few thousand people are using the service on Skype. As it learns from them, it will bring in more of the nearly 40,000 people waiting to try the Spanish-English service. Even in these early days, it raises the possibility of social studies classes with children in the United States and Mexico, or journalism where you can live chat with a family in Syria.
Google says its Translate app has been installed more than 100 million times on Android phones, most of which could receive the upgrade. "We have 500 million active users of Translate every month, across all our platforms," said Macduff Hughes, the engineering director of Google Translate. With 80 to 90 percent of the Web in just 10 languages, he added, translation becomes a critical part of learning for many people.
Automatic translation of Web pages into some major languages is a feature on Google's Chrome browser. People using the browser can render a page that is in English into, say, Korean. There are also 140 languages in which it is possible to change things like Gmail settings.
It is possible to set your email to languages like Klingon, Pirate and Elmer Fudd. Other options, like Cherokee, are more serious, and Google aspires to eventually have these as full translation languages. Google will also soon announce a service that enables you to hold your phone up to a foreign street sign and create an automatic translation on the screen.
Microsoft's Bing Translation engine is used on Twitter and Facebook. Facebook, which also features communication across the borders of language by operating the world's largest photo sharing service, has its own translation efforts. Microsoft has also signed up thousands of people to a waiting list for Skype to offer other simultaneously translated languages, like Chinese and Russian.
Feeding the "corpus," as linguistics engineers call their database of language, has become critical for some countries as well as for the sake of machine learning. Google, which uses human translation to start its service, recently added Kazakh after a government official went on television to ask people to help. "People can ask very, very strongly that we put their language on the service," Hughes said.
Still, some experts worry as machines look more deeply at individual uses of meaning through things like intonation and humor. What will it mean if, as with our search terms and our Facebook "likes," these become fodder for advertisers and law enforcement?
"The technology is potentially magical, but the threats are real too," said Kelly Fitzsimmons, co-founder of the Hypervoice Consortium, which researches the future of communication. "What would it mean to have a corpus of conversations after there is regime change, and a new government doesn't like what you said?"
Currently, Fitzsimmons said, just 1 percent of consumers consent to having their data recorded overtly. That is what people do when they help machine learning of translation, however, or when they use voice-based assistants like Siri. She thinks individuals will become better at managing their own privacy, and not outsourcing it to the providers of services. But for now, all kinds of information is surrendered for convenience.
Olivier Fontana, director of product marketing for the Skype project, says conversations are broken into separate files before people check a translation for quality. "There is no way to know who said what," he said. "The NSA couldn't make sense of this."
Hughes said Google was also careful about what it did with voice, in part because of potential issues around biometric security in case voice recognition replaced passwords. Besides, he said, "there is something to be said for having your translator be different - if I speak Chinese, I'd have a woman's voice, so people know it's a translation."
© 2015 New York Times News Service

Tuesday, January 6, 2015

ThinkPad Yoga contorts into new screen sizes, adds new CPUs


While people certainly like Lenovo's Yoga line of hybrids, with its 360-degree hinge that folds back into a kiosk or tablet, there's always been one major complaint. In the tablet mode, you've still got the laptop's keyboard and touchpad exposed, usually right under your fingers while holding the tablet. The keyboard and touchpad are automatically deactivated, but it's still awkward to hold.
Lenovo partially solved this problem back in 2014 with the ThinkPad Yoga, a variant with a keyboard that vanished thanks to a clever bit of mechanical sleight of hand. The keyboard didn't actually retract, but the shell of the keyboard tray rose up and locked into place, making the keyboard reasonably flush with the rest of the interior surface.
We liked that workaround in the original ThinkPad Yoga, although we felt the laptop was missing some of the high-end features found in the standard Yoga models.
The ThinkPad Yoga line is now expanding with new second-generation models. Besides a 12-inch model that matches the screen size of the original ThinkPad Yoga, the series now includes 14-inch and 15-inch versions, the latter sporting the biggest Yoga display to date.
The big 15-inch ThinkPad Yoga. (Photo: Lenovo)
The overall look and feel is similar to the first-generation ThinkPad Yoga, but there are some notable internal upgrades, and each screen size has its own selling points. The 12-inch model has an especially bright screen, at 400 nits, and is a reasonably slim 19mm thick.
The 14-inch and 15-inch models will offer optional Nvidia graphics, and a new ActivePen stylus that Lenovo says is a step up from the standard pen/digitizer in the 12-inch model. All are moving to Intel's new fifth-generation Core i-series CPUs, also known by the codename "Broadwell."
Beyond that, the 15-inch ThinkPad Yoga also offers an option to upgrade to Intel's new RealSense camera, which uses a depth-sensing camera system to track objects in 3D space, allowing you to use hand gestures or even scan objects into 3D modeling programs just by holding them up to the webcam. It's been spotted in a handful of models, including Dell's Venue 8 7000, but we haven't extensively tested its claims.
The 12-inch ThinkPad Yoga starts at $999, while the 14- and 15-inch models start at $1,199. The 14-inch will only be available directly through Lenovo or at Best Buy, and all three should ship in February in the US, with international prices and dates still to come. The US prices convert to roughly £650 or AU$1,235 and £780 or AU$1,480, respectively.

Razer Nabu X is a $50 fitness-with-notifications band (hands-on)

LAS VEGAS -- A buzz and a glow: Razer's latest crack at a smart band goes for low-key features and a low price. Maybe, in the shadow of upcoming mega-wearables later this year, the cheaper path is the smarter one.
The Razer Nabu X, announced at CES 2015, is a $50 fitness band with an ability to buzz and light up when notifications come in. Last year's Nabu was Razer's first attempt at a smart band, but it never quite lived up to expectations: it quietly launched at the end of last year, and barely registered a blip on the wearable landscape.
The Nabu X, in contrast, is an even more entry-level band aiming for affordability. Ditching the readout display of the original Nabu, the Nabu X uses three-color LED lights that can pulse red, green or blue, and a built-in vibrational buzz, to indicate phone notifications. A connected app that works on iOS or Android can customize the lights to mean different things: incoming phone call, incoming tweet, and so on.
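As a rough sketch of the kind of notification-to-LED mapping the companion app describes, here's a minimal example. The dictionary keys, function name, and fallback behavior are all hypothetical illustrations, not Razer's actual SDK or API.

```python
# Hypothetical notification-to-LED mapping, modeled on the behavior the
# Nabu X companion app is described as offering. Not Razer's real API.

NOTIFICATION_COLORS = {
    "phone_call": "red",
    "tweet": "blue",
    "text_message": "green",
}

def led_signal(notification_type):
    """Return the LED color for a notification, defaulting to blue."""
    return NOTIFICATION_COLORS.get(notification_type, "blue")

print(led_signal("phone_call"))  # red
print(led_signal("email"))       # blue: unmapped types fall back
```

The point of the indirection is that the user, not the device, decides what each color means; remapping a notification type is just an edit to the table.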
The Nabu X can track basic fitness via a built-in accelerometer, just like the original Nabu: steps, estimated calories, and automatic sleep tracking. Will it be an accurate and useful tracker? There are plenty of other bands with better-proven tracking. Extra features include proximity-based social sharing of Twitter and Facebook contacts with another Nabu X owner, should you happen to meet one -- you could even peek at a fellow Nabu X friend's most recently played Steam game.

The Nabu X comes in three colors, and the snap-on rubberized band houses the actual removable plastic core of the device, which can be popped out and into other colored bands. It has a seven-day battery life, and is water resistant. The Nabu X launches next month for $50, but subscribers to the community-driven Razer Insider can buy it for $20. At that price, the Nabu X sounds like a pretty good deal, but do you need it, and does it work as well as advertised? I have no idea. It felt okay on my wrist, but not wonderful. After all the promise of last year's Nabu band, the Nabu X can't help but feel underwhelming.

What does a $6,000 phone look like? The Lamborghini 88 Tauri, of course (hands-on)

LAS VEGAS -- The lap of luxury evokes actual laps in the Torino Lamborghini 88 Tauri, an ultrapremium Android smartphone from the storied car-maker.
Lambo isn't new to the smartphone game, and this latest effort outpaces previous models in both size and specs. What the 88 Tauri's $6,000 or £4,000 price tag (that's AU$11,255) gets you is mostly a name and designer decal (plus a really cool, ostentatious case that opens like a Lamborghini and a set of expensive headphones).
However, it is also outfitted with its share of premium materials, like nine different colors of calfskin leather over three treatments of stainless steel: black, silver, and genuine gold-plate.


The glass covering the phone's 5-inch 1080p display is also special. Lamborghini says that the company customized it to be shatterproof and scratch-proof; in fact, it's the same glass used in Lamborghini's own cars.

Hardware specs

For a phone so pricey, the specs are good, even quite good, but not cutting edge. That's just not what you pay for in a stupid-expensive smartphone.
Cameras on the Android 4.4.4 KitKat handset shoot 20-megapixel photos from the rear and 8-megapixel snaps from the front. The 2.3GHz quad-core Qualcomm Snapdragon 801 processor has proven very fast (though we're now on the Snapdragon 810), and 3GB of RAM is a healthy dose.
You'll also get a 3,400mAh battery and 64GB of expandable storage on this dual-SIM device. We're told that the phone, which just went on sale today, will be sold in extremely limited quantity: 1,947 units to commemorate the 1947 founding of Lamborghini.
The phone is larger than I thought it would be for a 5-incher, mostly because it's layered in all those fancy fabrics and such. It doesn't feel liquidy sleek or satiny smooth. Rather, it's a little rough and angular, very square and boxy, and fairly hefty to hold. It's a phone to be noticed, especially with all its stitching along the back and glinting decals.
Die-hards can buy the collectible from select high-end stores (like Harrods in London), or direct from the company's site.

Monday, January 5, 2015

Technology fever grips Las Vegas: the Consumer Electronics Show

Las Vegas is going more high-tech than ever with the annual Consumer Electronics Show!
More than 160,000 technology innovators and gadget geeks are flocking to the desert city for the four-day CES show, which opens officially this Tuesday.
This is the largest trade show in the Americas, with more than 34 football fields' worth of exhibition space.
Consumer Electronics Association President Gary Shapiro said: “This year is a record-breaker. It’s not only larger and bigger with more exhibitors, but it’s more exciting. There’s more products. There’s more new innovation here. We are ‘crescendo-ing’ with the innovation because of so many different factors and so many different categories.”
Wearable technology isn’t just for your wrist anymore. Smart clothing that can take your temperature or measure your heart rate are also debuting at this year’s CES.
Robin Roskin, founder of Living in Digital Times, said: “Well now you’ve got clothing with sensors built in. You’ve got clothing that can get cooler as your body gets warm. You’ve got clothing that can monitor your heart if you’re a cardiac arrest risk — or a diabetic, to tell you the level of your blood sugar: constant real-time monitoring.”
For the first time, CES this year has an area just for drones. This technology has come a long way, taking various forms, from toys to photographic and filming platforms.
TechCrunch reporter Colleen Taylor said: “Drone technology is huge. Personal drones are very big. They just keep getting cheaper and they just keep getting easier to fly. So, definitely at CES, look up: not all of this stuff is just going to be on the floor.”
A variety of self-driving cars and smart dashboards demonstrate how seriously the industry is putting tech into vehicles. A record 10 automotive manufacturers are set to exhibit in Las Vegas, including Toyota, Ford, BMW, Audi and Mercedes. Automotive exhibits will cover 17 percent more event space than last year.

Sunday, January 4, 2015

The Coolest Piece of Technology You Can Buy Isn't a Gadget, It’s a Car

Imagine driving up to a gas station, stopping in front of a pump, pulling the hose close to the fuel tank and having the hatch pop open automatically—magic! That’s exactly what happens with the Tesla Model S. Except it’s not a gas station, and it’s not a fuel tank that’s popping open. The fully electric car’s charging outlet has an RFID tag embedded so it can recognize when a charging plug is pulled up to the car.
Tesla may be changing the car industry, flipping gas-guzzling vehicles on their heads, but the new car company is also deep into implementing some of the newest technology in its most popular vehicle. From a 17-inch touchscreen display which handles all adjustments, to cellular connectivity which keeps the car’s software up to date, there’s a lot of technology to appreciate.
Those turned on by raw power and racing through speed limits will love Tesla’s thirst for the open road, but it’s the technology and attention to detail draped over every aspect of the Model S that should make it any gadget lover’s ultimate dream as well. The car industry is so riddled with rules, restrictions, and slow processes that even the coolest new features arrive severely outdated. It’s nice to see something new grace a dashboard once in a while.
Exterior
It’s your lucky day, you were just handed the key to a new Tesla Model S and have been told to go take it for a drive. The first thing you do is give it a once over and check out the outside of the car.
If you look closely at the top of the windshield you’ll see a couple of dots where the rear-view mirror connects. That’s a camera and forward-looking radar. These are new additions to support Tesla’s upcoming autopilot features, which give the car semi-autonomous driving abilities. The sensor in the windshield can see past the front of the hood, and there’s another radar sensor in the front bumper.
The camera is for street-sign spotting, lane recognition, and detecting other items in the road. As you pass speed limit signs, the car will read them and display the posted speed limit in the dashboard, right under your current cruising speed. It really takes the wind out of your sails when you try telling the police officer you didn’t know the speed limit as he pulls you over for speeding.
In the future, when the autopilot update is in all capable models, the car will have the ability to actually speed up or slow itself down automatically based on the speed limit sign recognition. It will also be able to stay inside your lane, no hands needed.
Around the entire vehicle, you’ll find ultrasonic sonar sensors which provide a complete 360-degree buffer able to detect other cars. These are similar to ones you’ll find on other cars—usually only found on the rear bumper for backup assistance.
If you pop open the hood on your trip around the Model S, you won’t find a killer, beefy engine; you’ll find a “frunk”. The front trunk, or frunk, is a spacious storage area made possible by the small electric motor being housed in the rear of the car. Having a large empty crumple zone is also one of the things that makes the car extremely safe.
Behind the front trunk, at the wheels, you’ll find the car’s regenerative braking mechanics. In a traditional car, braking turns momentum into friction and heat; you may have even seen puffs of smoke when someone braked too hard. That energy is essentially wasted. Tesla’s cars capture energy during braking and put it back into the system as small amounts of additional battery charge.
Inside, on the car’s dashboard you’ll notice a red and green line while driving. The red is how much energy the car is using, but when you begin to brake the line slides to green as, not only less energy is being used, but some is even pumped back into the system.
If you head to the back of the Model S you’ll find the rear trunk, just called the trunk, like almost every other car in existence. The motor is in the back, but it doesn’t eat into the trunk space, since it’s tiny compared to a standard combustion engine; it actually sits right on the frame underneath.
The electric motor is relatively small, but still incredibly powerful. There are different motors on different models, but the standard 85 kWh Model S uses a three-phase, four-pole AC induction motor with a copper rotor. It produces 380 horsepower and takes the car from 0 to 60 mph in 5.4 seconds.
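Those two quoted numbers are consistent with each other, which a quick unit conversion shows; the curb weight used here is an assumption for illustration.

```python
# Sanity-check the quoted specs: convert 380 hp and the 0-60 mph time into
# SI units. MASS_KG is an assumed curb weight, not a Tesla figure.

HP_TO_W = 745.7   # mechanical horsepower to watts
MASS_KG = 2100    # approximate curb weight (assumption)
V_60MPH = 26.8    # 60 mph in meters per second
T_0_60 = 5.4      # quoted 0-60 time in seconds

avg_accel = V_60MPH / T_0_60                         # average acceleration
avg_power_w = 0.5 * MASS_KG * V_60MPH**2 / T_0_60    # power for kinetic energy alone
motor_power_w = 380 * HP_TO_W

print(f"Average acceleration: {avg_accel:.1f} m/s^2")
print(f"Average power for KE alone: {avg_power_w / 1000:.0f} kW "
      f"of {motor_power_w / 1000:.0f} kW available")
```

Even ignoring drag and drivetrain losses, supplying the kinetic energy takes only around half the motor's rated output, so the 5.4-second figure is comfortably within reach.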
The battery is underneath the car as well and takes up most of the space. The entire pack is encased in a metal frame, which both protects it and makes it very quick to swap for a new one; Tesla has touted its ability to swap a car’s battery in only 90 seconds.
Finally ready to enter the car, you notice there are no key slots or door handles. What’s more, you’re not even holding a standard key, but a miniature version of a Model S. The electronic key fob unlocks the doors when you get close, and it also causes the door handles to pop out so they’re no longer flush with the car.
Even though the tiny Model S you’re holding in your hand is a bit of a novelty, it’s also cleverly constructed. Different areas of the fob can be pressed to activate those parts of the car. For instance, pressing on the trunk will open the car’s trunk. Pressing the hood will open the front trunk, and so on.
Interior
Now, getting in the Model S, you sit down and look around as you get ready to drive off. The first thing you’ll probably notice is the 17-inch touchscreen center console. This handles almost all of the car’s settings and adjustments. Not only does it make other cars’ touchscreen controls look small, it even makes an iPad and other tablets look small.
There’s an Nvidia Tegra 3 processor powering the large console which includes six main sections across the top. The first is music where you’ll find AM, FM, and satellite, but there’s also Slacker Internet radio which uses the car’s built in cellular connectivity. There’s of course also voice commands to find the music you’re looking for.
The next section is navigation, which uses Google Maps and can even do live traffic overlays. A dedicated GPS unit is on board as well, in case the car suddenly loses its 3G network connection. After that there’s an energy tab, which provides lots of real-time charts and data showing how the car is using battery energy. The speed and way you drive will affect how far each charge takes you.
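To make the driving-style point concrete, here's a rough sketch of how range scales with consumption. The 85 kWh pack size is from the spec above; the Wh-per-mile rates are illustrative assumptions, not Tesla data.

```python
# Rough range estimate for the 85 kWh pack at different driving intensities.
# The Wh/mile consumption rates are assumptions for illustration only.

PACK_KWH = 85  # usable pack size, from the standard Model S spec

def estimated_range_miles(wh_per_mile):
    """Range in miles for a given average consumption rate."""
    return PACK_KWH * 1000 / wh_per_mile

for style, rate in [("gentle", 280), ("mixed", 320), ("aggressive", 400)]:
    print(f"{style:>10}: ~{estimated_range_miles(rate):.0f} miles at {rate} Wh/mi")
```

The spread between a gentle and a heavy right foot is on the order of a hundred miles per charge, which is exactly the kind of feedback the energy tab's real-time charts are there to surface.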
There’s a built in web browser which you can set to take up the entire screen to browse the web. It doesn’t support Flash, however. The camera section allows access to the car’s built in rear HD camera which can be toggled on at any point and left on if you’re constantly curious about what’s happening behind you. Last but not least is a phone tab which will download and store contacts from any smart phone.
The dashboard in front of the steering wheel is also a large screen, completely customizable. Both the left and right sides can be changed to display anything from your music to turn-by-turn directions. A rotary knob on the steering wheel lets you adjust items on the screen without taking your hands off the wheel.
If you opt for the option, you can also control the car’s suspension from the main console. Since the Model S rides a little low, there’s the ability to raise and lower the car for speed bumps, rough roads, or pesky driveways. Better yet, the smart suspension is connected to GPS and will remember the moves you made, then adjust in those same locations again automatically.
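The location-memory behavior described above can be sketched as a simple proximity match: remember where the driver raised the car, and raise it again whenever the car comes back within some radius. Everything here, including the names, the match radius, and the data structure, is a hypothetical illustration, not Tesla's implementation.

```python
# Hypothetical sketch of GPS-remembered suspension settings: store the spots
# where the driver raised the car, and match by proximity on later drives.
import math

RAISE_POINTS = []    # (lat, lon) pairs where the driver raised the car
MATCH_RADIUS_M = 50  # how close counts as "the same spot" (assumption)

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance via the haversine formula."""
    r = 6371000  # Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def remember_raise(lat, lon):
    """Record a location where the driver manually raised the suspension."""
    RAISE_POINTS.append((lat, lon))

def should_raise(lat, lon):
    """Raise automatically if we're near a previously remembered spot."""
    return any(distance_m(lat, lon, p[0], p[1]) <= MATCH_RADIUS_M
               for p in RAISE_POINTS)

remember_raise(37.4419, -122.1430)       # driver raises at a steep driveway
print(should_raise(37.4419, -122.1430))  # True: back at the same spot
print(should_raise(37.5000, -122.2000))  # False: kilometers away
```

A real system would also need to consider heading and speed so the car doesn't raise itself when merely driving past a remembered spot, but the core idea is just a stored-location lookup.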
As you lean back and take it all in, you should hit play and give the 12 speakers (including an 8-inch sub) a listen. Streaming Internet radio might not provide the highest fidelity, but other music choices still have the opportunity to shine.
There’s no key slot, obviously, so the fob just needs to be inside the car for it to turn on. Since it’s an electric vehicle, there’s no engine noise so it can be a little alarming that you only need to have the key inside the car and then apply your foot to the break for everything to be on and ready to go.
Software
Most of the Model S’s killer features are evergreen. They don’t get outdated or stale because of the company’s ability to wirelessly provide software updates, just like your mobile phone.
A great example of this is the electric car’s creep feature. In a standard combustion-engine car with an automatic transmission, there’s constant forward movement unless the brake is applied. It’s probably something anyone who’s been driving for any amount of time doesn’t even notice and takes for granted. In an electric vehicle, however, you have to intentionally tell the car to go, by pressing the accelerator, to get it to move; otherwise it stays stationary.
Some people actually hated not having the slight rolling creep with no acceleration applied. It was a minor aspect that made it a lot harder to get used to driving the Model S. But because of the ability to apply updates to the system and how the car works, there’s now a setting which will allow you to turn on and off the creep feature with the touch of a button.
Having the car so reliant on the ability to get critical and miscellaneous software updates or patches means Internet access is vitally important. That’s part of the reason every car comes with integrated cellular wireless connectivity.
There’s also a Tesla app for mobile devices. With the app from your phone you can check the charging status, heat or cool the car before driving, locate and track the car driving in real-time, and also flash the lights or honk the horn remotely.
No smog checks, no oil changes, and fewer moving parts mean fewer trips to the dealership or mechanic. Tesla recommends only a single service checkup each year. The best part is that they can come to you and provide a loaner Tesla while they take yours to get checked out. The app comes in handy here to keep an eye on your vehicle as someone else takes it in for service.
Conclusion
The unfortunate part of Tesla, and especially the Model S, providing such an amazing experience is that at $60,000 and up, it’s out of a lot of people’s price range when they’re looking for a new car. It’s the ultimate gadget, but most will have to settle for dreaming about it.
If you can afford it, the upside is that it’s the only car you can buy online—like an electronic device from Amazon—and the purchase takes only a few minutes. I’m just not sure it qualifies for Amazon Prime-like two-day shipping.