Welcome to our blog

Explore our blog for impactful resources, insightful articles, personal reflections and ideas that inspire action on the topics you care about. 

FACEBOOK CAN MAKE VR AVATARS LOOK—AND MOVE—EXACTLY LIKE YOU

"There's this big, ugly sucker at the door," the young woman says, her eyes twinkling, "and he said, 'Who do you think you are, Lena Horne?' I said no, but that I knew Miss Horne like a sister."

It's the beginning of a short soliloquy from Walton Jones' play The 1940's Radio Hour, and as she continues with the monologue, it's easy to see that the young woman knows what she's doing. Her smile grows while she goes on to recount the doorman's change of tune—like she's letting you in on the joke. Her lips curl as she seizes on just the right words, playing with their cadence. Her expressions are so finely calibrated, her reading so assured, that with the dark background behind her, you'd think you were watching a black-box revival of the late-’70s Broadway play.

There's only one problem: her body disappears below the neck.

Yaser Sheikh reaches out and stops the video. The woman is a stunningly lifelike virtual-reality avatar, her performance generated by data gathered beforehand. But Sheikh, who heads up Facebook Reality Labs' Pittsburgh location, has another video he considers more impressive. In it, the same woman appears wearing a VR headset, as does a young man. Their headsetted real-life selves chat on the left-hand side of the screen; on the right side, simultaneously, their avatars carry on in perfect concert. As mundane as the conversation is—they talk about hot yoga—it's also an unprecedented glimpse at the future.

For years now, people have been interacting in virtual reality via avatars, computer-generated characters who represent us. Because VR headsets and hand controllers are trackable, our real-life head and hand movements carry into those virtual conversations, the unconscious mannerisms adding crucial texture. Yet even as our virtual interactions have become more naturalistic, technical constraints have forced them to remain visually simple. Social VR apps like Rec Room and AltspaceVR abstract us into caricatures, with expressions that rarely (if ever) map to what we're really doing with our faces. Facebook's Spaces is able to generate a reasonable cartoon approximation of you from your social media photos but depends on buttons and thumbsticks to trigger certain expressions. Even a more technically demanding platform like High Fidelity, which allows you to import a scanned 3D model of yourself, is a long way from being able to make an avatar feel like you.

That's why I'm here in Pittsburgh on a ridiculously cold early March morning, inside a building very few outsiders have ever set foot in. Yaser Sheikh and his team are finally ready to let me in on what they've been working on since they first rented a tiny office in the city's East Liberty neighborhood. (They've since moved to a larger space on the Carnegie Mellon campus, with plans to expand again in the next year or two.) Codec Avatars, as FRL calls them, are the result of a process that uses machine learning to collect, learn, and re-create human social expression. They're also nowhere near being ready for the public. At best, they're years away—if they end up being something that Facebook deploys at all. But Sheikh and his colleagues are ready to get this conversation started. "It'll be big if we can get this finished," Sheikh says with the not-at-all contained smile of a man who has no doubts they'll get it finished. "We want to get it out. We want to talk about it."

In the 1949 essay "The Unconscious Patterning of Behavior in Society," anthropologist Edward Sapir wrote that humans respond to gestures "in accordance with an elaborate and secret code that is written nowhere, known by none, and understood by all." Seventy years later, replicating that elaborate code has become Sheikh's abiding mission.

China may overtake the US with the best AI research in just two years

The number of influential AI research papers coming from China is increasing rapidly, a data analysis shows.
by Will Knight, March 13, 2019


White House plans to cut funding for science couldn’t come at a worse time for the country’s ambitions to lead the world in artificial intelligence.

The most detailed analysis of Chinese AI research papers yet suggests that China is gaining on the US more quickly than previously thought.

China’s vibrant tech scene has produced a number of recent breakthroughs, and the government has launched a major initiative to dominate the development of the technology within a matter of years (see “China’s AI awakening”).

Still, it isn’t easy to measure progress in a broad and complex area of technology like artificial intelligence. Previous studies have shown that China already produces more research papers mentioning AI terms like “deep learning” than the US does. But it has always been difficult to ascertain the quality of that research.

The new study aims to solve that problem. It comes from the Allen Institute for Artificial Intelligence (Ai2), a Seattle nonprofit created by Microsoft co-founder Paul Allen and focused on fundamental AI research. The institute previously created a tool, called Semantic Scholar, that uses artificial intelligence to make it easier to search and analyze scientific research papers published online.

Using this tool, Ai2’s researchers examined not just the number of AI research papers coming from China but the quality of those papers—as judged by the number of citations they receive in other work. The study suggests that China will overtake the US in the top 50% of most-cited research papers this year, in the top 10% in 2020, and in the top 1% by 2025.
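To make that methodology concrete, here is a minimal sketch (in Python) of how a citation-percentile comparison of this kind could be computed from a list of papers with citation counts and author-affiliation countries. The field names and sample figures below are hypothetical illustrations, not Ai2's data; the institute's actual analysis, built on Semantic Scholar, is far more involved.

# Minimal sketch of a citation-percentile analysis like the one described above.
# The data structure and sample numbers are hypothetical; a real pipeline would
# handle affiliation resolution, deduplication, and a much larger corpus.

def share_of_top_papers(papers, country, top_fraction):
    """Fraction of the top `top_fraction` most-cited papers with an author from `country`."""
    ranked = sorted(papers, key=lambda p: p["citations"], reverse=True)
    cutoff = max(1, int(len(ranked) * top_fraction))
    top = ranked[:cutoff]
    return sum(1 for p in top if country in p["countries"]) / cutoff

papers = [
    {"title": "A", "citations": 950, "countries": {"US"}},
    {"title": "B", "citations": 720, "countries": {"CN"}},
    {"title": "C", "citations": 310, "countries": {"CN", "US"}},
    {"title": "D", "citations": 40,  "countries": {"CN"}},
]

for frac in (0.01, 0.10, 0.50):
    us = share_of_top_papers(papers, "US", frac)
    cn = share_of_top_papers(papers, "CN", frac)
    print(f"top {frac:.0%}: US {us:.0%}, CN {cn:.0%}")

Running the same comparison year by year, and watching where the two countries' shares cross, is essentially how a projection like "China overtakes the US in the top 1% by 2025" is made.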

“Our economy and security have benefited greatly from the cutting-edge research being homegrown in our universities and research institutes,” says Oren Etzioni, CEO of Ai2 and a leading AI researcher. “We need to urgently increase AI research funding, and commit to visas for AI students and experts.”

But there are reasons to be cautious about this research, too.

Kai-Fu Lee, a prominent Chinese AI investor who previously established both Microsoft’s and Google’s outposts in China, says the study may overstate things a bit. “There’s definitely momentum,” he said by phone from Beijing, “but the time horizon is farther out.”

Lee, the author of a recent book on AI in China, AI Superpowers: China, Silicon Valley, and the New World Order, says the US still boasts the vast majority of the world’s most influential scientific thinkers, as measured by individual citation counts (the US is also still far ahead of China in terms of “best paper awards” at major conferences). “Had there not been a Geoff Hinton or a Yann LeCun,” he says, referring to two AI researchers based in Canada and the US, respectively, “would there have been deep learning?”

Lee is also hopeful that people don’t lose sight of overall scientific progress when talking about the “AI race” between the US and China.

Etzioni agrees that openly published scientific research can benefit everyone, regardless of its country of origin, but he believes the study should serve as a wake-up call to the US government. “If we move to second place, will the next Google be founded here or in China?” he says. “It’s not a zero-sum game, but it isn’t a picnic either.”

GIFEC Participates In Ghana Code Day

The Ghana Investment Fund for Electronic Communications (GIFEC) participated in the Ghana Code Day celebration held at Zenith University College on Saturday, May 27, 2017.

The technology workshop, dedicated to elementary school children in the La Dade-Kotopon municipality, drew over 300 participants. Participating schools included La-Wireless 1-5 Basic and Junior High School, Nativity Presbyterian Junior High School, and St Morries Junior High School, among others.

The event was organized by Ghana Code Club, a non-profit organization with the objective of exposing children between the ages of 8 and 16, especially girls, to basic computer skills while they learn to make their own games and animations and build their own websites.

Hon. Vincent Sowah Odotei, Deputy Minister of Communications and guest speaker at the event, said in his remarks that the children's coding programme ties in with the government's policy of building digital literacy in communities across the country.

He said the programme will help the children become active players in the digital world and urged all to take ICT seriously and embrace it, as the world has evolved into a technological one. "Our generation should become not only consumers of technology but producers of technology," he added.

He also mentioned that the Ministry will redevelop the 194 Community Information Centres (CICs) it has built across the country and ensure that these centres are connected to the internet and furnished with the needed computers, in line with the government's quest for universal access and digital literacy. He then advised all to come on board and produce ICT products that will enable the children to build careers and become part of the technological world.

Some participants were taken through web design and animation lessons and also interacted with established tech partner organizations.

The sponsors included EPP Book Services, Tigo, Ispace, LETiARTS, McAforo, and mug.
