Zuckerberg's hour-long "in-person" conversation in the Metaverse renders even individual strands of hair

**[Introduction]** The world of "Ready Player One" is right at your fingertips! Over the weekend, Zuckerberg held an hour-long "in-person conversation" in the Metaverse that left the host sighing that he had almost forgotten the person in front of him was not real.


Just yesterday, well-known American podcast host Lex Fridman sat down with Zuckerberg for an hour-long "face-to-face" chat about the Metaverse, and it stunned viewers around the world.

Partway through the chat, Fridman said bluntly, "I almost forgot that the person in front of me isn't real."

Wearing Meta's headsets while sitting far apart, the two were able to reproduce each other's facial expressions and movements on their avatars with striking realism.

Behind this is Codec Avatars, a technology Meta first unveiled in 2019 for easily creating lifelike virtual avatars. Today it needs only a phone scan to capture the subtle variations in a person's expressions.

Some netizens said that, never mind the two participants, even viewers got so immersed that nine minutes in they suddenly felt they were watching two real people talk!

It even made former Google scientist David Ha rethink his skepticism toward the "Metaverse."

After roughly 13 months, Zuckerberg's "true love" for the Metaverse finally seems to be paying off.

Since 2021, Meta's metaverse division has poured in, and lost, tens of billions of dollars, but it has finally let people see the world of "Ready Player One" come a step closer.

Next, let's look at the highlights of the conversation between Zuckerberg's and Lex's virtual avatars.

Interview transcript

The moment their avatars appeared, Fridman and Zuckerberg's interview in the Metaverse began.

Although one was in California and the other in Austin, Texas, Codec Avatars and stereoscopic 3D made the two appear to be sitting directly face to face as they began a conversation that may go down in history.

When Fridman adjusted the position of the virtual light source, both of them could clearly see the lighting change.

Everything else around the two of them was pitch black.

Looking at each other's sharply rendered faces and vivid expressions, it genuinely felt as though the whole thing were happening in a room with the lights off.

Fridman's gut reaction was that it was too real, almost unsettlingly so.

In such an environment, the hour-long interview began.

The interview covered Zuckerberg's thoughts on the Metaverse and a discussion of what "reality" really is. What drew the most attention were his views on combining AI with the Metaverse and his plans for the future of Meta AI.

Three years of full-body simulation

In Zuckerberg's view, AI will play a very important role in the Metaverse going forward.

Highly capable superintelligent AI will certainly arrive one day, but there will also be many smaller AI tools that let people handle all kinds of tasks conveniently.

He gave Fridman's podcast as an example: podcasters need to stay engaged with their community as much as possible, but no host can do that 24 hours a day without a break.

If an AI could be built in the Metaverse to keep a host's fan community lively and respond to the various requests fans raise, hosts could accomplish things that were previously impossible.

Meta hopes such AI will appear not only in the Metaverse but also on existing platforms, helping podcasters and influencers maintain their own communities of fans and users.

Meta plans to release this capability as soon as possible to empower more content creators.

Going further, Meta AI will appear in more and more places across the Metaverse, chatting with users and offering them help.

Different AI characters with different personalities will show up in the Metaverse, giving users a rich and varied experience.

The AI behind these characters is now in the final stages of preparation; Meta wants to make them more predictable and safe.

Beyond giving ordinary users a better experience in the Metaverse, AI can also deliver serious, professional services to businesses, or to customers on a business's behalf.

In Metaverse games, AI can make NPCs far more engaging. Meta is developing a murder-mystery-style game hosted by its Snoop Dogg AI character, and the AI makes a genuinely funny, entertaining game master.

Llama 3 on the way

Fridman went on to ask Zuckerberg about the current state of Meta AI. On Llama 2 and the future Llama 3, Zuckerberg answered everything and kept dropping news.

On his previous appearance on Fridman's podcast, Zuckerberg had discussed whether to open-source Llama 2, and he was very glad that Meta ultimately did so.

In Zuckerberg's view, the value of open-sourcing a foundation model like Llama 2 far outweighs the risks.

Zuckerberg said that before the release, Meta spent a long time on rigorous evaluation and red-team exercises. Since then, Llama 2 has been downloaded and used more than he expected.

As for Llama 3, there will definitely be one. But having open-sourced Llama 2, Meta's priority now is integrating it into consumer products.

Llama 2 itself is not a consumer product; it is more like infrastructure that people build things on. So the focus now is continued fine-tuning, so that Llama 2 and its various versions can serve consumer products well.

Hopefully one day hundreds of millions of people will enjoy using these products.

Meta is also working on its next foundation models. There isn't much to reveal yet, but whatever comes will, like Llama 2, only be released after rigorous red-team testing.

Zuckerberg also hopes that when Llama 3 takes shape, Meta will keep it open source, though nothing is finalized yet, since the next-generation foundation model is still a long way from release.

Open-source models let people experience first-hand what the models can do; Zuckerberg himself, for example, is hooked on chatting with the various AI characters, which he finds thrilling.

Future Life of Humanity

As for human life in the future, Zuckerberg said the Metaverse will be everywhere!

The simplest analogy is the telephone: in the future, people will step into realistic interactions with the virtual world as casually as they make phone calls today.

Two people will be able to hold a conversation like this one anytime, anywhere; apart from not actually sitting in the same room, such a meeting is no different from talking face to face.

From a philosophical point of view, the essence of the real world is the combination of what we can perceive and what actually exists.

The better the digital world can reproduce that combination, the richer and more capable it becomes.

Finally, Fridman asked Zuckerberg: were you sitting on a beach while chatting with me?

Zuckerberg said no, he was sitting in a conference room.

Fridman replied: what a pity, I was on the beach and wasn't wearing any pants. Lucky for you that you couldn't see what I really looked like.

Codec Avatars: one phone, and your avatar is ready

In fact, the stunning technology seen in the podcast video was developed by Meta as far back as 2019.

It is Codec Avatars.

To achieve true interaction in the Metaverse, lifelike avatars are the crucial breakthrough that opens the door.

The Codec Avatars project aims to build a system that can capture and render photorealistic avatars for use in XR.

The project began with high-quality avatar demos and has since progressed to full-body avatars.

At the Connect 2021 conference, researcher Yaser Sheikh demonstrated the team's latest milestone: full-body Codec Avatars.

These avatars support more complex eye movements, facial expressions, and hand and body poses.

Meta also showed avatars whose hair and skin render realistically under different lighting conditions and environments.

The story of Codec Avatars goes back nine years.

In 2014, Yaser Sheikh, who ran Panoptic Studio, a 3D-capture lab at Carnegie Mellon University's Robotics Institute, met Oculus chief scientist Michael Abrash, and the two hit it off immediately.

Left: Michael Abrash; Right: Yaser Sheikh

Yaser Sheikh joined Meta in 2015 and has been leading the Codec Avatars research team since then.

"To create a realistic avatar, the basis is measurement," said Tomas Simon, a research scientist on Codec Avatars.

"What makes avatars look real is precise data, which requires good measurements. So the key to building realistic avatars is finding a way to measure the physical details of human expressions, such as the way a person squints their eyes or wrinkles their nose."

The Codec Avatars team at Meta's Pittsburgh lab uses two main modules to measure and reproduce human expressions: an encoder and a decoder.

Now, all it takes is a phone to accurately capture facial-expression data and reconstruct a faithful likeness directly in the Metaverse.
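To make the encoder-decoder idea concrete, here is a minimal, purely illustrative sketch in PyTorch: the encoder compresses a captured face image into a small latent "expression code," and the decoder reconstructs the avatar's appearance from that code on the receiving side. The class names, layer sizes, and data shapes are all assumptions for illustration; Meta's actual Codec Avatars pipeline is far more sophisticated.

```python
# Illustrative encoder-decoder sketch (assumed architecture, not Meta's code).
import torch
import torch.nn as nn


class ExpressionEncoder(nn.Module):
    """Compresses a 256x256 face image into a small latent 'expression code'."""

    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 256 -> 128
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Flatten(),
            nn.Linear(128 * 32 * 32, latent_dim),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.net(image)


class AvatarDecoder(nn.Module):
    """Reconstructs an avatar image from the latent expression code."""

    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 32 * 32)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 64
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 128
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 128 -> 256
        )

    def forward(self, code: torch.Tensor) -> torch.Tensor:
        x = self.fc(code).view(-1, 128, 32, 32)
        return self.net(x)


# Only the compact code needs to travel over the network; the receiving headset
# runs the decoder locally to re-render the sender's expression in real time.
encoder, decoder = ExpressionEncoder(), AvatarDecoder()
frame = torch.rand(1, 3, 256, 256)   # toy stand-in for one captured camera frame
code = encoder(frame)                # e.g. a 256-dimensional expression code
reconstruction = decoder(code)       # the avatar frame rendered on the far side
```

The appeal of such a split, at least in this sketch, is bandwidth: instead of streaming video, each party streams only a few hundred numbers per frame, while the photorealism lives in the decoder stored on the receiving device.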

Netizens exclaimed: Uncanny Valley

Netizens who watched the podcast were amazed by the effects in the video.

Jim Fan, a senior scientist at Nvidia, said:

This @lexfridman episode will go down in history as the first podcast recorded via avatar video conferencing.

In the next 3-5 years, we will fully cross the "uncanny valley" of avatars and simulation.

I have worked on avatar agents throughout my career. Our ultimate vision is a scene out of The Matrix: full-body, real-time avatars of humans and AIs sharing the same virtual space, interacting with objects in lifelike ways, receiving rich multi-modal feedback, and forgetting that the world is just a simulation.

While avatars currently have to be scanned with specialized equipment, Zuckerberg hinted that a smartphone selfie video will soon be enough.

Given recent advances in 3D generative models, I think this will be possible within a few months. Fine-grained finger tracking and full-body tracking will be the next goals.

How did we go from three-pixel avatars to this? I'm genuinely hyped!

Last year Meta spent 2.6 billion US dollars on advertising and marketing, and this one podcast did more than all that money. Lex, you should ask Zuckerberg to hand that budget to you!

Despite some small glitches in eye tracking, the expressions are rendered so accurately that you forget it is really just an avatar. The future is already here!

No wonder Musk couldn't find Zuckerberg; it turns out he was hiding in here!

Finally, the original video of the interview is here.
