This morning I read a post at Engadget titled “Researchers make a surprisingly smooth artificial video of Obama,” subtitled “Their program grafts audio-synced mouths onto existing videos.” The post describes the process used by the University of Washington researchers:
“The researchers used 14 hours of Obama’s weekly address videos to train a neural network. Once trained, their system was then able to take an audio clip from the former president, create mouth shapes that synced with the audio and then synthesize a realistic looking mouth that matched Obama’s. The mouth synced to the audio was then superimposed and blended onto a video of Obama that was different from the audio source. To make it look more natural, the system corrected for head placement and movement, timing and details like how the jaw looked. The whole process is automated save for one manual step that requires a person to select two frames in the video where the subject’s upper and lower teeth are front-facing and highly visible. Those images are then used by the system to make the resulting video’s teeth look more realistic.”
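The process in the quote breaks down into three stages: predict mouth shapes from audio, synthesize a realistic mouth texture, then blend it onto a different target video. As a purely illustrative sketch (this is not the researchers' actual code; every function and variable name below is invented, and trivial placeholders stand in for the trained network and the compositing step), those stages could be outlined like this:

```python
# Illustrative outline only: the UW system's real components (the trained
# neural network, texture synthesis, 3D pose correction) are replaced by
# trivial string-building placeholders. All names here are hypothetical.

def predict_mouth_shapes(audio_frames):
    # Stage 1: a network trained on ~14 hours of weekly-address video
    # maps each audio frame to a mouth shape.
    return [f"shape_for({a})" for a in audio_frames]

def synthesize_mouth_texture(mouth_shapes):
    # Stage 2: synthesize a realistic-looking mouth for each shape.
    return [f"texture({s})" for s in mouth_shapes]

def composite(target_video_frames, mouth_textures, teeth_frames):
    # Stage 3: superimpose and blend each mouth onto the target video,
    # correcting head placement, timing, and jaw detail. The two manually
    # selected teeth_frames are used to sharpen the rendered teeth.
    return [
        f"blend({v}, {m}, teeth={len(teeth_frames)})"
        for v, m in zip(target_video_frames, mouth_textures)
    ]

def lip_sync(audio_frames, target_video_frames, teeth_frames):
    shapes = predict_mouth_shapes(audio_frames)
    textures = synthesize_mouth_texture(shapes)
    return composite(target_video_frames, textures, teeth_frames)
```

Note that only the teeth-frame selection is manual in the described system; everything else in the pipeline runs automatically.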
Read the post, and watch the video below to see the result.
“Published on Jul 11, 2017. A new tool developed by computer vision researchers at the University of Washington Paul G. Allen School of Computer Science & Engineering creates realistic video from audio files alone. In this example, the team created realistic videos of Obama speaking in the White House, using the audio file from a television talk show and during an interview decades ago.”
The news story by Jennifer Langston, Lip-syncing Obama: New tools turn audio clips into realistic video, is on the University of Washington’s web site, and the details of the project are at this link. A story by Kurt Schlosser at GeekWire, UW’s lip-syncing Obama demonstrates new technique to turn audio clips into realistic video, goes into a bit more scientific detail about the process. Schlosser writes that “By turning audio clips into realistic-looking lip-synced video, the implication is that a moving face could be applied to historic audio recordings or be used to improve video conferencing.”
(Watch the video below, and read the notes on its YouTube page.)
We’ve written on this blog before about fake and real news and images. (See Can that be real? and Alternative Facts.) The news media are full of claims and counter-claims about the “true facts” in almost every area of our lives. There are TV shows to help you sort fake viral videos from real ones, among them Britain’s Channel 4 series Real, Fake or Unknown: “Of all the intriguing, shocking and extreme videos on the web, how do we know which are real? Real, Fake or Unknown works out how the web’s most-watched clips were made.”
My guess is that many of you reading this post have already made videos just like the Obama one discussed above. Do you have a smartphone? A pet? A child? A friend?
Crazytalk: “You can make this photo do and say whatever you want. The pranking possibilities are endless.”

My Pet Can Talk: “Add a photo and speak into microphone, you’ll get a lively talking pet in video.”

My Talking Pet: “My Talking Pet brings photos of your favorite pet to life! Use it for any animal… or maybe someone you know?”
Why is this technology important to the IB community? Perhaps a quick reference to the novel 1984 and its Ministry of Truth is in order, along with a connection to TOK.

“…Orwell saw, to his credit, that the act of falsifying reality is only secondarily a way of changing perceptions. It is, above all, a way of asserting power.” (The New Yorker)
“Who controls the past controls the future: who controls the present controls the past,” repeated Winston obediently. “Who controls the present controls the past,” said O’Brien, nodding his head with slow approval. “Is it your opinion, Winston, that the past has real existence?” (1984, Book 3, Chapter 2, pp. 39–40)
We’ll just check with Mr. Obama about when he said some of the words quoted in the video above. Wasn’t it in 1990?