I hope you remember a post of mine published on this site at the end of July this year, titled Transforming the meaning of evidence and truth. If not, go have a look, then watch this video, and read on below.
https://www.youtube.com/watch?v=-RetOjL1Fhw
“What use is there for this technology, you may be asking? Well, with Facebook’s involvement, it is quite possible that users will be able to animate their profile picture and cause it to react to stimuli on the social network at some point in the future,” writes PetaPixel, which is where I learned about this tech.
On their project page, the development team describes the work in their abstract:
“We present a technique to automatically animate a still portrait, making it possible for the subject in the photo to come to life and express various emotions. We use a driving video (of a different subject) and develop means to transfer the expressiveness of the subject in the driving video to the target portrait. In contrast to previous work that requires an input video of the target face to reenact a facial performance, our technique uses only a single target image. We animate the target image through 2D warps that imitate the facial transformations in the driving video. As warps alone do not carry the full expressiveness of the face, we add fine-scale dynamic details which are commonly associated with facial expressions such as creases and wrinkles. Furthermore, we hallucinate regions that are hidden in the input target face, most notably in the inner mouth. Our technique gives rise to reactive profiles, where people in still images can automatically interact with their viewers. We demonstrate our technique operating on numerous still portraits from the internet.”
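The "2D warps" the abstract mentions are easier to grasp with a toy example. The sketch below is not the researchers' method; it is a minimal backward warp in Python, where a made-up displacement field tells each output pixel where to sample from in the source image. The real system derives these displacements from a driving video and adds fine detail on top.

```python
import numpy as np

def warp_image(img, dx, dy):
    """Warp a grayscale image by a per-pixel displacement field.

    img: (H, W) array. dx, dy: (H, W) arrays giving, for each output
    pixel, the offset to sample from in the source (backward warping).
    Nearest-neighbor sampling, for simplicity.
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + dx).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + dy).astype(int), 0, h - 1)
    return img[src_y, src_x]

# Toy demo with an invented displacement field: sampling one row
# down everywhere shifts the image content up by one pixel.
img = np.arange(64, dtype=float).reshape(8, 8)
dx = np.zeros((8, 8))
dy = np.ones((8, 8))
warped = warp_image(img, dx, dy)
```

A facial-animation warp would use a smooth, spatially varying field (pulling mouth corners up for a smile, say) rather than this uniform shift, and the paper's added "fine-scale dynamic details" and hallucinated inner-mouth regions are exactly what a bare warp like this cannot produce.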
(You can download the project paper at this link on the project page.)
Well… let’s say you have pictures of your friends, or students, or teachers, or family, or government… and you have access to this technology, and you also have access to a machine-driven communication tool, such as the ones described in this post on Medium.com: “Five years from now you won’t have any idea whether you’re interacting with a human online or not. In the future, most online speech, digital engagement, and content will be machines talking to machines…This machine communication will be nearly indistinguishable from human communication. The machines will be trying to persuade, sell, deceive, intimidate, manipulate, and cajole you into whatever response they’re programmed to elicit. They will be unbelievably effective.”
Do you know about Interactive Dynamic Video?
Read about it at this web page http://www.interactivedynamicvideo.com/, and watch the other videos there. “One of the most important ways that we experience our environment is by manipulating it: we push, pull, poke, and prod to test hypotheses about our surroundings. By observing how objects respond to forces that we control, we learn about their dynamics. Unfortunately, regular video does not afford this type of manipulation – it limits us to observing what was recorded. The goal of our work is to record objects in a way that captures not only their appearance, but their physical behavior as well.”
Imagine the possibilities in the classroom! Class discussions without any input from the class! Oral exams without examiner or students! Or, perhaps, ideas more useful for teaching and learning like those described below.
The Virtual Holocaust Survivor. The video below shows a hologram, not the technology described above, but the effect on the viewer is somewhat the same – this is an interview with a man who is not there. (You can see more of Pinchas Gutter here and here.)
Highlands Ranch students use virtual dialogue with WWI kaiser to spark interest in history describes a history class which has built an AI Kaiser Wilhelm: “World history students at STEM School Academy in Douglas County built a historical figure head of Kaiser Wilhelm complete with artificial intelligence that can speak through Google…what (the head) can do is answer students’ questions, debate and even reason with them because its AI is stocked with deep historical background.
“The students researched both primary and secondary sources for information about the causes of WWI and its leaders, using the College Board’s AP World History Curriculum: Stearns’ “World Civilizations: The Global Experience.”
“They also went to reliable websites such as the British Library’s World War I site, BBC news’ “World War One: 10 Interpretations of Who Started WWI,” the World War I Document Archive, and the Library of Congress records on the Great War, Cegielski said.
“The AI, in this case Kaiser Wilhelm, can respond either in frustration, anger or be perfectly agreeable when talking about his role in history. It all depends on the question.” Read the whole article at this link, and watch this video from KUSA-TV in Denver to learn more about the why and how of this project.
https://www.youtube.com/watch?v=kBtvf3GvbCo