Preparing for the new threat of deepfakes
Advanced technology helps people stay connected and gives them influence in mass communication. In the past, communication was one-way; now it is two-way: receivers can create content of their own and communicate freely through their social media channels.
Although technology is the most important tool for communication, it is also a double-edged sword. If we use technology to create something positive, it benefits us all. However, when it is used negatively, to distort news, spread fake news, or circulate disinformation, it can cause personal and social damage: destroying brand image, undermining business development, and eroding public trust in institutions such as government and the media. Damage to trust in the media can in turn affect the government itself. The goal of some bad actors is precisely to lead everyone to question everything and trust nothing. This is especially dangerous now that artificial intelligence can create fake information at a high level, for example:
- Cloning a human voice, with tools such as Lyrebird or Baidu's Deep Voice
- Editing or removing objects from pictures, with tools such as Adobe Cloak
- Swapping one person's face, eyes, or mouth onto another, with tools such as Face2Face, FaceSwap, and Deep Video Portraits
A deepfake is another way of counterfeiting content: a photo, voice, video, or even an article. Examples include pasting a famous actress's face into adult videos, altering sentences in original videos, editing voices, and writing fake news that misleads readers. A deepfake can be so realistic that it becomes very hard to tell real from fake. Statistics on deepfake video growth report roughly 100% growth since 2018, reaching 14,678 deepfake videos, 96% of them adult videos.
So how do deepfakes work? Through deep learning algorithms whose core structure is the GAN (Generative Adversarial Network): a generator network produces synthetic data while a discriminator network tries to tell it apart from real data, and each improves by competing against the other. The same family of algorithms can also be put to good use, generating synthetic data for research and helping to detect fraud and deepfakes.
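To make the adversarial idea concrete, here is a minimal, hypothetical 1-D GAN sketch in Python: a linear "generator" learns to mimic samples drawn from a normal distribution with mean 4, while a logistic "discriminator" learns to tell real samples from fakes. All names, hyperparameters, and the target distribution are illustrative assumptions, not part of any real deepfake system.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator g(z) = a*z + b, fed noise z ~ N(0, 1)
a, b = 1.0, 0.0
# Discriminator d(x) = sigmoid(w*x + c), a logistic "real vs fake" score
w, c = 0.0, 0.0
lr = 0.05  # shared learning rate for this toy

for step in range(2000):
    real = rng.normal(4.0, 1.0, size=64)   # samples from the true distribution
    z = rng.normal(0.0, 1.0, size=64)      # generator input noise
    fake = a * z + b                       # generated (fake) samples

    # Discriminator ascent on: E[log d(real)] + E[log(1 - d(fake))]
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascent on the non-saturating loss: E[log d(fake)]
    d_fake = sigmoid(w * fake + c)
    g_signal = (1 - d_fake) * w            # d log d(fake) / d fake
    a += lr * np.mean(g_signal * z)        # chain rule through g(z) = a*z + b
    b += lr * np.mean(g_signal)

print(f"generator now produces samples centered near {b:.2f} (target mean 4.0)")
```

In a real deepfake system both networks are deep neural networks operating on images or audio, but the training loop follows this same pattern: the discriminator sharpens its detection, and the generator sharpens its forgeries in response.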
Knowing how it works, every organization, government agency, public-sector body, and member of the public should be ready to deal with deepfakes in the following ways:
1. Self: Start with yourself by being more careful and thinking twice before posting or sharing information on social media.
2. Social/global inclusion: Deepfakes are a national and international threat, so everyone should follow news on social media and receive and share it carefully. We need laws to protect victims of deepfakes, and international cooperation so that legal proceedings can reach both local and international criminals.
3. Research: Pursue cross-disciplinary cooperation with leading organizations, aiming to train more experts, share the know-how for detecting deepfakes, and spread awareness through society.
4. Collaboration: Build collaboration among mass media, society, platform owners, government, and the private sector, so that all parties know about the threats and can help one another.
5. Ethics: Encourage AI developers and users to consider ethics, and set standards for AI use in education, government, and the private sector.
In addition to developing and researching new technology, we should consider the risks and effects, both positive and negative, that it may have on society. Aviv Ovadya, founder of the Thoughtful Technology Project, and Jess Whittlestone of the Leverhulme Centre for the Future of Intelligence offer the following suggestions:
1. Improve understanding of risks and risk strategy
a) Overcome language barriers and standardize terminology so that problems such as risk management and threats can be discussed clearly
b) Meet with experts to evaluate the risks of machine learning (ML) research
c) Make short- and long-term plans for dealing with unethical uses of research
2. Set standards that help researchers understand the effects of ML research
a) Arrange workshops on the challenges of research dissemination
b) Raise awareness of the dangers of ML research, including to potential victims and risk managers
c) Support evaluation of both the positive and negative effects of research
3. Support institutions and systems related to ML research
a) Support evaluating research before work begins, in order to reduce the harm the research might cause
b) Create review processes that allow other researchers to check the work
c) Develop safer dissemination processes for research that carries risk
Although we can prepare ourselves to deal with deepfakes, some points remain to think about:
1. Using AI to detect deepfakes is only a partial answer, because deepfakes take many forms: mimicry, novel synthesis, fake news, and so on. Media and social media platforms can screen content, but the most important defenses are knowledge and people cooperating to check and report.
2. Deepfake-detection technology must be made available to every group, especially the most vulnerable people.
3. Detection usually happens only after the damage is done, which is too late for the victim, so we need laws that protect victims. Deepfakes may also be used as a weapon by other countries to confuse a population and make it lose trust.
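As one small illustration of the platform-screening idea in point 1, the sketch below flags an image whose content has been locally edited by comparing perceptual "average hashes". This is a hypothetical toy on synthetic data: real deepfake detectors are trained neural networks, and a simple hash like this would only catch crude edits.

```python
import numpy as np

def average_hash(img, size=8):
    """Perceptual hash: block-average the image, then threshold at the mean."""
    h, w = img.shape
    img = img[: h - h % size, : w - w % size]   # crop to a multiple of `size`
    blocks = img.reshape(size, img.shape[0] // size,
                         size, img.shape[1] // size).mean(axis=(1, 3))
    return blocks > blocks.mean()               # boolean size x size bit grid

def hamming(h1, h2):
    """Number of differing hash bits; 0 means perceptually identical."""
    return int(np.count_nonzero(h1 != h2))

# Synthetic 64x64 "photo": a simple left-to-right brightness gradient.
original = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))

# Simulate a local manipulation (e.g. a pasted face region) in a dark area.
edited = original.copy()
edited[8:32, 8:32] = 1.0

h_orig, h_edit = average_hash(original), average_hash(edited)
print("distance original vs itself :", hamming(h_orig, h_orig))
print("distance original vs edited :", hamming(h_orig, h_edit))
```

A platform could compare the hash of an uploaded video frame against hashes of known originals and route large-distance matches to human reviewers, which fits the point above that technology only screens, while people still verify and report.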
Nowadays we live in a world where a person can have many identities, in the real world and in virtual worlds such as cyberspace, virtual reality, and augmented reality. Much of what we see is made and synthesized for education and entertainment, but some of it is made to disrupt and undermine people's trust and hope in society. Therefore everyone, including children, adults, families, communities, and organizations, must build awareness of how to use technology ethically, in order to reduce the risk of technology being used the wrong way.
If we don’t start with ourselves, who will?
References:
- 2019 Brand Disinformation Impact Study
- The biggest threat of Deepfakes isn’t the Deepfakes themselves
- Social Engineering And Sabotage: Why Deepfakes Pose An Unprecedented Threat To Businesses
- Text-based Editing of Talking-head Video
- Researchers, scared by their own work, hold back “Deepfakes for text” AI
- Reducing malicious use of synthetic media research: Considerations and potential release practices for machine learning
- Prepare, Don’t Panic: Synthetic Media and Deepfakes