Deepfakes have become a household term in many parts of the world. It generally refers to face-swapping technology with which strikingly realistic fake images and videos can be created quickly. Deepfakes are a data-driven technology built on machine learning algorithms: to create them, developers must first train computers to process and make sense of inputs given in the form of images. The developers of deepfakes took inspiration from the functioning of the human brain, where image processing occurs at two levels: early visual areas first gather information about the image, and higher cortical areas then process it and assign meaning to what is being seen. Applying similar principles, researchers created the Convolutional Neural Network (CNN). The technology on which deepfakes work is called a Generative Adversarial Network (GAN). A GAN pits two such networks against each other: one acts as a forger and the other as a detective (in technical terms, a generator and a discriminator). The forger analyses the millions of pixels in a set of images and picks up their distinctive attributes; if it has analysed images of cats, for example, it can then create a fake cat image. It sends this fake image to the detective, which tests it against real images and reports the errors and discrepancies back to the forger. The forger then revises the image and resubmits it for retesting. This cycle is repeated thousands of times until the fake closely resembles the real thing, and the whole process can be used to create highly convincing deepfake images and videos.
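The forger-and-detective loop described above is the standard generator/discriminator training loop of a GAN. The following is a minimal sketch of that loop in PyTorch; to stay self-contained it learns a toy two-dimensional distribution rather than face images, and all architectures and hyperparameters here are illustrative assumptions (real deepfake systems use large convolutional networks trained on huge face datasets):

```python
import torch
import torch.nn as nn

# Toy "real" data: samples from a 2D Gaussian, standing in for real images.
def real_batch(n):
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, 2.0])

# Forger (generator): maps random noise vectors to fake samples.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
# Detective (discriminator): outputs a logit scoring how "real" a sample looks.
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Detective's turn: learn to tell real samples from the forger's fakes.
    real = real_batch(64)
    fake = G(torch.randn(64, 8)).detach()  # don't update the forger on this turn
    loss_D = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Forger's turn: revise the fakes until the detective calls them "real".
    fake = G(torch.randn(64, 8))
    loss_G = bce(D(fake), torch.ones(64, 1))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()

print(G(torch.randn(5, 8)))  # after training, fakes should cluster near (2, 2)
```

Each pass through the loop corresponds to one round of the forger submitting a fake and the detective returning its verdict; repeated over thousands of rounds, the forger's output converges toward the real data distribution.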
Deepfakes raise serious concerns because of their ability to blur the line between the fake and the real. Although the technology has genuine creative and educational value, there are numerous examples of deepfakes being used in socially undesirable ways: non-consensual pornography, manipulated political campaigns, fabricated news reports, and violations of intellectual property rights (IPR). The central problem is that deepfakes become ubiquitous across the internet in a very short time and can have menacing consequences for everyone. They can invade an individual's autonomy and bodily privacy, producing humiliation and abuse in various forms, and they can be used to bully children. They can damage the reputation of an individual or an organisation, distort democratic discourse and influence election results. Beyond these harms, unintended consequences may follow: erosion of trust in institutions, deepening social divisions and polarisation, and damage to international relations and national security. Deepfake images and videos can also expose individuals and corporations to physical, mental or financial abuse, and can harm the economic interests of organisations through IPR violations.
Since deepfake videos and images are astonishingly realistic and the human eye finds it difficult to notice the difference, it is essential for both governments and content distribution platforms to develop deepfake detection technologies. An outright ban on deepfakes would not work well; as many cases of fake news have shown, such content cannot be contained by executive or legislative action alone. Content distribution platforms should instead carry legal obligations to control the spread of fake content, because they possess the technical expertise needed to build detection tools. The major obstacle, however, is the technology itself: the tools for creating deepfakes and the tools for detecting them are evolving in parallel, producing a cat-and-mouse arms race. Leaving deepfake detection solely in the hands of public and private institutions will therefore not be sufficient; in addition to legal measures, market-driven solutions are needed. A better approach would be to develop deepfake detection technologies on open-source platforms, building solutions on commonly shared standards to which anyone can contribute. Such an open ecosystem would help individuals and organisations devise ingenious and innovative detection methods, along the lines of the sketch below.
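As a concrete illustration of what an open-source detection effort might share, here is a minimal sketch of a real-versus-fake frame classifier in PyTorch. Everything here is an assumption for illustration: the input size, the small architecture and the random smoke-test data are placeholders, and a real detector would be trained on a labelled corpus such as FaceForensics++ and would still lag behind newly released generators.

```python
import torch
import torch.nn as nn

# A small CNN that takes a 3x64x64 RGB face crop and outputs one logit
# per image (positive = "fake"). Sizes in comments track the spatial dims.
detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 1),
)

opt = torch.optim.Adam(detector.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(images, labels):
    """images: (N, 3, 64, 64) float tensor; labels: (N, 1) with 1.0 = fake."""
    logits = detector(images)
    loss = loss_fn(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Smoke test on random tensors; real training data would come from a
# labelled real/fake face dataset.
print(train_step(torch.randn(8, 3, 64, 64), torch.randint(0, 2, (8, 1)).float()))
```

Publishing a baseline of this kind openly, together with training data standards, is what would let anyone contribute improved detectors as generation techniques advance.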
In the future, with further advances in machine learning and deep learning, we may see a democratisation of artificial intelligence in which intelligent algorithms replace most manual procedures and processes. Such democratisation will dramatically transform how data is collected and processed, and with more nuanced and detailed data the underlying algorithms will become smarter and more effective. Fraud detection will in turn become more precise and less ambiguous. Highly sophisticated AI systems will help detect and curb many types of fraud, including deepfakes, and may generate new paradigms in fraud detection: algorithms that not only flag fraud but also explain the reasons for their conclusions with supporting evidence, and that recommend concrete precautionary measures to prevent the proliferation of fraudulent schemes.
Nina Schick, in her book "Deepfakes and the Infocalypse", argues that deepfakes present an imminent danger to societies around the world. Fighting the misinformation they generate will require both proactive and reactive counter-strategies. The first step is to recognise and accept that the problem exists, because most people are unaware of what deepfakes make possible. We must also find a common language and terminology to address these issues effectively and coherently, because deepfakes often have not just local but global ramifications. It is apparent that deepfakes will become an everyday phenomenon, so supporting credible journalism will matter more than ever, and the role of fact-checking services will become pivotal in every society. With time, users themselves will need to guard against fake news and misinformation: rather than blindly trusting information from an online source, they should check whether the website is trustworthy. Simple desktop and mobile applications already exist for this purpose. NewsGuard, for example, is a browser plug-in that rates news websites on their trustworthiness and reliability, and state officials in several countries now use NewsGuard data to track disinformation and misinformation.
There is no doubt that the quality of deepfakes is improving dramatically, but in the larger picture deepfakes are just one tool of misinformation. In the future it will become genuinely difficult to distinguish real content from fake, giving malicious actors room to fish in murky waters by scamming, demeaning and exploiting people and businesses for personal gain. There is therefore an urgent need to take proactive action today against deepfakes and every other form of disinformation. Global cooperation is essential for building robust early-warning systems that protect people from the onslaught of deepfakes and related tools of disinformation. Governments, citizens and civil society must come together to restore and protect faith in institutions and establishments. The coming age is an Information Age, and a peaceful, contented society leaves no place for disinformation and misinformation.