28 February 2022, NIICE Commentary 7667
Aditya Kant Ghising

The advent of the 21st century has heralded a unique era of significant change in individuals’ ways of life, the nature of state governance, the functioning of economies and entrepreneurship, and the reach and role of non-state actors. Globalization 4.0 has given shape to new realities defined by the way individuals, businesses and governments interact with one another today. It is often argued that the progress humanity has made in the first two decades of this century rivals that of the last century in its entirety. The next step in this trajectory is the rise of Artificial Intelligence (AI), the seeds of which were sown at a conference in 1956 where Allen Newell, Cliff Shaw and Herbert Simon presented their ‘Logic Theorist’ program, which aimed to mimic the problem-solving skills of human beings. The credit for coining the term ‘artificial intelligence’, however, goes to John McCarthy.

The New Oxford American Dictionary (Third Edition) defines Artificial Intelligence as ‘the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.’ On a more practical level, a supercomputer operating at 36.8 petaflops (by some estimates, about twice the processing speed of a human brain) embodies a machine intelligence that improves significantly over time. Over the decades, the concept has fascinated scientists, researchers, scholars, journalists, science-fiction filmmakers and the general public alike. Humans have a natural tendency to fear what they do not understand, and Artificial Intelligence remains one such innovation that many fail to understand well, yet rely on daily in the form of smartphones, wearables, smart home devices, automated vehicles and the like. The repercussions of such uses are multi-faceted, and the general risks are as yet unknown. The rapid growth of the AI industry has affected not only the way humans live today, but also employment patterns, government-citizen interaction models, employee-employer relations, innovations in agricultural and healthcare technology, and the future of the world at large. This is evidenced by the explosion in machine-learning patents between 2004 and 2013, most of which are held by major players such as Microsoft, Yahoo, IBM, Google and the Nippon Telegraph and Telephone Corporation. The recent mushrooming of virtual and augmented reality platforms is bound to have a marked impact on individuals’ ways of life, and thereby on their patterns of interaction with society at large.
The onset of the COVID-19 pandemic has further accelerated society’s shift toward the digital realm, with most sectors of the economy relying on the availability and use of the internet. This, in turn, has led to significant changes in migration and employment patterns in many parts of the world.

The core idea behind the rising use of this technology, especially from a business perspective, is to minimize the risks associated with human error, paving the way for ground-breaking new research in the field. The discourse on the effectiveness, as well as the ethics, of Artificial Intelligence grows with each passing day, with government agencies and private entrepreneurs supporting the development and use of AI-based technologies while laypeople often argue against them from a doomsday perspective. Recent reports of the Chinese government ramping up surveillance of its citizens, relying heavily on data mining and other applications of artificial intelligence, have been described by many as a move that would eventually lead to a system of digital totalitarianism. Given the level of inter-connectedness between nation-states under the current global order, such policies can be replicated by other regimes if the end results prove satisfactory from a governance perspective. Western liberal democracies have also been known to employ similar technologies to curb crime in recent years. On the other hand, AI technology also has the potential to fuel the future engines of economic growth and the services industry. Several major governments, as well as transnational economic players, are actively or quietly funding such research. The goals in each case are diverse yet often overlapping, ranging from increasing efficiency in product manufacturing and distribution to harnessing data mining and storage to facilitate easier and faster policy implementation at various levels of government.

Home to some of the world’s most populous countries, the Third World, or Global South, may witness drastic readjustments in the emerging AI-driven world. This is underscored by AI’s effects on, among other things, value- and supply-chain dynamics in business and on growing demands for the regulation of digital and crypto-currencies in the larger economic arena. In the realm of governance, the future of international relations and strategic affairs is set to be shaped by the digital revolution and the rising popularity of e-governance models among states in the international order. This also necessitates understanding the developing issues of cyber-crime and cyber-warfare, which increasingly rely on AI-based algorithms. At the same time, the benefits of this technology have been noteworthy, and further advances in the various AI subfields are expected to bring greater economic and social benefits in the future. Communications, healthcare, disease control, education, agriculture, transportation (autonomous vehicles), space exploration, science and entertainment are just a few of the areas already benefiting from breakthroughs in AI technology. A comparative assessment based on a cost-benefit analysis is therefore crucial to providing a holistic narrative on AI and its power to influence the future.

Genesis of Artificial Intelligence as a Concept

John von Neumann, a mathematician, is regarded as the intellectual forebear of artificial life. His contribution to the study of what he called ‘automata’, self-operating entities encompassing both machines and biological organisms, has been immense. The late 1960s saw this concept evolve through the creation of computer games. With its emphasis on a bottom-up approach, artificial life is considered to sit at the opposite end of the spectrum from artificial intelligence, which is essentially based on a top-down approach of replicating human actions and patterns while solving complex assignments. Some scholars also point to Charles Babbage’s ‘difference engine’ and ‘analytical engine’ when discussing the genesis of artificial intelligence. Alan Turing, the British mathematician and codebreaker, is another figure often credited with an important contribution to the idea of artificial intelligence, especially given his key role in developing symbolic computation, which is crucial to modern efforts at understanding and building artificial intelligence technology. John McCarthy is credited with coining the term ‘artificial intelligence’ while writing a proposal for a conference at Dartmouth College in 1956. At this conference, McCarthy and a group of researchers and scientists discussed their findings and ideas about computer languages suited to flexible problem solving, and about programs they had written that could, for example, prove mathematical theorems. To artificial intelligence researchers, computers are not mere number crunchers; they are machines with the potential to mimic human behaviour in areas ranging from conversation to chess.

Impact of Popular Culture on AI and Vice Versa

Popular culture, or ‘pop’ culture, has emerged as one of the most important tools in a state’s foreign policy decision making, and it has traditionally enjoyed a prominent position in the ‘soft power’ realm of a state’s diplomacy. The appointment of the Korean pop group BTS as special presidential envoys by South Korean President Moon Jae-in for the 76th United Nations General Assembly in 2021 attests to the recognition of ‘pop icons’ and ‘social influencers’ as important guides of human civilization in today’s world. The West, too, has a long history of promoting and projecting its pop icons and brands globally, and the likes and dislikes of such celebrities often influence the choices of millions of young followers around the world. ‘Cortana’, a fictional artificial intelligence in the popular video game Halo, is known to millions, especially those familiar with video games; the virtual assistant of the same name in Windows operating systems is, interestingly, named after the Halo character. The idea behind this entity has fascinated many who, largely on the strength of their early fascination with the evolution of video games, went on to pursue careers in artificial intelligence. This is only one of the many ways a single element of modern social life, video games, has influenced the career choices and direction of many of today’s tech moguls. Some of the ground-breaking inventions and discoveries in the field of AI have a connection with popular science fiction, either as a source of inspiration or as a yardstick of technological advancement. In this regard, perhaps the most intriguing depiction of the dangers of AI was Stanley Kubrick’s 1968 science-fiction film ‘2001: A Space Odyssey’, which ethicists often cite when discussing those dangers.
Numerous other filmmakers, journalists, intellectuals, musicians and social influencers have made significant contributions to the discourse on Artificial Intelligence (AI), Artificial General Intelligence (AGI) and even Artificial Super Intelligence (ASI). With the rise of consumerism, the role of smart gadgets, home devices, fitness trackers and similar products in individuals’ daily lives has grown in parallel. Today, nearly every imagined future of planet Earth, whether dystopian or utopian, features the rise of artificial intelligence in its divergent forms. Adherents of dystopianism predict a doomsday scenario in which super-intelligent life-forms become the overlords of humanity, while adherents of utopianism envision a technologically driven future shaped by the positive powers of artificial intelligence, in which humans might evolve to the point of achieving immortality. The critically acclaimed television series ‘Black Mirror’ highlights these multi-faceted aspects of advances in artificial intelligence remarkably well; the series won the Emmy Award for Outstanding Television Movie three years in a row, further reflecting popular perceptions of artificial intelligence in the modern world. Since the days of Deep Blue, IBM’s chess-playing computer which beat reigning world chess champion Garry Kasparov in 1997 after losing their first match in 1996, artificial intelligence technology has developed significantly, and popular culture has to a large extent provided this development with its social and cultural base.

Emerging AI Landscape

One side of the development of Artificial Intelligence points to the fact that humans are now more connected to one another than at any other period in history. The other side comprises the vital concerns many share, ranging from invasion of privacy to the manipulation of voters’ political decisions during an election. Think tanks and professional political strategists in almost all major countries increasingly rely on information generated through social networking platforms to gauge the political landscape. The power of the internet is also attested by the fact that one of the first moves by many state administrations in the early stages of a mass protest is to curb internet access, depending, of course, on the perceived impact of the protest on internal security. On the economic side, especially with the birth of Industry 4.0 (the fourth industrial revolution) and smart factories, mundane and repetitious tasks are being shifted from humans to ‘cobots’ (collaborative robots), which can perform them swiftly and with minimal human supervision. A prime example may be found in China’s ‘Made in China 2025’ policy: more and more Chinese manufacturers are coming to realize that the appropriate adoption of smart manufacturing approaches, such as Industry 4.0 and the Internet of Things (IoT), will be key to their future global competitiveness and survival. One of the main advantages of cobots in the workplace is that they can work day and night shifts and require no lunch or coffee breaks. To encourage smart manufacturing applications, China’s Ministry of Industry and Information Technology launched forty-six government-funded Smart Manufacturing Pilot Projects in 2015, along with an additional sixty-four projects in 2016.

These developments, along with a myriad of similar initiatives across the world, have significantly altered the nature of the global economy and of governance. More countries are now mulling the regulation of crypto-currency, the latest as of this writing being India and Russia, pointing humanity in a new direction and indicating a slight departure from the global economic order built largely on the Bretton Woods institutions.

Aditya Kant Ghising is an Assistant Professor at the Department of Political Science, City College, India.