A.I. Risks: the role of China and the United States

It is now well known that artificial intelligence exposes society to risks and represents a potential danger. Leading international entrepreneurs, especially those working in the field of machine learning, have expressed strong concerns about this.

Along these lines, the Center for AI Safety has identified several areas where the use of AI is most likely to prove dangerous.

Weapons

The prospect of A.I. being adopted in offensive contexts is extremely real and serious. The Center gives a few examples, such as air combat and the construction of chemical weapons, and cites several studies in this regard. One sentence in the text is particularly relevant.

As with nuclear and biological weapons, one irrational or malicious actor is sufficient to cause damage on a large scale. Unlike previous weapons, artificial intelligence systems with dangerous capabilities could easily be disseminated through digital means.

Source: Center for AI Safety

The speed of computation, combined with the great complexity of the processes involved, makes this risk particularly serious.

Disinformation

Artificial intelligence has already been used to produce texts and images carrying false information; these are cases that have actually occurred and that, on a national scale, could have significant consequences. One example among many is the case of Amnesty International, which published fake images of the 2021 protests in Colombia. The images, although generated with artificial intelligence, were mistaken for authentic photographs.

Weakening

This point is particularly interesting: it is the realisation that delegating tasks to A.I. will produce a progressive ‘weakening’ of the human being, to the point that some skills risk being lost permanently. The substantial difference is that the faculties in question here are conceptual: processing capacities entrusted to the machine are taken away from man. It must be understood that this process (partly a natural one in human history) unfolded more gradually in the past and, above all, carried fewer risks than those associated with artificial intelligence.

Real risks

Even though these fears may seem far removed from common perception, they are well founded: applications such as ChatGPT and its peers are only one of the many forms A.I. can take, and there are many others to which the public has no direct access and whose actual potential is unknown. The fact that the major companies operating in this sector are calling for strong and stringent regulation is telling: one certainly cannot exclude that, behind such regulation, there is an interest in creating a barrier to entry that would allow a few companies to control many.

We are at a turning point for technology and for the history of mankind as a whole. For the first time, mankind is confronted with a potential adversary that, in its fullest evolution, will not be subject to commands but will enjoy cognitive autonomy and a set of computational capabilities far beyond those of its creator.

A.I. as a matter of geopolitics

Geopolitical concern is high: artificial intelligence is a source of prosperity but also of potential political and social domination. As we know, some American companies have asked the President of the United States to intervene and properly regulate the evolution of A.I.; on the other side of the world, however, there is a rather pronounced concern about China, which, owing to its social and historical values, implements different policies from the West. Dialogue between the United States and China is particularly difficult and has deep roots: out of a long enmity grew a period of dialogue that Kevin Rudd (former Australian prime minister) dates to 1979.

Diplomatic normalisation between China and the US was only achieved in 1979, seven whirlwind years after the Shanghai Communiqué negotiations in 1972.

Source: ‘A Brief History of US-China Relations’, Kevin Rudd, Rizzoli Editore (2023)

However, despite these efforts, relations have remained in constant tension due to incidents and competing commercial strategies.

Considerations on China

The most interesting aspect of the debate on A.I. and its risks is that profoundly different cultures are sitting at the same table, having to deal with a technology that can undermine economies and social structures. Even before any possible ‘domination’ of the less technologically advanced countries, there is the risk of corrupting the economic ties without which countries cannot sustain themselves. The fabric of financial transactions and daily imports and exports underpins social and economic development; no country, however stable, can do without exports and good relations with other nations. What makes dialogue on these issues difficult, however, is precisely the culture of origin that puts the interlocutors at odds.

It is therefore logical to ask why China is in the spotlight far more than the United States or other countries. The United States has a regulatory framework that guarantees the exercise, and also the defence, of democracy: sometimes these safeguards fail; at other times they go so far as to allow the indictment of one of its former presidents. In essence, the law applies equally to ordinary citizens and to any member of the state apparatus, be it an executive or the President of the United States. It is not an infallible system, but it is one that places everyone under the law, and that matters a great deal.

In the China we know, the state acts as the coordinating element of the population. Chinese citizens have, in a sense, traded their freedom and their right to privacy for rules and restrictions that guarantee safe and ‘harmonious’ coexistence. The state is therefore the pole around which everything revolves: not only politics, but also ways of thinking and acting; without the state, citizens feel lost.

China has had great difficulties in this regard in recent years. Limes itself published an article with the emblematic title ‘Beijing no longer knows how to take on the world’, which reads:

The People’s Republic is paying for its inability to recognise the limits of its success. The missteps on Covid-19 and the Ukraine War. From the historic principle of unity of the centre to the tactic of wait-and-see. A war for Taiwan is too risky.

Source: ‘Beijing no longer knows how to take on the world’, Limes

This is what is worrying: the strict principles of government that have guided China for millennia are now slowly entering a crisis. The internal protests that followed COVID-19 show this, as does the difficult dialogue between China and the West over the war between Russia and Ukraine. The Chinese state machine is the main decision-maker in many aspects of everyday culture, and this centrality can be found, first and foremost, in the ideogram of the word China.

The word CHINA comes from the name that early explorers gave to the territory, ‘Land of Qin’, after the first imperial dynasty. But the real meaning of the ideogram is ‘Middle Kingdom’, precisely because its people considered themselves to be at the centre of the world in terms of importance and capacity for conquest. The ideogram depicts the world divided through its middle, and the centre is itself one of the five cardinal points in Chinese tradition.

2021: Release of the ‘Code of Ethics for Next Generation Artificial Intelligence’

On 26 September 2021, the Ministry of Science and Technology of the People’s Republic of China published the ‘Code of Ethics for Next Generation Artificial Intelligence’. This is an important document whose purpose, as stated in Article One, is to:

integrate ethics into the entire life cycle of artificial intelligence, promote fairness, justice, harmony and security, and avoid problems such as bias, discrimination, privacy and information leakage.

One must bear in mind the importance of the term ‘harmony’, which is not simply a word but, in Chinese culture, represents something far more significant, as will be discussed later. The intent of the code is summarised in Article 3, an excerpt of which is given below.

Various activities of artificial intelligence must follow the following basic ethical standards. Enhancing human welfare. Adhere to people-oriented, follow the common values of humanity, respect human rights and fundamental interests of humanity, and respect national or regional ethics. Adhere to the priority of public interests, promote harmony and friendship between man and machine, improve people’s livelihood, increase the sense of gain and happiness, promote sustainable economic, social and ecological development, and build together a community of shared future for humanity. Promote equity and justice. Adhere to inclusiveness, effectively protect the legitimate rights and interests of all stakeholders, promote the equitable sharing of the benefits brought by artificial intelligence in society, and promote social equity, justice and equal opportunities.

These are interesting principles that one can readily share, but honestly quite general, and it is unclear how China intends to ensure them.

2022: Release of the ‘Position Paper of the People’s Republic of China on Strengthening Ethical Governance of Artificial Intelligence (AI)’

In 2022, China released another document, the ‘Position Paper of the People’s Republic of China on Strengthening Ethical Governance of Artificial Intelligence (AI)’. The document is short and consists of four articles, the last of which, entitled ‘International Cooperation’, is the most interesting.

Governments should encourage transnational, interdisciplinary and cross-cultural exchanges and cooperation, ensure that the benefits of A.I. technologies are shared by all countries, promote the joint participation of countries in international discussions and the development of standards on major issues concerning A.I. ethics, and oppose exclusive group building and malicious obstruction of technological development in other countries. Governments should strengthen the regulation of AI ethics for international cooperative research activities. Relevant scientific and technological activities should comply with the requirements of AI ethics management in the countries where the cooperating parties are located and pass AI ethics review accordingly. China calls on the international community to reach international agreement on the issue of AI ethics on the basis of broad participation and to work towards formulating a widely accepted international AI governance framework, standards and norms, fully respecting the principles and practices of AI governance in different countries.

From the perspective of international cooperation and of economic and political equilibria, AI can be a disruptive element capable of breaking those equilibria and undermining international relations. An article in ‘Il Manifesto’ entitled ‘China and the Subtle Difference between Harmony and Stability’ reports a distinction worth reflecting on:

If from an economic point of view Beijing puts no limits on cooperation, from a political point of view Xi seems to have now drawn an insurmountable line; the US, the EU and China are different politically, in their consideration of rights and in the way they govern, but this does not mean that there is someone superior to the other. For China, this diversity is not an impediment to doing business together, but a cornerstone of its foreign policy, i.e. the demand to be considered an equal partner and not a country ‘on the wrong side of history’.

The ‘harmony’ factor

Confucius (Zou, Shandong, c. 551 B.C. – Qufu, 479 B.C.) was a thinker who belonged to the Chinese aristocratic class and strongly influenced it. In Confucius’ vision, the ideal world is one of constant ‘harmony in diversity’ (和而不同, hé ér bùtóng): a condition in which several social and political realities coexist in ‘harmonious’ relationships. To achieve this state of harmony, however, Beijing has over the years exercised a degree of control that permeates social life at all levels: fiscal control, political control, mass video surveillance, and so on. There is a very interesting editorial on this in Limes, the well-known geopolitical magazine.

Based on extensive data collection on individuals, companies and government bodies and a mechanism of rewards and sanctions, the complex system of behavioural assessment has two objectives. The first is to institutionalise and digitalise existing forms of control. Just think of the link between social rights, place of birth and population monitoring derived from the hukou, the residency system launched by Mao Zedong in the 1950s and now being reformed. Or the danwei, the work units through which the Party monitored citizens’ behaviour. From Beijing’s perspective, it is necessary to put collective Confucian harmony before the individual sphere and to prevent the CCP’s sovereignty from being jeopardised by the social impact of thorny dossiers: economic slowdown, wealth gap between city and countryside and between coastal and inland regions, high levels of pollution, increasing rate of urbanisation and ageing.

Source: ‘Harmony and control: what is Beijing’s social credit system’, Limes

The point is that, while harmony in diversity is one of the goals of Chinese thought, the way it is achieved seems to be largely questionable ethically as well as politically and socially.

An example: Jiuzhang 2.1 and the principle of self-sufficiency (and more)

In 2021, Xi publicly presented the New Development Concept, China’s development plan for the future. One of the pillars of the plan was self-sufficiency and, consequently, independence from other countries in every essential aspect.

This concept has prompted China to develop and upgrade more and more technologies such as the Jiuzhang quantum computer, which has reached version 2.1 and is currently considered the fastest quantum computer in the world.

This is not primarily a technological matter but a cultural one, and it must be analysed as such. In 2015, China launched its ‘Made in China 2025’ development plan, which listed the technology categories in which self-sufficiency and development were required. In 2017, artificial intelligence was added to these categories. By 2019, 48% of all AI start-ups globally were classified as Chinese, while 38% were American and 14% came from other countries [source: ‘US-China. A war we must avoid’, K. Rudd, Rizzoli (2023), p. 161].

The United States and the call for regulation

In recent weeks, the major American companies involved in developing AI technology have called on the White House to regulate the market and the growth of this technology. However, there have been some developments that should be watched carefully.

Microsoft has dismissed the team dedicated to the ethical analysis of A.I.; the news was reported by several outlets, including Platformer. The article contains a rather telling passage:

One employee says the move leaves a foundational gap on the user experience and holistic design of AI products. “The worst thing is we’ve exposed the business to risk and human beings to risk in doing this,” they explained.

OpenAI has also behaved differently than in the past: it chose not to disclose technical details about its latest model, GPT-4. MIT Technology Review of 14/03/2023 reads:

“OpenAI is now a fully closed company with scientific communication akin to press releases for products”.

The newsletter ‘Network Wars’ by Carola Frediani reports another interesting testimony:

The lack of transparency is lamented by many researchers. For example, for Irina Raicu, director of the Internet Ethics Programme at the Markkula Center for Applied Ethics at Santa Clara University, it is crucial that fellow researchers have access to the chatbot training dataset: ‘Knowing what’s in the dataset allows researchers to point out what’s missing,’ she says (in the Fast Company newspaper).

Source: ‘Network Wars’, Carola Frediani, newsletter of 18/03/2023

Conclusions

On 12 March 2023, the New York Times published an article by Ezra Klein containing an important passage that sums up the condition in which we currently find ourselves.

If we had eons to adjust, perhaps we could do so cleanly. But we do not. The major tech companies are in a race for A.I. dominance. The U.S. and China are in a race for A.I. dominance. Money is gushing towards companies with A.I. expertise. To suggest we go slower, or even stop entirely, has come to seem childish. If one company slows down, another will speed up. If one country hits pause, the others will push harder. Fatalism becomes the handmaiden of inevitability, and inevitability becomes the justification for acceleration.

Artificial intelligence is developing, amid critical issues and doubts, in ways that are very similar but not identical in the United States and China. While on the one hand the opportunities for its use seem endless, on the other mankind faces a challenge that is not only technological but also, above all, social and political.