2024: The World at Crossroads—Part One

Foundational Decisions About the Future of AI Will Be Made in 2024

Abstract

Vital decisions in 2024 will affect all of humanity in areas such as democracy vs. autocracy, the future of AI, and dealing with disinformation. In this first article in the series, we focus on AI, which is poised to provide enormous benefits for mankind. At the same time, it presents considerable known issues right now, as well as speculative but very-high-consequence risks in the future. Here we discuss why 2024 will be a pivotal year, setting the direction for how mankind addresses those near-term issues and longer-term risks, with implications for decades to come.

Article

2024—World at Crossroads

2024 is shaping up to be a year of global inflection points: critical forks in the road that will affect global supply chains and, more broadly, the world we live in going forward. 2024 will be a pivotal year for humanity in these key domains:

  • AI’s Future Direction—We expect to see legislative, legal, labor, and industry actions in 2024 that will profoundly affect the future direction and evolution of this technology, potentially impacting all of mankind for decades (possibly even centuries) to come.
  • Autocracy vs. Democracy—We will go through some critical junctures this year in the battle for control between autocratic and democratic forces, as well as in the contest between a rules-based world order and an arbitrary, transactional world order based on who holds the power (military or economic). Elections are a big part of this story, with over half of the world’s population living in countries that will hold national elections in 2024.
  • Globalization vs. Isolationism—The shift toward ‘decoupling’, higher tariffs, strained international relationships, and isolationism may peak in 2024, with the pendulum then starting to swing back toward more internationalist cooperation, as was the trend from WWII until the 2010s. If this shift doesn’t happen in 2024, the problems created by isolationism will start to manifest in more dire ways.

There are other huge challenges and developments facing humanity, such as climate change, disinformation, tribalism/partisanship, pandemic risks, and risks from other new technologies (e.g., the potential dangers of genetic engineering). However, it is not clear that 2024 will, by itself, be any more important or consequential than other years for those issues. Therefore, we will focus on the three issues above. In a future article, we will examine how the tug-of-war between autocracy and democracy will play out in 2024. In the remainder of this article, we look at how 2024 is likely to shape the future direction of AI.


2024—Foundational Decisions for the Future of AI

Foundational decisions made early in the development of the Internet had profound long-term implications (see sidebar). Similarly, decisions made about AI in 2024 (or the next few years) will reverberate for decades to come. There are some key differences between AI and the Internet[3] when drawing this analogy, but nevertheless we feel the gist of the analogy is valid: foundational decisions being made right now, in 2024, about AI will have profound long-term implications for mankind.

AI’s Conundrums Right Now

We believe that AI will be an enormous net benefit for mankind. However, it also has some pretty horrendous potential downside risks. Avoiding those negative outcomes depends on how we tackle AI’s key challenges and issues now. Some of these issues are already manifesting and causing trouble, while others are more speculative and lie in the future. Current and near-term risks from AI include:

  • Bias and discrimination—AI algorithms are increasingly used for life-altering decisions, such as in hiring, policing, lending, medical diagnosis, and more. AI algorithms can exhibit highly consequential bias and discrimination based on the data they are trained on. For people on the receiving end of these decisions, the consequences can be life-changing, affecting whether they can get a job or buy a house, whether they are falsely accused or arrested, and whether they receive a correct, potentially life-saving medical diagnosis. Large language models trained on the entire corpus of data accessible on the internet embody the human biases inherent in that corpus unless actions are taken to counteract those biases. Despite extensive efforts by engineers to reduce bias, this remains a serious issue.
  • Displacement of workers and jobs—AI is dramatically changing the type of work done in many jobs and, in some cases, will eliminate existing jobs while creating new ones. Retraining and assistance will be required to minimize societal disruption.
  • Disinformation—Disinformation is a perennial problem that sows unrest and chaos, undermines democracies (which depend on informed citizens), polarizes and divides societies, leads to hate and violence, induces harmful behaviors (e.g., failing to take vaccines), and makes it much harder to collectively solve problems such as climate change. Bad actors have in the past been limited in their ability to create convincing content at scale, particularly when they lack command of the native language of the population they are targeting. AI enables the mass production of disinformation, with the potential to overwhelm efforts to discredit it.
  • Abusive content—AI-powered systems can create deepfakes: realistic, convincing video and audio of a person, including abusive content such as fake pornography of a specific person or child pornography. While video editing tools have been able to create fakes for quite a while, it is the ease with which people can now create these fakes using AI that is raising alarm bells.[4]
  • Wealth inequality—AI has the potential to further concentrate wealth in just a few companies and individuals while leaving others behind.[5]
  • Autonomous weapons—Lethal autonomous weapons systems (LAWS) are drones, robots, autonomous vehicles and boats, and other autonomous weapons that can find, identify, and engage enemy targets, potentially without the need for a human to give the final command to attack. While weapons systems have become increasingly autonomous, fully autonomous systems have generally been confined to defensive uses, such as anti-missile systems for ships (and air defense more broadly) and active protection systems for tanks. However, we appear to be at the cusp of the offensive use of autonomous weapons systems on the battlefield, accelerated by the increasingly competitive arms race in drone warfare in the Russo-Ukrainian war, where both sides are racing to make their attack drones more autonomous so they can complete a mission on their own in the face of signal jamming, and to lower the amount of training and level of skill required to pilot them. There are numerous ethical concerns about the unconstrained offensive deployment of LAWS, such as the potential to dramatically increase the destructive scale of wars, enable efficient genocide and ethnic cleansing, and obfuscate accountability for attacks, as well as the potential for malfunction. There are currently no international agreements explicitly banning or limiting the use of LAWS, though intensive discussions are underway.[6]

Speculative AI Catastrophe in the Future

The Existential Risk Observatory ranks the risk of human extinction due to ‘Unaligned AI’ (a future AI with goals different from ours and out of our control) as by far the highest existential risk facing humanity—more than 3X higher than the risk from a pandemic and nearly 100X the risk of extinction from a nuclear war. The risk is that an Artificial General Intelligence (AGI) that is much more intelligent than human beings is developed intentionally or emerges unintentionally; that its priorities and ‘values’ do not align with ours; and that it escapes our control. Some also describe the risk of an intelligence explosion (a.k.a. the singularity), in which a super-intelligent AI (an AGI) is able to improve its own code and hardware at ever-faster rates (much faster than human beings have improved computing power), thereby reaching exponential increases in intelligence and leaving mankind in the dust.

There are wide disagreements about the probability of the doomsday (i.e., extinction of mankind) scenario,[7] but even if the odds are low, the consequences are so high that it is worth taking this risk seriously.

Why 2024 Is Likely to Be So Pivotal for AI

There are four things about 2024 that make it likely to be a pivotal year in determining the future of AI:

  1. Government Regulation—An array of foundational AI legislation and regulations is being debated and will likely take shape, and in some cases be passed, in 2024.
  2. Legal and labor actions—Important lawsuits and strikes to protect IP and jobs from AI happened in 2023. We expect many of the critical legal issues raised by these suits to be addressed by the courts in 2024, and potentially more labor action to occur.
  3. Industry Self-regulation—Industry organizations were recently formed with the goal of self-regulation. They may start taking serious actions in 2024.
  4. AGI Risk Awareness—Awareness of the risk posed by AGI has been increasing, as has the perception that AGI may arrive sooner than expected, adding urgency to begin preparing for it now, starting in 2024.

Key AI Government Regulations Are Being Formed in 2024

2024 is shaping up to be one of the most consequential years in formulating the regulatory landscape for AI. On December 8th, 2023, the European Union reached political agreement on the EU AI Act, a comprehensive law to regulate AI. 2024 will be the year that the rubber meets the road for that Act, when the details and form of the actual implementation are hammered out. The law bans ‘unacceptable risks’ such as manipulation of vulnerable people, social scoring, and facial recognition by AI. It requires more transparency from generative AI software like ChatGPT, to ensure that AI-generated images and content are properly labeled as such and not passed off as human-generated. An article[8] in Digiday provides this succinct summary of the EU AI Act:

“The legislation focuses on five main priorities: AI use should be safe, transparent, traceable, non-discriminatory, and environmentally friendly. The legislation also requires that AI systems be overseen by people and not by automation, establishes a technology-neutral, uniform definition of what constitutes AI, and would apply to systems that have already been developed as well as to future AI systems.” 

During 2024, many of the details for implementing the Act will be determined by the appropriate EU standards bodies, which will draft technical standards, and by the member states, which are responsible for enforcement and will be working out how to enforce the law. This is the year that the EU is creating the baseline/template AI regulation that much of the rest of the world will emulate.

In the U.S., there is no federal AI legislation. However, in October 2023, the Biden Administration published an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. The EO directs NIST to develop new standards for AI safety and security and the Department of Commerce to develop guidance for content authentication and watermarking. It also provides incentives to discourage the use of AI to develop biological weapons, strengthens privacy-preserving technologies, addresses algorithmic discrimination, advances the responsible use of AI in healthcare, calls for actions to mitigate harm to jobs and workers, promotes innovation, and provides guidance for the responsible use of AI by government. While it will take years to work all of this out, 2024 will be the year in which much of the foundation is laid and many of the directional decisions for these standards and guidelines are made.
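
To make the labeling and content-authentication ideas concrete, here is a minimal, purely illustrative Python sketch of attaching and verifying a machine-readable ‘AI-generated’ disclosure on a piece of content. All field names are assumptions invented for this example; they are not taken from the EU AI Act, the Executive Order, or any actual provenance standard, and real watermarking schemes typically embed the signal in the content itself rather than in detachable metadata.

    import hashlib
    import json
    from datetime import datetime, timezone

    def attach_ai_disclosure(content: str, generator: str) -> dict:
        # Wrap the content with a hypothetical machine-readable disclosure label.
        # Field names are invented for illustration; real provenance standards define their own schemas.
        return {
            "content": content,
            "disclosure": {
                "ai_generated": True,
                "generator": generator,
                "created_utc": datetime.now(timezone.utc).isoformat(),
                # Hashing ties the label to this exact content, so later edits are detectable.
                "content_sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
            },
        }

    def disclosure_is_valid(labeled: dict) -> bool:
        # Check that the label is present and still matches the content.
        disclosure = labeled.get("disclosure", {})
        expected = hashlib.sha256(labeled.get("content", "").encode("utf-8")).hexdigest()
        return disclosure.get("ai_generated") is True and disclosure.get("content_sha256") == expected

    labeled = attach_ai_disclosure("An AI-written product description.", "example-model-v1")
    print(json.dumps(labeled, indent=2))
    print("Disclosure valid:", disclosure_is_valid(labeled))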

Meanwhile, there has been a lot of state-level AI-related legislation, so far mostly connected to privacy. In 2024, we expect more states to jump on the bandwagon, and more states to broaden their AI legislation beyond privacy to issues such as addressing bias in AI.

IP Ownership and Jobs—Lawsuits and Labor Strikes

2024 will be a year in which many critical, precedent-setting questions about IP ownership for AI models will be addressed … and possibly resolved or at least clarified. In 2023, numerous lawsuits were filed against Meta, OpenAI, Google, and other AI platform providers by various newspapers, writers, photographers, artists, and other creative workers seeking compensation for the unauthorized use of their intellectual property in training AI models. On December 27, 2023, the New York Times filed a lawsuit against OpenAI and Microsoft for copyright infringement, contending that millions of the Times’ articles were used to train chatbots that now compete directly against the Times. The suit seeks to recover “billions of dollars in statutory and actual damages.” Other news organizations are likely to follow suit. Legal fights can drag on for years, but 2024 will see some of the most important developments in this battle and possibly some definitive conclusions reached.


2023 was a key year for labor actions related to AI, with the resolution of the Writers Guild of America (WGA) strike and the SAG-AFTRA strike. Both of these addressed key issues with the use of AI, including intellectual property rights and guarantees to limit the extent to which AI can be used to take over the jobs of writers and actors. This has created a framework and set the stage for further potential labor actions by other unions in 2024, which could be the year that unions beyond Hollywood establish their AI rights in a big way.

Industry Self-Regulation—The Pressure Is Mounting


In July 2023, four major AI industry players (Anthropic, Google, Microsoft, and OpenAI) announced the formation of the Frontier Model Forum, with the stated aim “to promote the safe and responsible development of frontier AI systems: advancing AI safety research, identifying best practices and standards, and facilitating information sharing among policymakers and industry.” Time will tell how effective this coalition is. The optimistic view is that 2024 will be an important year for advancing industry self-regulation.

In October 2023, Stanford University released its Foundation Model Transparency Index, which tracks how much information AI model developers disclose about how people use their systems and about their potential impact on society. The answer so far is ‘not very much’. Hopefully, this type of public scrutiny will spur further improvements.
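
As a rough illustration of how an index like this aggregates disclosures, the Python sketch below scores hypothetical model developers by the share of a yes/no disclosure checklist they satisfy. The indicator names, models, and scores are invented for illustration and greatly simplify Stanford’s actual methodology.

    # Illustrative only: score developers by the share of disclosure indicators satisfied.
    # Indicator names and results below are invented, not Stanford's actual data.
    INDICATORS = [
        "training_data_sources_disclosed",
        "data_labor_practices_disclosed",
        "compute_usage_disclosed",
        "evaluation_results_disclosed",
        "downstream_usage_policy_disclosed",
    ]

    disclosures = {
        "model_a": {"training_data_sources_disclosed": True, "evaluation_results_disclosed": True},
        "model_b": {"downstream_usage_policy_disclosed": True},
    }

    def transparency_score(disclosed: dict) -> float:
        # Percentage of indicators satisfied; anything not reported counts as not disclosed.
        satisfied = sum(1 for indicator in INDICATORS if disclosed.get(indicator, False))
        return 100.0 * satisfied / len(INDICATORS)

    for model, disclosed in disclosures.items():
        print(f"{model}: {transparency_score(disclosed):.0f} / 100")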

Source: Stanford University Center for Research on Foundation Models
Figure 1 – Stanford University’s Foundation Model Transparency Index Scores in 2023

AGI Risk Awareness (and Hyperbole) Reaching a Crescendo in 2024

As discussed above in the section Speculative AI Catastrophe in the Future, there is a chance that AGI (Artificial General Intelligence) could pose an existential risk to mankind. The general consensus[9] is that we still have a few decades before AGI emerges, probably sometime between 2040 and 2070, which gives us time to figure out how to avoid the humanity-ending scenario. However, there has been a steady drumbeat of doomsaying by key figures about the danger of runaway AGI, and the rapid advances in generative AI capabilities have increased the perception that human-level AI is imminent. These perceptions are likely to motivate an acceleration in addressing AGI risk. Therefore, 2024 could see much more foundational activity in this area through the forums mentioned previously: regulatory and standard-setting bodies, and industry coalitions.


Supply Chain Practitioners’ Role in Shaping the Future of AI

Supply chain practitioners have an important role to play in shaping the direction and future of AI. This is not unprecedented; supply chain professionals have historically played a key role in making the actors in their supply chains behave more responsibly. Similarly, supply chain management and sourcing and procurement professionals can shape the vendor selection process and criteria, as well as the supplier code of conduct, to address AI concerns.

For example, it would be reasonable to ask vendors who are bidding to supply an HR system to provide evidence that their tools do not discriminate when screening and recommending applicants. They could be asked how their AI models were trained, what steps they have taken to attenuate bias in AI decision-making algorithms, and to describe the metrics and methods they have used in testing and measuring the bias of their algorithms. Similarly, a supplier code of conduct might also require protection for workers against abusive uses of AI, such as protecting their privacy or employment rights.
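
To illustrate the kind of evidence a buyer might ask for, here is a simple Python sketch that computes per-group selection rates and a disparate-impact ratio for a hypothetical screening model. The data and the 0.8 review threshold (a rough rule of thumb borrowed from U.S. employment guidance) are illustrative assumptions; real vendor evaluations would use their own test sets and a broader battery of fairness metrics.

    from collections import defaultdict

    # Hypothetical screening results: (applicant_group, model_recommended_for_interview)
    results = [
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    def selection_rates(outcomes):
        # Fraction of applicants in each group that the model recommends.
        totals, selected = defaultdict(int), defaultdict(int)
        for group, recommended in outcomes:
            totals[group] += 1
            selected[group] += int(recommended)
        return {group: selected[group] / totals[group] for group in totals}

    def disparate_impact_ratio(rates):
        # Lowest group selection rate divided by the highest; 1.0 means parity.
        return min(rates.values()) / max(rates.values())

    rates = selection_rates(results)
    ratio = disparate_impact_ratio(rates)
    print("Selection rates:", rates)
    print(f"Disparate-impact ratio: {ratio:.2f}", "(flag for review)" if ratio < 0.8 else "(within rule of thumb)")

Metrics like these only surface disparities; they do not by themselves establish the cause or decide which fairness criterion matters most for a given decision, so they are best treated as the start of a conversation with the vendor rather than a verdict.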

Will 2024 turn out to be the pivotal year for addressing AI concerns? That remains to be seen. We believe that many of the indicators and conditions are in place to make this a seminal year in determining the future direction for AI.


[1] The Network Working Group was renamed the International Network Working Group in 1972, around the time that Vint Cerf took over leadership of the group. See How the Internet was born: from the ARPANET to the Internet, November 2016.

[2] Cybercrime cost the world $6 trillion in 2021 and will grow to over $10 trillion by 2025, according to Cybersecurity Ventures (Cybercrime To Cost The World $10.5 Trillion Annually By 2025). While not all cybercrime can be attributed to the lack of security in the Internet, a major portion of those trillions of dollars is due to the internet’s security architecture deficiencies. One piece of this, identity theft, costs around $40B – $50B per year in the U.S. alone, according to Javelin Strategy & Research (Identity Fraud Losses Totaled $43B in 2022, Affecting 40 Million U.S. Adults). That is about half a trillion dollars per decade in the U.S., and probably over a trillion dollars per decade worldwide, assuming global losses are at least twice the U.S. losses.

[3] Some key differences between the decisions being made now about AI vs. decisions made in the 70s about the Internet include:
1. The Internet was by design and necessity a common protocol that everyone needs to use in order for it to work, whereas AI is an open field that anyone can develop in any way they please.
2. AI has been in development for over seven decades, whereas the Internet was quite young when those early decisions were made—hence we have a much richer and more mature understanding of AI.
3. Due to this, awareness of the potential pitfalls of AI, the urgency to address them, and understanding of the importance of these decisions are much more widespread now, with considerable attention being paid by media, academia, companies, and governments.
4. In contrast, the early Internet was viewed (rightfully at the time) as a relatively small experiment by a few people at a few universities, so only a relatively small group of people was paying attention to it at the time.

[4] There are software and services that will create porn images or videos when provided with one or more images of a person’s face. It is heartbreaking to consider the trauma that victims of this kind of abuse go through when those images or videos end up shared widely on social media; traumatic at any age, but particularly for teenagers, leading to depression, PTSD, and potentially suicide. Currently, there are very few legal tools to go after or deter perpetrators.

[5] A study published by the Bank for International Settlements examined data from 86 countries and found that “higher AI investment is associated with higher income and a higher income share for the top decile, while the income share declines for the bottom decile.” See Artificial intelligence, services globalisation and income inequality, 25 October 2023, by Giulio Cornelli, Jon Frost, and Saurabh Mishra.

[6] In October 2023, the Secretary-General of the UN and the President of the ICRC jointly issued an appeal for the urgent establishment of international rules on autonomous weapons. The November 2023 Meeting of High Contracting Parties to the CCW was supposed to agree on a way forward to reach an agreement on autonomous weapons in time for the Seventh Review of The Convention on Certain Conventional Weapons, but progress was largely blocked by Russia.

[7] There is particular skepticism in some quarters about the feasibility of the more far-fetched idea of a singularity of nearly infinite intelligence.

[8] See How AI regulation differs in the U.S. and EU, December 5, 2023.

[9] A survey of AI experts, conducted in 2012-2013, found a median estimate for the emergence of “high-level machine intelligence” to be around 2040-2050, with super-intelligence by 2070-2080. In the same survey, about one in three respondents felt this development would be ‘bad’ or ‘extremely bad’ for mankind. See Future progress in artificial intelligence: A survey of expert opinion. For another summary of other surveys on this topic, see When will singularity happen? 1700 expert opinions of AGI.
