Cate Blanchett expressed to the BBC her “deep concern” regarding the effects of artificial intelligence (AI).
In a conversation on Sunday with Laura Kuenssberg, the Australian actress remarked, “When I look at these robots and self-driving cars, I’m uncertain about the benefits they offer anyone.”
Blanchett, 55, was promoting her latest film "Rumours," an apocalyptic comedy about a group of global leaders stranded in a forest.
She stated, “Our film appears quite innocent in comparison to the current global situation.”
When asked if she feared AI’s effect on her profession, she mentioned she was “less worried” about that and more concerned about its impact on the average individual.
“I’m apprehensive about humanity as a whole; this is a much larger issue.”
She remarked that the risks posed by AI are “very tangible,” as “anyone can be completely replaced.”
“Regardless of whether someone is an actor, if you’ve recorded yourself for just a few seconds, your voice can be duplicated.”
The actress, a two-time Oscar winner for her roles in "The Aviator" and "Blue Jasmine," said she believes AI advancements are more about "experimentation for its own sake."
“One perspective views it as creativity, but it can also be immensely harmful, which is, of course, the other aspect.”
The necessity for AI regulation
The United Nations (UN) has voiced concern that only seven nations are currently shaping the governance of AI while making decisions that affect all others.
That concern appears in "Governing AI for Humanity," the final report of the UN Secretary-General's High-level Advisory Body on Artificial Intelligence.
The report indicated that the seven nations involved in AI governance for the rest of the world are Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States.
“Significant portions of the globe have been excluded from international AI governance discussions. Seven countries are involved in all assessed AI governance efforts, whereas 118 countries participate in none (mainly in the global South),” the report mentioned.
AI governance entails establishing policies, regulations, and guidelines that ensure the responsible and ethical advancement, implementation, and utilization of AI technologies.
The body highlighted that ensuring equity necessitates that more voices have meaningful involvement in decisions regarding how to govern technology that influences them.
It stressed that concentrating decision-making within the AI technology sector cannot be justified, noting that historically many communities have been entirely excluded from AI governance discussions that affect them.
"AI governance frameworks must have a global reach to be effective — effective in preventing 'AI arms races' or a 'race to the bottom' concerning safety and rights, in identifying and addressing incidents arising from decisions along AI's life cycle that transcend multiple jurisdictions, in fostering learning, in promoting interoperability, and in sharing AI's benefits."
“The technology is without borders, and as it proliferates, the notion that any single state or group of states could (or should) control it will diminish,” it asserted.
The UN panel further noted that the advancement of AI cannot be left to the “whims” of the market alone.
In the report, the 39-member group concurred that national governments will inevitably play a crucial role in regulating AI but emphasized that the technology’s borderless nature also requires a “global approach.”
It pointed out that the rapid progress of AI concentrates power and wealth on an international level, with geopolitical and geoeconomic consequences.
It highlighted that no one fully comprehends all of AI’s intricacies well enough to completely control its results or anticipate its evolution, nor are decision-makers held responsible for developing, deploying, or using systems they do not understand.
“Numerous countries face fiscal and resource limitations that restrict their capacity to utilize AI effectively and appropriately,” said the panel.
“The call for global governance, in particular, is undeniable. The raw materials for AI, ranging from critical minerals to training data, are sourced globally. The advancement, deployment, and use of such a technology must not be left to market whims alone,” the report concluded.
What to consider
The UN advisory body was established in October 2023 and has since proposed seven recommendations to tackle AI-related risks and governance gaps.
These include creating an AI data framework to enhance transparency and accountability, as well as a fund to assist developing nations in benefiting from advancements in the technology.
Following the launch of OpenAI’s popular ChatGPT bot in 2022, the proliferation of AI has surged, raising alarms about the potential for increased misinformation, fake news, and copyright infringements.
Only a limited number of countries have enacted laws to regulate the spread of AI tools. The European Union has been a forerunner by implementing a comprehensive AI Act, in contrast to the United States’ approach of voluntary compliance, whereas China has focused on maintaining social stability and state control.
CMA raises concerns
The Competition and Markets Authority (CMA) has raised alarms that major technology firms may leverage their dominance in artificial intelligence (AI) in ways that could be detrimental to consumers.
As AI progresses swiftly with the capacity to create social upheaval, such as job displacement, global regulators aim to ensure that large tech companies do not manipulate critical AI markets for their own benefit.
The CMA identified an "interconnected web" of AI partnerships among the same companies and set out to understand it more fully in order to guard against potential misuse of market power and unintended consequences.
The move follows the CMA's report on AI Foundation Models (FMs) from the previous year, which proposed a framework of principles to promote innovation and steer these markets toward favorable outcomes for businesses, consumers, and the broader economy.
In her speech at a conference in Washington DC, Chief Executive Officer Sarah Cardell summarized key points from the CMA’s latest updates on its FMs work. Cardell described the transformative potential of FMs as a possible “paradigm shift” for both society and the economy. She also discussed various rapid developments across FM markets, which, supported by the CMA’s growing understanding of the FM ecosystem, have led to increased concerns.
The speech underscores the increasing dominance within FM markets of a select few established technology firms that already enjoy significant market power in many crucial digital sectors. These firms possess strong advantages in both the creation of FMs (including vital resources such as compute, data, and talent) and the application of models through critical market access points, such as apps and platforms.
The CMA worries that certain firms may have both the capacity and the motivation to manipulate these markets to serve their interests—both to safeguard existing market power and to expand into new domains. This could significantly affect fair, transparent, and effective competition in FM-related markets, ultimately harming businesses and consumers through diminished choice, lower quality, and inflated prices, while also impeding the flow of potentially groundbreaking innovation and broader economic advantages from AI.
The CMA’s updated document, being released today, identifies an “interconnected web” of over 90 partnerships and strategic investments involving the same companies: Google, Apple, Microsoft, Meta, Amazon, and Nvidia (the leading supplier of AI accelerator chips). The CMA acknowledges the immense resources, expertise, and innovation potential that these large firms can contribute and the significant role they are likely to play in FM markets, while also recognizing that such partnerships can have a pro-competitive effect in the technological ecosystem.
Nonetheless, the CMA warns that influential partnerships and integrated firms should not limit the ability of competing firms to thrive, nor should they be employed to shield powerful firms from competition. Preserving diversity and choice within the market is essential to preventing over-reliance on a small number of dominant firms—especially given the wide array of possible applications for FMs across all economic sectors, including finance, healthcare, education, defense, transport, and retail.
The advantages of AI for businesses and consumers are much more likely to be realized in an environment where the most powerful technology firms are subject to fair, open, and effective competition—not only from potential challengers but also among themselves—rather than in a scenario where they can leverage FMs to solidify and expand their existing dominance in digital markets.
Reflecting on its decade-long experience in digital markets, where “winner takes all dynamics” led to the emergence of a select few powerful platforms, Cardell states that the CMA is “committed to applying the lessons of history” at this crucial juncture in the rise of a new, potentially transformative technology.
The CMA's updated report outlines how its principles can mitigate each risk, along with the steps the CMA is taking now and those it is weighing for the near future. These include current actions, such as market investigations and merger assessments, as well as how FMs will factor into the CMA's decisions about which digital activities to prioritize for investigation under the Digital Markets, Competition and Consumers Bill.
The speech also highlights relevant ongoing projects, such as the CMA’s Cloud Market Investigation, which includes a forward-looking evaluation of how FMs might impact competition in cloud services, and its analysis of Microsoft’s collaboration with OpenAI to assess how it could influence competition in various areas of the ecosystem.
Cardell emphasizes that the CMA is “closely monitoring both existing and new partnerships.” This encompasses the use of merger control powers to evaluate if, and under what conditions, these types of agreements fall under the merger regulations and if they pose competition risks – particularly due to the intricate and unclear nature of certain partnerships and arrangements.
Sarah Cardell, the CMA’s CEO, stated, “Initially, we were curious about this work. Now, through our increased understanding and detailed observation of developments, we hold genuine concerns.”
“The fundamental challenge we encounter is how to utilize this incredibly exciting technology for everyone’s benefit, while also protecting against potential market power exploitation and unforeseen consequences.”
“We are dedicated to applying the principles we have established, utilizing all legal powers available to us, now and in the future, to ensure that this groundbreaking and structurally vital technology fulfills its potential.”
A Balanced Look at Our Future with AI
Artificial intelligence is omnipresent nowadays, isn’t it? From our smartphones to our workplaces, AI is infiltrating almost every aspect of our lives. Major tech firms and industry leaders assert it’s the future, warning that if we fail to embrace it, we risk being left behind. But wait a minute – do we truly have to embrace AI’s dominance as inevitable? Or is this merely another tech trend that’s being overly hyped?
I’m not here to claim that AI is detrimental or that we should shy away from it. Quite the opposite! AI has the potential to accomplish incredible things. However, I believe we should pause and consider whether we genuinely want AI to take control of our lives – or if we even have to permit it.
Why AI Seems Unstoppable Right Now
If it feels like everyone is discussing AI, it’s because they are! In the business realm, people state that AI will enhance productivity and streamline operations. In healthcare, AI is assisting researchers in developing new treatments and analyzing health information. And in education, there’s a significant push for students to acquire AI skills to “prepare for the future.”
Nevertheless, just because AI is powerful doesn’t imply it’s the optimal solution for every problem. Remember the early days of social media when we were all convinced it would connect people, eliminate barriers, and enhance our relationships? Now, a few years later, we’re confronting issues like misinformation, mental health challenges, and privacy risks. AI might end up having similar powerful effects, but with substantial drawbacks.
What’s your perspective? Do you view AI as an essential component in your field, or are you somewhat doubtful?
Let’s Learn from the Past: Is AI Truly Unstoppable?
We’ve witnessed this kind of technological fervor before. Think back to when automobiles first appeared on roads or when the internet surged in popularity. At that time, many believed these technologies would revolutionize every facet of life. While they did indeed bring about significant changes, society adapted over time.
Consider social media again. It was once heralded as groundbreaking, yet people have begun to scale back usage. Schools are now prohibiting smartphones in classrooms to help students concentrate. Some individuals are even reverting to basic “dumb phones” to cut down on screen time. Just because a technology appears to be unavoidable initially doesn’t guarantee it will maintain its dominance indefinitely.
Here’s something to ponder: If we can choose to restrict social media and smartphones, why couldn’t we adopt a similarly careful approach with AI?
Where AI Is Beneficial – and Where It’s Overhyped
Let’s acknowledge AI’s merits where they exist. In healthcare, AI can significantly impact outcomes. Think about advancements in drug discovery, or how AI is enhancing medical imaging to enable doctors to detect diseases earlier. That’s where AI truly shines.
However, in other domains, the promises do not always align with reality. Research indicates that AI has not yet provided the significant productivity improvements in business that were anticipated. In education, some educators are concerned that AI tools like chatbots might detract from genuine learning and critical thinking.
Even in areas like national security, where the stakes are high, the drive for AI supremacy could lead to a costly arms race. Instead of AI enhancing our safety, there's a risk that it could heighten tensions, as nations race to develop AI-driven weaponry to maintain an edge over one another.
Let’s Discuss Caution and Choice
So, what actions should we take in response to this? I’m not suggesting we completely abandon AI. That opportunity has passed, and AI is now part of our reality. However, we might not need to plunge recklessly into a future dominated by AI decision-making.
Here are some points to consider as we advance:
- Who’s Behind the Hype? Corporations that benefit from AI naturally have an interest in convincing us of its necessity. Scrutinizing their intentions isn’t an anti-AI stance—it’s a smart approach to consuming new technology.
- Can We Establish Boundaries? Just because AI is capable of something doesn’t imply it should be allowed to do it. We hold the authority to impose restrictions. Consider how families and educational institutions have limited screen time and social media use. Why should we not apply a similar principle to AI?
- What Are the Genuine Needs? Instead of viewing AI as a universal remedy, let’s see it as a resource. There are instances where AI proves exceptionally beneficial, but there are also moments when human involvement or conventional methods may be more effective. We can be discerning and deliberate about our applications of AI.
The Main Point: Let’s Be Deliberate with AI
AI is an incredible resource, and it’s expected to continue influencing our world. However, we need not concede that AI must dominate every facet of our lives. Choices are available to us. We can opt to deploy AI where it truly has a positive effect and resist its use when it doesn’t.
So, what do you think? Do you regard AI as a tool that should be handled with care, or do you believe it is something that will inevitably take charge? Let’s discuss how we want AI to integrate into our lives.