While we’re already seeing and discussing many of the negative aspects of AI, not enough is being done to address them. The reason is that we’re looking in the wrong place, as futurist Amy Webb discusses in her book The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity. Many are quick to blame large tech companies for the problems caused by artificial intelligence. They’re not wrong. A few very wealthy organizations wield enormous power over how AI systems are developed and deployed across thousands of applications and delivered to billions of devices and users. By extension, they are responsible for many of the problems we are facing, from algorithmic bias and social media filter bubbles to privacy issues and lack of diversity. These companies, however, are not inherently evil, and they are not alone to blame for the broken state of AI, Webb argues in The Big Nine. The problems run much deeper, in the underlying systems that push these companies to work as they do. And if we don’t fix the problems at the root, the consequences can be disastrous. In The Big Nine, Webb provides a comprehensive overview of the current problems of the AI industry, an outlook on what can happen in the future, and a roadmap for setting the industry on the right path.
G-MAFIA vs BAT: The overlords of artificial intelligence
Six of the Big Nine are U.S. companies—Google, Microsoft, Amazon, Facebook, IBM, and Apple—which Webb collectively calls the G-MAFIA. The three remaining companies are the Chinese tech giants Baidu, Alibaba, and Tencent, collectively known as BAT. “I firmly believe that the leaders of these nine companies are driven by a profound sense of altruism and a desire to serve the greater good: they clearly see the potential of AI to improve health care and longevity, to solve our impending climate issues, and to lift millions of people out of poverty,” Webb writes. But the problem is that the Big Nine are being pushed by external forces—often inconspicuously—that are pressuring them to work in ways that go against their best intentions.
The cultural problems of AI companies
“The future of AI is being built by a relatively few like-minded people within small, insulated groups,” Webb writes. “[As] with all insulated groups that work closely together, their unconscious biases and myopia tend to become new systems of belief and accepted behaviors over time.” This like-mindedness starts in the universities where big tech companies recruit their talent, and from which the pioneers of AI hailed. In U.S. universities, computer science programs focus mostly on hard engineering skills: programming, systems engineering, math. When it comes to AI, students concentrate on machine learning algorithms, natural language processing, computer vision, and other technical skills. There’s little room for anthropology, philosophy, and ethics; those subjects are often overlooked or offered as electives. Thirty years ago, when algorithms did not yet dominate our lives, this would not have been much of a problem. But today, AI is slowly but surely finding its way into sensitive areas such as processing loan applications and making hiring decisions. In these settings, the algorithms reflect the unconscious biases, preferences, and blind spots of the people who create them. Fortunately, the cases of algorithmic bias that have come to light were quickly followed by apologies and fixes from the respective companies. Unfortunately, most of them were discovered only when someone stumbled on them by chance. What we don’t know is how many other hidden ways AI algorithms are discriminating against people without their knowledge. These are paper cuts, causing small disadvantages that might go unnoticed by the individual but can have massive effects at scale. And when the people creating AI systems are blind to their own biases, they surely won’t know where to look for problems. Why don’t universities fix their programs? Because technology is moving faster than academia.
“A single, required ethics course—specifically built for and tailored to students studying AI—won’t do the trick if the material isn’t current and especially if what’s being taught doesn’t reverberate throughout other areas of the curriculum,” Webb writes. And universities can’t press pause to rethink and restructure their courses. “Universities want to show a strong record of employed graduates, and employers want graduates with hard skills. The Big Nine are partners with these universities, which rely on their funding and resources,” Webb writes in The Big Nine. But why don’t tech companies change their norms and criteria?
The profit-driven AI market
Throughout its history, AI research has gone through a series of summers and winters: periods of hype and excitement (and a lot of money thrown at AI research) followed by disillusionment and a drying up of funding when the technologies failed to deliver on their promises. The success of deep neural networks in recent years has rejuvenated interest in the field. But research on neural networks is extremely expensive and requires vast amounts of data and compute resources. The mounting costs of deep learning research have pushed AI scientists and research labs into the arms of large tech companies, whose deep pockets allow the scientists to continue their research. But these companies are also driven by market forces and expect a return on their investment. “There is tremendous pressure for the G-MAFIA to build practical and commercial applications for AI as quickly as possible. In the digital space, investors have grown accustomed to quick wins and windfalls,” Webb writes in The Big Nine. The direct result of this drive is the premature and hasty release of “AI-powered” products to the market, which means developers don’t have time to weigh their negative ramifications. The less noticed consequence is the commercialization of AI research: scientific research labs must direct at least part of their resources toward profitable products in order to keep their investors happy and secure the next round of funding. We’ve already seen this happen with UK-based DeepMind, acquired by Google in 2014, and San Francisco–based OpenAI, which receives funding from Microsoft. DeepMind has created an “applied” division that works on commercial AI products. OpenAI has pledged to license its “pre-AGI” technologies to its investors, which for the moment include only Microsoft. Why aren’t tech companies and their use of AI regulated?
“In the United States, the G-MAFIA wield significant power and influence over government in part because of America’s market economy system and because we have a strong cultural aversion toward strong government control of business,” Webb writes. But the situation is growing increasingly dangerous as AI and the technology created by the G-MAFIA continue to permeate every aspect of our lives. Per Webb: “Sometime in the next decade, the rest of the AI ecosystem will converge around just a few G-MAFIA systems. All the startups and players on the periphery—not to mention you and me—will have to accept a new order and pledge our allegiance to just a few commercial providers who now act as the operating systems for everyday life. Once your data, gadgets, appliances, cars, and services are entangled, you’ll be locked in. As you buy more stuff—like mobile phones, connected refrigerators, or smart earbuds—you’ll find that the G-MAFIA has become an operating system for your everyday life. Humanity is being made an offer that we just can’t refuse.”
The AI-powered surveillance machine
In China, where the state is using every tool at its disposal—including AI—to consolidate its power, the situation is very different but no less dangerous. The Chinese government understands well the implications and potential of advanced AI, and it has already laid out a roadmap to achieve AI dominance by 2030. In contrast to the U.S., in China the government exerts full control over AI companies. BAT are legally obliged to put all of their data at the disposal of the authorities and to enable the state to conduct mass surveillance and control citizens through their technologies. One of the best-known instances of the government’s initiatives is the infamous Sesame Credit social scoring system, which employs AI algorithms and the platforms of BAT to keep a close watch on the behavior of Chinese citizens. The system is supposed to incentivize good behavior, such as abiding by the rules and keeping a good banking record, while punishing bad behavior such as playing video games late into the night and jaywalking. But it is also a tool to keep an eye on political dissidents and marginalize those who are not aligned with the views of the ruling party. What’s in it for BAT? “State-level surveillance is enabled by the BAT, who are in turn emboldened through China’s various institutional and industrial policies,” Webb writes. This is why the three companies are flourishing, together commanding a vast share of China’s economy. Webb also spells out another warning that is often ignored: the AI brain drain caused by Chinese initiatives. “China is actively draining professors and researchers away from AI’s hubs in Canada and the United States, offering them attractive repatriation packages,” she writes. “There’s already a shortage of trained data scientists and machine-learning specialists. Siphoning off people will soon create a talent vacuum in the West. By far, this is China’s smartest long-term play—because it deprives the West of its ability to compete in the future.”
What happens if we don’t fix AI?
“AI’s consumerism model in the United States isn’t inherently evil. Neither is China’s government-centralized model. AI itself isn’t necessarily harmful to society,” Webb writes. “However, the G-MAFIA are profit-driven, publicly traded companies that must answer to Wall Street, regardless of the altruistic intentions of their leaders and employees. In China, the BAT are beholden to the Chinese government, which has already decided what’s best for the Chinese.” And what’s best for Wall Street and the Chinese government is not necessarily in the best interests of humanity. As we’ve discussed, we’re already bleeding from many paper cuts, and the situation will gradually grow worse if AI research and development is not steered in the right direction. “It’s difficult to wrap our heads around potential crises and opportunities before they’ve already happened, and that’s why we tend to stick to our existing narratives. That’s why we reference killer robots rather than paper cuts. Why we fetishize the future of AI rather than fearing the many algorithms that learn from our data,” Webb warns. In The Big Nine, Webb lays out three potential roadmaps for the future of AI, two of which are disastrous. In the “pragmatic scenario,” AI stakeholders will acknowledge the problems but will only make minor changes. In the U.S., the government and the G-MAFIA will not come together to make sure AI benefits everyone. The paper cuts will multiply. Adversarial attacks, reward hacking, incomplete AI systems, and algorithmic discrimination will continue to harm users across the world. Worried or not, the companies creating AI systems won’t do much because they are under constant pressure to get products to market. People will lose ownership of their data, their privacy, their identities. The social and economic divide will continue to grow.
Technological and economic power will continue to consolidate in a very few companies, which will continue to compete for user attention and monetization potential and will bombard us with ads everywhere. “Rather than bringing us together, AI has effectively and efficiently split us all apart,” Webb warns. Meanwhile, in China, the government will continue to exert centralized control and use AI to consolidate its power. It will use its leverage to apply AI to its security and military apparatus and move toward developing human-level AI. It will eventually launch subtle AI-powered attacks and hold the digital infrastructure of the U.S. hostage. “Humanity is on the brink of a terrifying ASI [artificial super intelligence] that has been developed by a country that does not share our democratic values and ideals,” Webb warns. In the “catastrophic scenario,” the G-MAFIA will continue their unabated competition and will eventually establish their own version of China’s social score on citizens in different countries. People will lack the power to decide even the smallest things in their lives. The G-MAFIA will cause a divide among the people as everyone becomes locked into one of a few incompatible platforms that expand into all aspects of their lives. AI will reshape the social fabric. “America and its allies, who once celebrated in the G-MAFIA’s successes, are living under a system of AI totalitarianism,” Webb writes. “Citizens throughout China and all the countries supported by China’s direct investment and infrastructure find that they, too, are living under a pervasive apparatus of AI-powered punishment and reward.” China expands its AI dominion by exporting its technology and surveillance capabilities to other countries. Those countries inevitably become satellite states of the Chinese Communist Party and part of its AI-powered surveillance regime. The rivalry between China and the U.S. and its allies comes to a head when one of the parties develops superintelligent AI and annihilates the other.
GAIA: The plan to set AI on the right course
Not all is gloomy. In her book, Webb provides a series of steps that can set AI on the right course and make sure it will benefit all of humanity. Key among them is the formation of the Global Alliance on Intelligence Augmentation, or GAIA, an international body that includes AI researchers, sociologists, economists, game theorists, futurists, and political scientists from all member countries. GAIA will also represent all socioeconomic, gender, racial, religious, political, and sexual diversities. “[GAIA members] agree to facilitate and cooperate on shared AI initiatives and policies, and over time they exert enough influence and control that an apocalypse—either because of AGI, ASI, or China’s use of AI to oppress citizens—is prevented,” Webb writes. Member nations of GAIA will collaborate to develop AI frameworks, standards, and best practices. Webb describes it as “a new social contract between citizens and the Big Nine” that is “based on trust and collaboration.” Such a body can bring about the “optimistic scenario,” in which AI is a force for good. Citizens benefit from transparency, standardized protocols, choice of technology, and ownership of data. AI complements human cognition, provides predictive care to everyone, fights climate change, finds and filters out misinformation on social media, and more. Under the guidance of GAIA, AI brings all people together. All states, including China, will be invited to join the alliance. If they don’t, their ambitions to extend their surveillance states will be held in check by a powerful global coalition that uses its technological and economic advantage for the good of all humanity. No government will be able to prey on poor countries to expand its own AI dominion. GAIA will provide a fairer alternative in which no state is forced to trade the wellbeing of its citizens for survival.
That sounds easier said than done, but as Webb explains in her book, it is a path built one step, one brick, one pebble at a time. The Big Nine will play a crucial part in the future of AI, but we can’t let them do it alone. “Safe, beneficial technology isn’t the result of hope and happenstance. It is the product of courageous leadership and of dedicated, ongoing collaborations,” Webb writes. “The Big Nine are under intense pressure—from Wall Street in the United States and Beijing in China—to fulfill shortsighted expectations, even at great cost to our futures. We must empower and embolden the Big Nine to shift the trajectory of artificial intelligence, because without a groundswell of support from us, they cannot and will not do it on their own.” This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.