The past couple of years have shown the incredible capabilities of artificial intelligence, as well as its many risks. For James Manyika, Google's senior vice president of technology and society, that means spending a lot of time thinking about what could go wrong.
“These technologies come with an extraordinary range of risks and challenges,” Manyika said at Fortune's Brainstorm A.I. conference in San Francisco on Monday, citing everything from A.I.'s impact on the labor market, to the technology not working as expected, to the deliberate misuse of A.I.
During his on-stage conversation with Fortune CEO Alan Murray, Manyika pointed to the many recent advances in A.I. in fields such as biology, mathematics and physics.
“We’re starting to be able to show that A.I. can actually help us make extraordinary breakthroughs in these foundational fields that are going to be incredibly useful,” he said.
At Google in particular, that means using A.I. to improve existing products like search, as well as creating new products that would not be possible without advances in A.I. The Waymo self-driving cars being developed by Google-parent Alphabet are one such example. And a “whole range of new products that are only possible through A.I.” are in the pipeline at Google, Manyika said.
Manyika started as Google’s first SVP of technology and society in January, reporting directly to the firm’s CEO Sundar Pichai. His role is to advance the company’s understanding of how technology affects society, the economy, and the environment.
“My job is not so much to monitor, but to work with our teams to make sure we are building the most useful technologies and doing it responsibly,” Manyika said.
One doesn’t have to look far to find examples of the nefarious misuse of artificial intelligence these days. OpenAI’s newest A.I. language model was quickly co-opted by users to tell them how to shoplift and make explosives, and it took just one weekend for Meta’s new A.I. chatbot to reply to users with anti-Semitic comments.
The regulatory and policy landscape for A.I. still has a long way to go. Some suggest that the technology is too new for heavy regulation to be introduced, while others (like Tesla CEO Elon Musk) say we need preventive government intervention.
Manyika said that he, and many of his peers, would welcome more regulatory involvement in the field of A.I.
“I think you’ll find that many of us embrace regulation because we’re going to have to be thoughtful about when is it appropriate to use these technologies, how do we put them out into the world,” he said.
While thinking about A.I. is only part of Manyika’s role at Google, it’s a part of the internet company that comes with a lot of baggage. In 2018, many Google employees protested the company’s A.I. work with the U.S. military. And in 2020, the company was embroiled in controversy after it fired the technical co-lead of its Ethical Artificial Intelligence team, Timnit Gebru, who was critical of the firm’s natural language processing models.
Manyika didn’t address the past controversies, but instead focused on the road ahead for the firm — both in terms of the opportunities and potential pitfalls.
“There’s a whole range of things we have to be thoughtful about,” Manyika said.