Technological advances always raise questions: about their benefits, costs, risks and ethics. And they require detailed, well-explained answers from the people behind them. It was for this reason that we launched our series of monthly Tech Exchange dialogues in February 2022.
Now, 18 months on, it has become clear that advances in one area of technology are raising more questions, and concerns, than any other: artificial intelligence. There are ever more people — scientists, software developers, policymakers, regulators — attempting answers.
Hence, the FT is launching AI Exchange, a new spin-off series of long-form dialogues.
Over the coming months, FT journalists will conduct in-depth interviews with those at the forefront of designing and safeguarding this rapidly evolving technology to assess how the power of AI will affect our lives.
To give a flavor of what to expect, and of the topics and arguments that will be covered, below we provide a selection of the most insightful AI discussions to date from the original (and ongoing) Tech Exchange series. They feature Aidan Gomez, co-founder of Cohere; Arvind Krishna, chief executive of IBM; Adam Selipsky, former head of Amazon Web Services; Andrew Ng, computer scientist and co-founder of Google Brain; and Helle Thorning-Schmidt, co-chair of Meta’s Oversight Board.
From October, AI Exchange will bring you the views of industry executives, investors, and senior officials in government and regulatory authorities, as well as other specialists, to help assess what the future holds.
If AI can replace labor, it’s a good thing
Arvind Krishna, chief executive of IBM, and Richard Waters, west coast editor
Richard Waters: When you talk to businesses and CEOs, they ask: ‘What do we do with this AI thing?’ What do you say?
Arvind Krishna: I always point to two or three areas initially. One is anything around customer care, answering questions from people … it is a really important area where I believe we can have a much better answer at maybe around half the current cost. Over time, it can get even lower than half, but it can take out half pretty quickly.
A second one is around internal processes. For example, every company of any size worries about promoting people, hiring people, moving people, and these need reasonably fair processes. But 90 per cent of the work involved is getting the information together. I think AI can do that, and then a human can make the final decision.
Then think of regulatory work, whether it is audits in the financial sector or in the healthcare sector: a big chunk of that could get automated using these techniques. Then there are other use cases, but they are probably harder and a bit further out … things like drug discovery, or trying to finish up chemistry.
We have a shortage of labor in the real world, because of the demographic issue the world is facing. So we have to have technologies that help. The United States is now sitting at 3.4 per cent unemployment, the lowest in 60 years. So maybe we can find tools that replace some portions of labor, and it’s a good thing this time.
RW: Do you think we are going to see winners and losers? And, if so, what is going to distinguish the winners from the losers?
AK: There are two spaces. There is business to consumer … and then there are the enterprises that are going to use these technologies. If you think of the use cases I pointed out, they are all about improving the productivity of the enterprise. And the thing about improving productivity [is that enterprises are] left with investment dollars to advantage their products. Is it R&D? Is it better marketing? Is it better sales? Is it acquiring things? There are a lot of places to go and spend that spare cash flow.
AI threat to human existence an ‘absurd’ distraction from the real risks
Aidan Gomez, co-founder of Cohere, and George Hammond, venture capital correspondent
George Hammond: [We’re now at] the sharp end of the conversation around regulation in AI, so I am interested in your view on whether there is a case — as [Elon] Musk and others have advocated — for stopping things for six months and trying to get a handle on it.
Aidan Gomez: I think the six-month pause letter is absurd. It is just categorically absurd. How would you implement a six-month clause practically? Who is pausing? And how would you enforce that? And how do we coordinate that globally? It makes no sense. The request is not plausibly implementable. So that’s the first issue with it.
The second issue is the premise: there is a lot of language talking about a superintelligent artificial general intelligence (AGI) emerging that can take over and render our species extinct, eliminate all humans. I think that is a super-dangerous narrative. I think it is irresponsible.
It’s really reckless and harmful, and it preys on the general public’s fears, because for the better part of half a century we have been creating media and sci-fi around how things could go wrong — Terminator-style bots and all these fears. So it is really preying on their fear.
GH: Are there grounds for that fear? Is what we are talking about — the development of AGI and a potential singularity moment — a technically feasible thing that could happen, albeit improbable?
AG: I think it is exceptionally improbable. There are real risks with this technology. There are reasons to fear this technology, and who uses it, and how. But to spend our time debating whether our species is going to go extinct because of a takeover by a superintelligent AGI is an absurd use of our time and the public’s mindspace.
We can now flood social media with accounts that are truly indistinguishable from a human, so extremely scalable bot farms can pump out a particular narrative. We need mitigation strategies for that. One of those is human verification, so we know which accounts are tied to an actual, living human being, and we can filter our feeds to only include the legitimate human beings who are participating in the conversation.
There are major risks. We shouldn’t have reckless deployment of end-to-end medical advice coming from a bot without a doctor’s oversight. That should not happen.
So there are real risks and there is real room for regulation. I’m not anti-regulation; I’m actually quite in favor of it. But I really hope the public knows that the more fantastical stories about risk are unfounded. They’re distractions from the conversations that should be going on.
There is no one generative AI model to rule them all
Adam Selipsky, former head of Amazon Web Services, and Richard Waters, west coast editor
Richard Waters: What can you tell us about your own work [on generative AI and large language models]? How long have you been at it?
Adam Selipsky: We’re maybe three steps into a 10K race, and the question should not be ‘Which runner is ahead three steps into the race?’, but ‘What does the course look like? What are the rules of the race going to be? Where are we trying to get to in this race?’
If you were sitting around in 1996 and asked: ‘Who is the internet company?’, it would be a silly question. But that is what you hear … ‘Who is the winner in this space?’
Generative AI is going to be a foundational set of technologies for years, maybe decades, to come. Nobody knows if the winning technologies have even been invented yet, or if the winning companies have even been formed yet.
So customers need choice. They need to be able to experiment. There will not be one model to rule them all. That is a preposterous proposition.
Companies will figure out that, for this use case, this model is best; for that use case, another model is best. That choice is going to be incredibly important.
The second concept that is critically important in this middle layer is security and privacy. A lot of the initial efforts were launched without that concept of security and privacy. As a result, I’ve talked with Fortune CIOs who have banned ChatGPT from their enterprises because they’re scared of their company data going out over the internet, becoming public, and improving the models of their competitors.
RW: I remember the early days of search engines, and the prediction that we’d have many specialised search engines for different purposes, and we ended up with one search engine that ruled them all. So might we end up with two or three big large language models?
AS: The likely scenario, given the thousands, maybe tens of thousands, of different applications and use cases for generative AI, is that there will be multiple winners. Again, think of the internet: there’s not one winner of the internet.
Do we think the world is better off with more or less intelligence?
Andrew Ng, computer scientist and co-founder of Google Brain, and Ryan McMorrow, deputy Beijing bureau chief
Ryan McMorrow: In October, the White House issued an executive order intended to increase government oversight of AI. Has it gone too far?
Andrew Ng: I think we have taken a dangerous step. With various government agencies tasked with dreaming up additional hurdles to AI development, I think we are on a path to stifling innovation and putting in place anti-competitive regulations.
We know that today’s supercomputer is tomorrow’s smartwatch. As start-ups scale and compute processing power becomes more pervasive, we will see more and more organisations run up against that threshold. Setting a compute threshold makes about as much sense as saying a device that draws more watts is systematically more dangerous than one that draws fewer: it may be true, but it is a naive way to measure risk.
RM: What would be a better way to measure risk, if we’re not using a compute threshold?
AN: When we look at applications, we can understand what it means for something to be safe or dangerous, and we can regulate it properly there. The problem with regulating the technology layer is that, because the technology is used for so many things, regulating it slows down technological progress.
At the heart of it is this question: do we think the world is better off with more or less intelligence? It is true that intelligence now comprises both human intelligence and artificial intelligence, and it is absolutely true that intelligence can be used for nefarious purposes.
But over many centuries, society has developed as humans have become better educated and smarter. I think that having more intelligence in the world, even if it is artificial intelligence, will help all of us better solve problems. Throwing up regulatory barriers against the rise of intelligence, just because it could be used for nefarious purposes, would set back society.
Not all AI-generated content is harmful
Helle Thorning-Schmidt, co-chair of Meta’s Oversight Board, and Murad Ahmed, technology news editor
Murad Ahmed: This is the year of elections. More than half the world has gone to the polls. You’ve helped raise the alarm that misinformation, particularly deepfakes, could fracture democracy. We’re midway through the year. Have you seen that prophecy come to pass?
Helle Thorning-Schmidt: If you look at different countries, you’ll see a very mixed bag. What we are seeing in India, for example, is that deepfakes are widespread. They are also widespread in Pakistan. [The technology is] being used to make people say something, even though they are dead. It is making people speak while they are in prison. It is also making famous people back parties that they might not be backing. If you look at the European elections, which obviously is something we observed deeply, it doesn’t look like AI is distorting the elections.
What we have suggested to Meta is that they need to look at the harm, and not just take something down because it is created by AI. We have also suggested that they modernise their whole community standards on moderated content, and label AI-generated content so people can see what they are dealing with. That’s what we are suggesting to Meta.
I do think we will see a change in how Meta operates in this space. I think that will end up, in a couple of years, with Meta labelling AI content as “Made with AI”, and also finding the signals of consent they need to remove content from the platforms, and doing it much faster. This is difficult, of course, and they need a good system. They also need human moderators with cultural knowledge who can help them. [Note: Meta started labelling AI content as “Made with AI” in May.]