
  • Let me share a paradox.

  • For the last 10 years,

  • many companies have been trying to become less bureaucratic,

  • to have fewer central rules and procedures,

  • more autonomy for their local teams to be more agile.

  • And now they are pushing artificial intelligence, AI,

  • unaware that cool technology

  • might make them more bureaucratic than ever.

  • Why?

  • Because AI operates just like bureaucracies.

  • The essence of bureaucracy

  • is to favor rules and procedures over human judgment.

  • And AI decides solely based on rules.

  • Many rules, inferred from past data,

  • but only rules.

  • And if human judgment is not kept in the loop,

  • AI will bring a terrifying form of new bureaucracy --

  • I call it "algocracy" --

  • where AI will take more and more critical decisions by the rules

  • outside of any human control.

  • Is there a real risk?

  • Yes.

  • I'm leading a team of 800 AI specialists.

  • We have deployed over 100 customized AI solutions

  • for large companies around the world.

  • And I see too many corporate executives behaving like bureaucrats from the past.

  • They want to take costly, old-fashioned humans out of the loop

  • and rely only upon AI to take decisions.

  • I call this the "human-zero mindset."

  • And why is it so tempting?

  • Because the other route, "Human plus AI," is long,

  • costly and difficult.

  • Business teams, tech teams, data-science teams

  • have to iterate for months

  • to craft exactly how humans and AI can best work together.

  • Long, costly and difficult.

  • But the reward is huge.

  • A recent survey from BCG and MIT

  • shows that 18 percent of companies in the world

  • are pioneering AI,

  • making money with it.

  • Those companies focus 80 percent of their AI initiatives

  • on effectiveness and growth,

  • taking better decisions --

  • not replacing humans with AI to save costs.

  • Why is it important to keep humans in the loop?

  • Simply because, left alone, AI can do very dumb things.

  • Sometimes with no consequences, like in this tweet.

  • "Dear Amazon,

  • I bought a toilet seat.

  • Necessity, not desire.

  • I do not collect them,

  • I'm not a toilet-seat addict.

  • No matter how temptingly you email me,

  • I am not going to think, 'Oh, go on, then,

  • one more toilet seat, I'll treat myself.' "

  • (Laughter)

  • Sometimes, with more consequence, like in this other tweet.

  • "Had the same situation

  • with my mother's burial urn."

  • (Laughter)

  • "For months after her death,

  • I got messages from Amazon, saying, 'If you liked that ...' "

  • (Laughter)

  • Sometimes with worse consequences.

  • Take an AI engine rejecting a student's university application.

  • Why?

  • Because it has "learned," on past data,

  • the characteristics of students who passed and who failed.

  • Some are obvious, like GPAs.

  • But if, in the past, all students from a given postal code have failed,

  • it is very likely that AI will make this a rule

  • and will reject every student with this postal code,

  • not giving anyone the opportunity to prove the rule wrong.
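
To make the mechanics concrete, here is a minimal sketch of how a model can turn that pattern into a hard rule. The data, feature names, and admissions setup are all invented for illustration; this is not any real admissions system.

```python
# Hypothetical sketch: a tiny decision tree trained on invented
# admissions data, showing how a postal-code rule can emerge.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Toy history: every past applicant from postal code 75001 failed.
past = pd.DataFrame({
    "gpa":         [3.9, 3.8, 2.5, 3.7, 2.0, 3.9],
    "postal_code": [10001, 10001, 10001, 75001, 75001, 75001],
    "passed":      [1, 1, 1, 0, 0, 0],
})

model = DecisionTreeClassifier(random_state=0)
model.fit(past[["gpa", "postal_code"]], past["passed"])

# A strong applicant with the "wrong" postal code is rejected outright:
applicant = pd.DataFrame({"gpa": [4.0], "postal_code": [75001]})
print(model.predict(applicant))  # [0] -- rejected; GPA never consulted
```

Nothing in this pipeline asks whether the postal-code split is cause or coincidence; the tree simply encodes it as a rule, which is exactly the failure mode described above.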

  • And no one can check all the rules,

  • because advanced AI is constantly learning.

  • And if humans are kept out of the room,

  • there comes the algocratic nightmare.

  • Who is accountable for rejecting the student?

  • No one. AI did.

  • Is it fair? Yes.

  • The same set of objective rules has been applied to everyone.

  • Could we reconsider for this bright kid with the wrong postal code?

  • No, algos don't change their minds.

  • We have a choice here.

  • Carry on with algocracy

  • or decide to go to "Human plus AI."

  • And to do this,

  • we need to stop thinking tech first,

  • and we need to start applying the secret formula.

  • To deploy "Human plus AI,"

  • 10 percent of the effort is to code algos;

  • 20 percent to build tech around the algos,

  • collecting data, building UI, integrating into legacy systems.

  • But 70 percent, the bulk of the effort,

  • is about weaving together AI with people and processes

  • to maximize real outcomes.

  • AI fails when the 70 percent gets cut short.

  • The price tag for that can be small,

  • wasting many, many millions of dollars on useless technology.

  • Does anyone care?

  • Or real tragedies:

  • 346 casualties in the recent crashes of two B-737 aircraft

  • when pilots could not interact properly

  • with a computerized command system.

  • For a successful 70 percent,

  • the first step is to make sure that algos are coded by data scientists

  • and domain experts together.

  • Take health care for example.

  • One of our teams worked on a new drug with a slight problem.

  • When taking their first dose,

  • some patients, very few, have heart attacks.

  • So, all patients, when taking their first dose,

  • have to spend one day in hospital,

  • for monitoring, just in case.

  • Our objective was to identify patients who were at zero risk of heart attacks,

  • who could skip the day in hospital.

  • We used AI to analyze data from clinical trials,

  • to correlate ECG signal, blood composition, biomarkers,

  • with the risk of heart attack.

  • In one month,

  • our model could flag 62 percent of patients at zero risk.

  • They could skip the day in hospital.
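
Here is a minimal sketch of that flagging logic. Everything in it is an assumption made for illustration: synthetic numbers stand in for the clinical-trial data, the model is a generic gradient-boosted classifier, and the zero-risk cutoff is derived naively from held-out events. It is not the team's actual pipeline.

```python
# Illustrative sketch only: synthetic data stands in for the trials,
# and the cutoff rule is an assumption, not a validated medical protocol.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Stand-ins for ECG-derived features, blood composition, biomarkers.
X = rng.normal(size=(n, 6))
# Synthetic ground truth: risk driven by two of the six features.
true_risk = 1.0 / (1.0 + np.exp(-(2.5 * X[:, 0] + 1.5 * X[:, 1] - 4.0)))
y = rng.binomial(1, true_risk)          # 1 = first-dose heart attack

X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
p_val = model.predict_proba(X_val)[:, 1]

# Flag "zero risk" only below the lowest probability ever seen for a
# real event on held-out data -- false negatives (patients sent home
# who then have a heart attack) are the one failure doctors cannot accept.
cutoff = p_val[y_val == 1].min()
flagged = p_val < cutoff
print(f"flagged to skip the hospital day: {flagged.mean():.0%}")
print(f"events among flagged patients:   {y_val[flagged].sum()}")  # 0 here
```

Even this conservative cutoff only guarantees zero observed events on the data at hand, which is why the doctors' challenge that follows could not be answered by statistics alone.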

  • Would you be comfortable staying at home for your first dose

  • if the algo said so?

  • (Laughter)

  • Doctors were not.

  • What if we had false negatives,

  • meaning people who are told by AI they can stay at home, and die?

  • (Laughter)

  • There started our 70 percent.

  • We worked with a team of doctors

  • to check the medical logic of each variable in our model.

  • For instance, we were using the concentration of a liver enzyme

  • as a predictor,

  • for which the medical logic was not obvious.

  • The statistical signal was quite strong.

  • But what if it was a bias in our sample?

  • That predictor was taken out of the model.

  • We also took out predictors for which experts told us

  • they could not be rigorously measured by doctors in real life.
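
Continuing the sketch above, the vetting step itself is mechanically simple, which is part of the point: the hard work is the months of medical discussion, not the code. The feature names and the veto are invented.

```python
# Continuing the sketch above: doctors veto predictors whose medical
# logic is unclear or that cannot be measured rigorously in practice.
# The feature names and the veto itself are invented for illustration.
features = ["ecg_qt_interval", "heart_rate_variability", "liver_enzyme",
            "biomarker_a", "biomarker_b", "blood_oxygen"]
vetoed = {"liver_enzyme"}   # strong statistical signal, unclear medical logic

keep = [i for i, name in enumerate(features) if name not in vetoed]
model = GradientBoostingClassifier().fit(X_tr[:, keep], y_tr)
p_val = model.predict_proba(X_val[:, keep])[:, 1]
# The zero-risk cutoff is then re-derived exactly as before.
```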

  • After four months,

  • we had a model and a medical protocol.

  • They both got approved

  • by medical authorities in the US last spring,

  • resulting in far less stress for half of the patients

  • and better quality of life.

  • And an expected upside on sales of over 100 million dollars for that drug.

  • Seventy percent, weaving AI with teams and processes,

  • also means building powerful interfaces

  • for humans and AI to solve the most difficult problems together.

  • Once, we got challenged by a fashion retailer.

  • "We have the best buyers in the world.

  • Could you build an AI engine that would beat them at forecasting sales?

  • At telling how many high-end, light-green, men's XL shirts

  • we need to buy for next year?

  • At predicting, better than our designers, what will sell or not?"

  • Our team trained a model in a few weeks, on past sales data,

  • and the competition was organized with human buyers.

  • Result?

  • AI wins, reducing forecasting errors by 25 percent.
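
As a sketch of how such a competition might be scored, here is one common error metric, mean absolute percentage error. The SKUs, quantities, and forecasts are invented and do not reproduce the 25 percent figure exactly.

```python
# Hypothetical scoring of the buyers-versus-AI competition.
# SKUs, quantities, and forecasts are invented for illustration.
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error across SKUs; lower is better."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return float(np.mean(np.abs(actual - forecast) / actual))

actual_units   = [120, 80, 200, 45]    # what each SKU actually sold
human_forecast = [150, 60, 170, 80]    # buyers' calls
ai_forecast    = [130, 85, 185, 60]    # model's calls

print(f"human buyers error: {mape(actual_units, human_forecast):.0%}")
print(f"AI engine error:    {mape(actual_units, ai_forecast):.0%}")
```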

  • Human-zero champions could have tried to implement this initial model

  • and create a fight with all human buyers.

  • Have fun.

  • But we knew that human buyers had insights on fashion trends

  • that could not be found in past data.

  • There started our 70 percent.

  • We went for a second test,

  • where human buyers were reviewing quantities

  • suggested by AI

  • and could correct them if needed.
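
The interface for that second test can be as simple as an override step; this sketch and all its names are hypothetical, the real work being in deciding when buyers should trust or correct the model.

```python
# Hypothetical review loop: buyers see AI quantities and may override.
ai_suggestion = {"shirt_xl_light_green": 130, "dress_red_s": 85}

def final_order(suggestions, overrides):
    """AI quantity per SKU unless a buyer supplied a correction."""
    return {sku: overrides.get(sku, qty) for sku, qty in suggestions.items()}

buyer_overrides = {"shirt_xl_light_green": 180}  # buyer trusts a trend signal
print(final_order(ai_suggestion, buyer_overrides))
# {'shirt_xl_light_green': 180, 'dress_red_s': 85}
```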

  • Result?

  • Humans using AI ...

  • lose.