
[Opinion] Why better AI does not depend on greater computing power

Most of you will probably have heard the news: Ding Liren is the new World Chess Champion. As a chess fan myself, I naturally followed the games, and I analyzed them using an AI engine called Stockfish.

Precisely how chess engines like Stockfish analyze and evaluate positions is opaque to us, but their outputs can be understood by a human: a positive evaluation means White is winning, an evaluation of 0 means the position is balanced, and a negative evaluation means Black is winning.

But where does this number come from? It comes from Stockfish’s own evaluation of the position: the engine weighs many factors and outputs a single number. We are often told that this number is roughly equivalent to how many pawns’ worth of advantage one side has.
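As an illustration of how such a number is read in practice, here is a minimal Python sketch using the python-chess library. It assumes python-chess is installed and that a Stockfish binary named "stockfish" is on your PATH; the depth and mate-score values are arbitrary choices for the example.

```python
import chess
import chess.engine

# Ask Stockfish for its evaluation of the starting position.
# Assumes the Stockfish binary is available on PATH as "stockfish".
board = chess.Board()
with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
    info = engine.analyse(board, chess.engine.Limit(depth=15))
    # The score is reported in centipawns from White's point of view.
    cp = info["score"].white().score(mate_score=100000)
    print(f"Evaluation: {cp / 100:+.2f} pawns for White")
```

Dividing the centipawn score by 100 gives the familiar "pawns of advantage" figure that chess sites display.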

Yet, this is a horribly unintuitive way for chess players to understand chess positions.

The number gives us an answer, yes. And the number is objectively correct most of the time since the AI has managed to take into account so many different factors, possible future moves, and much more in order to arrive at its evaluation of who is winning and what is the best move.

What is far more important is the question ‘Why is the position evaluated this way?’ And knowing the answer does nothing to illuminate the reason behind it.

As a chess player, this can be infuriating. It forces us to spend time finding the reasons for the engine’s evaluations, and oftentimes these reasons are difficult to find.

Imagine a teacher who gives you an extremely difficult problem, hands you the answer, but refuses to reason it through with you or to provide hints. You would rightly conclude that such a teacher is a poor teacher. Yet the answer is almost certainly correct, and there is good reason for it.

And this is not the only place where AI algorithms confound us. Recommendation algorithms on sites like Netflix and YouTube are judged by how long they keep us watching, yet there is very little we actually understand about them beyond the metrics we measure their success by and the data they had access to.

But understanding the reasons for their success would greatly help us, and this is where the next huge development in AI should be: making AI that is intelligible to us.

Objectivity is not the same as intelligibility

Making AI intelligible to us does not simply mean that we can read what the AI is saying; rather, it means an AI whose output can be explained in an intuitive fashion.

Stockfish’s counterpart is AlphaZero, built by DeepMind, a subsidiary of Google. Instead of evaluating chess positions and moves and outputting a seemingly arbitrary number, AlphaZero measures positions with probabilities. Any output from AlphaZero falls between -1 and 1, whereas Stockfish will often give numbers that run into the hundreds, without it being clear what these numbers mean exactly.

The chart below shows how traditional chess engine evaluations, usually denoted in centipawns, correlate with the evaluations of Leela Chess Zero (an open-source engine modeled on AlphaZero).

 

What is important is that AlphaZero and Leela still give objective evaluations, but these evaluations feel much more intuitive. Why? Because the evaluation better matches how the position feels to a human player.

Having a position go from 0 to +5 correlates with a game that went from a draw to White having a high probability of winning. But is there really much of a difference between a position going from +5 to +10? While Stockfish reports the same difference in evaluation, going from 0 to +5 roughly reflects a change from no chance of winning to an 80% chance of winning, while going from +5 to +10 only increases the chance of winning by an additional 10%.
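To illustrate this kind of saturation, here is a minimal Python sketch. The tanh curve and the scale constant are illustrative assumptions of mine, not the actual conversion used by Stockfish or Leela; the point is only that a bounded, probability-like value in [-1, 1] changes far more between 0 and +5 than between +5 and +10.

```python
import math

def bounded_value(eval_pawns: float, scale: float = 4.0) -> float:
    """Map a pawn-unit evaluation to a value in [-1, 1], the same range
    AlphaZero's output uses. The tanh curve and scale are illustrative
    assumptions, not any engine's real conversion."""
    return math.tanh(eval_pawns / scale)

for e in (0, 5, 10):
    print(f"eval {e:+} pawns -> value {bounded_value(e):+.2f}")

# Prints roughly: 0 -> +0.00, +5 -> +0.85, +10 -> +0.99.
# The first five pawns move the value by about 0.85;
# the next five add only about 0.14 because the curve has already flattened.
```

The exact numbers depend on the curve chosen, but any bounded mapping of this kind shows the same behaviour: most of the change happens early, which matches how the advantage actually feels.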

Why is this metric better? Because it correlates much more closely with how the position feels to us. Watching a position go from a drawn 0 to a +5 advantage feels like White is pressuring his opponent well while his opponent is succumbing to the pressure. On the other hand, an increase from a +5 to a +10 advantage does not feel like much, because most of the work is already done.

This is part of what I mean by making AI more intelligible: the AI in this case is more understandable. It is far more intuitive and gives an output that is not only objectively correct but also one that we can feel.

Traditional AI has always been good at being objective, but humanity will not see huge improvements from AI continuing to refine its objectivity. This is what PayPal co-founder Peter Thiel referred to as ‘going from 1 to n’. Instead, what we need is to ‘go from 0 to 1’ and do something different: make AI more intelligible to humans.

AlphaZero has achieved part of this: by making its output more intuitive, it is on the way to making AI more understandable to humans. But what more can be done?

Understanding Humanity

In 1983, Electronic Arts ran a magazine ad, in an era when personal computers were just coming into their own. In it, the company promised to fulfil the potential of the personal computer.

It begins with an important question, and some food for thought:

“Can a computer make you cry? Right now, no one knows. This is partly because many would consider the very idea frivolous. But it’s also because whoever successfully answers this question must first have answered several others.

Why do we cry? Why do we laugh, or love, or smile? What are the touchstones of our emotions?

Until now, the people who asked such questions tended not to be the same people who ran software companies. Instead, they were writers, filmmakers, painters, and musicians. They were, in the traditional sense, artists.”

These were prescient questions in 1983, when the computer was first becoming a mass-market consumer product. But they are relevant today as well. Over the past few decades, we have built AI that is better and better at calculation, objectivity, and conceptual understanding.

But today these questions, while still important, are no longer the only questions that need to be answered. New questions need to be asked: how can AI express itself in a way that humans understand, and how can humans better understand AI?

Better self-explanation by AI of the answers it outputs would provide far more clarity about what goes into its decisions and why its answers are correct. After all, when researchers publish their work, it is not just the conclusion that matters; they are expected to include a methodology, data sets, qualitative reasoning, and much more.

 

To learn from AI, humanity needs to better understand AI, and AI needs to better communicate with humanity: not just through numbers and signals and lights, but through words and pictures and emotions. Explaining the decisions it makes, the calculations it carries out, and the purpose of these operations is what AI needs to become better at.

 

The next step in AI will not be made by increasing the computing power of computers. It will be made by having AI express itself in ways that humanity can better understand, in ways that even non-programmers can grasp.
