Ding Liren’s victory as the new World Chess Champion captivated many, including me. Like many chess enthusiasts, I turned to AI, specifically the engine Stockfish, to analyze the intricacies of the games.
Chess engines like Stockfish are powerhouses of calculation, capable of evaluating positions with incredible precision. We see their outputs as numerical evaluations: positive for a White advantage, negative for a Black advantage, and near zero for a balanced position. This number, we’re often told, represents the approximate advantage in pawn units (engines actually compute in centipawns, hundredths of a pawn). But does that really tell the whole story?
The Puzzle of the Pawn Advantage
Think about it: a numerical evaluation, while objectively accurate most of the time, can feel strangely detached from the actual flow of the game. Stockfish arrives at its conclusions by considering countless factors and future possibilities. It gives us the ‘what,’ but often leaves us wondering ‘why.’
As chess players, we crave understanding. Knowing that a position is +1.5 doesn’t inherently reveal why. We’re left to dissect the position ourselves, hunting for the reasons behind the engine’s assessment. It’s like a teacher handing you the answer to a complex problem without showing the work – frustrating, right?
This isn’t unique to chess AI. Consider recommendation algorithms on platforms like Netflix or YouTube. They’re incredibly effective at keeping us engaged, judged by metrics like watch time. But can we truly grasp why a specific video is recommended? Understanding the ‘why’ behind AI’s decisions is becoming increasingly important.
Objectivity vs. Intelligibility: A Key Distinction
When we talk about making AI more intelligible, it’s not about understanding its internal code. It’s about making its reasoning and outputs more intuitive for humans.
Enter AlphaZero, developed by DeepMind. Instead of spitting out pawn-based scores like Stockfish, AlphaZero assesses positions with an expected-outcome value between -1 (a Black win) and +1 (a White win), which maps directly onto a win probability. This offers a more immediately understandable sense of winning chances than Stockfish’s potentially large, unbounded numerical evaluations.
Here’s a comparison:
- Stockfish: Outputs numerical evaluations (e.g., +1.5, -0.8).
- AlphaZero: Outputs win probabilities derived from its expected-outcome value (e.g., 0.8 for White winning, 0.2 for Black winning); the sketch below shows one way to relate the two scales.
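To make the contrast concrete, here is a minimal sketch of both mappings in Python. The logistic steepness constant k is an illustrative assumption, not Stockfish’s or any site’s official calibration; real tools fit such curves to large databases of games. Here it is tuned so that +5 pawns corresponds to roughly an 80% winning chance, matching the figures used later in this article.

```python
import math

def eval_to_win_probability(pawns: float, k: float = 0.28) -> float:
    """Map a Stockfish-style evaluation (in pawns, White-positive) to an
    estimated win probability for White via a logistic curve.
    The steepness k is an illustrative assumption, not an official
    calibration."""
    return 1.0 / (1.0 + math.exp(-k * pawns))

def value_to_win_probability(value: float) -> float:
    """Map an AlphaZero-style expected outcome in [-1, 1]
    (-1 = Black wins, +1 = White wins) to a win probability."""
    return (value + 1.0) / 2.0

if __name__ == "__main__":
    for pawns in (0.0, 1.5, 5.0, 10.0):
        p = eval_to_win_probability(pawns)
        print(f"{pawns:+5.1f} pawns -> {p:5.1%} for White")
```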
The chart below illustrates the correlation between traditional chess engine evaluations and those of Leela Chess Zero, an open-source engine built on the AlphaZero approach:

Why Probabilities Feel More Intuitive
Both Stockfish and AlphaZero provide objective evaluations. However, AlphaZero’s probabilistic approach resonates more with our human intuition. Let’s break down why:
- Relatability: A shift from 0 to +5 in Stockfish’s evaluation corresponds to moving from equality to a strong winning chance for White. AlphaZero expresses the same shift as a change in win probability, say from 50% to 80%.
- Contextual Understanding: Stockfish shows the same five-point difference between +5 and +10 as between 0 and +5, but the practical impact is not the same. Going from a balanced position to a clear advantage (0 to +5) reflects a substantial shift in momentum, while the jump from +5 to +10 usually just refines an already dominant position rather than changing the game’s trajectory. AlphaZero’s probability shifts mirror this asymmetry more closely, as the numbers after this list show.
- Emotional Connection: Watching a position shift from 50% to an 80% win probability for White matches the feeling of White gaining significant pressure. The climb from +5 to +10, while objectively correct, rarely evokes the same intuitive sense of the game’s dynamics.
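Running the earlier sketch makes this asymmetry explicit: under the illustrative curve, 0 to +5 pawns moves White’s estimated winning chances from 50% to about 80%, while the equally sized step from +5 to +10 only nudges them from 80% to roughly 94%. The same five-pawn gap means far less once the game is already decided.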
This move towards intelligibility – making AI’s outputs understandable and relatable – is a crucial step forward. While traditional AI excels at objectivity, the next wave of progress lies in bridging the gap between AI’s calculations and human comprehension. As Peter Thiel argues in Zero to One, real progress means going from ‘0 to 1’, creating something fundamentally new, rather than from ‘1 to n’, refining what already exists. Making AI more intelligible is that ‘0 to 1’ moment.
Can AI Understand Humanity?
Consider this thought-provoking question posed in a 1983 Electronic Arts advertisement:
“Can a computer make you cry? Right now, no one knows. This is partly because many would consider the very idea frivolous. But it’s also because whoever successfully answers this question must first have answered several others.
Why do we cry? Why do we laugh, or love, or smile? What are the touchstones of our emotions?
Until now, the people who asked such questions tended not to be the same people who ran software companies. Instead, they were writers, filmmakers, painters, and musicians. They were, in the traditional sense, artists.”
These questions, asked during the dawn of personal computing, remain remarkably relevant today. We’ve achieved incredible advancements in AI’s computational power and analytical abilities. However, the focus is shifting. We need to explore how AI can express its reasoning in a way that resonates with human understanding.
The Path to Intelligible AI
So, how do we make AI more intelligible?
- Enhanced Self-Explanation: AI needs to be able to articulate the ‘why’ behind its decisions. Instead of just providing an output, it should explain the underlying reasoning, the data it considered, and the logic it followed.
- Intuitive Communication: Moving beyond purely numerical outputs to more human-friendly representations, like probabilities as seen with AlphaZero, is a significant step.
- Transparency in Methodology: Just as researchers detail their methods, AI should provide insights into its processes. This fosters trust and allows for better learning and collaboration.
Imagine an AI that not only tells you the best move in chess but also explains the key tactical and strategic factors influencing its decision in a way that a human player can readily grasp.
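As a thought experiment, the output of such a system might look like the hypothetical interface sketched below. Everything here (the class, the field names, and the sample factors) is invented for illustration; no current engine exposes an API like this.

```python
from dataclasses import dataclass, field

@dataclass
class ExplainedEvaluation:
    """Hypothetical container pairing an engine's output with
    human-readable reasoning; no current engine exposes this."""
    best_move: str                                     # e.g. "Nf3" in algebraic notation
    win_probability: float                             # White's estimated chances, 0.0-1.0
    factors: list[str] = field(default_factory=list)   # plain-language reasons

    def summary(self) -> str:
        reasons = "; ".join(self.factors) or "no explanation available"
        return (f"Best move {self.best_move} "
                f"({self.win_probability:.0%} for White): {reasons}")

# Illustrative values, not output from a real engine:
analysis = ExplainedEvaluation(
    best_move="Nf3",
    win_probability=0.62,
    factors=[
        "controls the central e5 square",
        "prepares kingside castling",
        "keeps the pawn structure intact",
    ],
)
print(analysis.summary())
```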
The Future of AI: Understanding and Being Understood
To truly leverage the power of AI, we need to move beyond simply accepting its outputs. We need to understand its reasoning. Similarly, for AI to reach its full potential, it needs to communicate with us in ways that are intuitive and accessible.
The next leap in AI won’t solely be about increasing processing power. It will be about fostering a deeper understanding between humans and machines. It’s about enabling AI to communicate its insights through words, pictures, and even, perhaps one day, in ways that resonate with our emotions. By focusing on intelligibility, we unlock new possibilities for learning, collaboration, and innovation, making AI a truly powerful tool for humanity.