<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Brain-Science on Kingjin.io</title><link>https://kingjinsight.github.io/tags/brain-science/</link><description>Recent content in Brain-Science on Kingjin.io</description><generator>Hugo -- 0.141.0</generator><language>en-us</language><lastBuildDate>Thu, 02 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://kingjinsight.github.io/tags/brain-science/index.xml" rel="self" type="application/rss+xml"/><item><title>The Limitations of Current AI: Architecture, Embodiment, and Education</title><link>https://kingjinsight.github.io/posts/limitation_of_current_ai/</link><pubDate>Thu, 02 Apr 2026 00:00:00 +0000</pubDate><guid>https://kingjinsight.github.io/posts/limitation_of_current_ai/</guid><description>&lt;p>This post summarises a podcast discussing the limitations of current AI from multiple perspectives. &lt;a href="https://www.youtube.com/watch?v=-Et3GJRSI_0&amp;amp;t=5788s">Watch the podcast&lt;/a>&lt;/p>
&lt;h2 id="i-three-fundamental-dilemmas-facing-ai-architecture-level">I. Three Fundamental Dilemmas Facing AI (Architecture Level)&lt;/h2>
&lt;p>Professor Liu Jia argues that current transformer-based large models have three fundamental structural flaws compared to the human brain:&lt;/p>
&lt;p>&lt;strong>1. Insufficient neuron complexity.&lt;/strong> During evolution, the brain took two paths: increasing the number of neurons &lt;em>and&lt;/em> increasing their complexity. Today&amp;rsquo;s AI neurons are extremely simple — sum the inputs, pass through an activation function, done. Biological neurons, by contrast, are four-dimensional structures (three spatial dimensions + time), with their own dynamics. A single refined biological neuron has computing power equivalent to 5–8 layers of a deep neural network. Transformers have no time dimension, no partial differential equations — they are fundamentally a &amp;ldquo;2D&amp;rdquo; system.&lt;/p></description><content:encoded><![CDATA[<p>This post summarises a podcast discussing the limitations of current AI from multiple perspectives. <a href="https://www.youtube.com/watch?v=-Et3GJRSI_0&amp;t=5788s">Watch the podcast</a></p>
<h2 id="i-three-fundamental-dilemmas-facing-ai-architecture-level">I. Three Fundamental Dilemmas Facing AI (Architecture Level)</h2>
<p>Professor Liu Jia argues that current transformer-based large models have three fundamental structural flaws compared to the human brain:</p>
<p><strong>1. Insufficient neuron complexity.</strong> During evolution, the brain took two paths: increasing the number of neurons <em>and</em> increasing their complexity. Today&rsquo;s AI neurons are extremely simple — sum the inputs, pass through an activation function, done. Biological neurons, by contrast, are four-dimensional structures (three spatial dimensions + time), with their own dynamics. A single refined biological neuron has computing power equivalent to 5–8 layers of a deep neural network. Transformers have no time dimension, no partial differential equations — they are fundamentally a &ldquo;2D&rdquo; system.</p>
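<p>To make the contrast concrete, here is a minimal Python sketch (my own illustration, not from the podcast; all constants are arbitrary) of the two abstractions: a transformer-style point neuron, which is a single weighted sum pushed through an activation, versus a toy leaky integrate-and-fire neuron, whose output is a spike train that only exists because a membrane equation is integrated through time.</p>
<pre><code class="language-python">import numpy as np

def point_neuron(x, w, b=0.0):
    """Transformer-style neuron: one weighted sum through an activation, no state."""
    return max(0.0, float(np.dot(w, x)) + b)  # ReLU

def leaky_integrate_and_fire(currents, dt=1.0, tau=10.0, threshold=1.0):
    """Toy biological neuron: membrane potential v obeys dv/dt = -v/tau + I(t),
    firing a spike and resetting whenever it crosses the threshold."""
    v = 0.0
    spikes = []
    for i in currents:               # time: the dimension the point neuron lacks
        v += dt * (-v / tau + i)     # Euler step of the membrane equation
        if v >= threshold:
            spikes.append(1)         # spike...
            v = 0.0                  # ...then reset
        else:
            spikes.append(0)
    return spikes

x = np.array([0.2, 0.5, 0.1])
w = np.array([0.4, 0.3, 0.2])
print(point_neuron(x, w))                    # one number, computed once
print(leaky_integrate_and_fire([0.3] * 40))  # a spike train unfolding in time
</code></pre>
<p>The first function answers instantly and statelessly; the second cannot be evaluated without stepping through time, which is exactly the dimension the podcast says transformers are missing.</p>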
<p><strong>2. No long-range feedback connections.</strong> Roughly 40% of the brain&rsquo;s connections are long-range feedback links (e.g., the frontal lobe connecting directly back to the visual cortex), used for resolving uncertainty, forming hypotheses, and enabling creativity. Transformers are purely feedforward: at inference time, information flows through the layers once, with no architectural loop that lets later stages revise earlier representations. This, he argues, is precisely why their reasoning ability has a ceiling.</p>
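<p>As a rough illustration of what a feedback connection buys you computationally, here is a sketch of my own in the spirit of predictive coding (the podcast does not describe a specific mechanism; the tied weights and update rule below are illustrative assumptions). A higher layer sends its prediction back down, compares it with the input, and revises its hypothesis; a single feedforward pass never gets that second look.</p>
<pre><code class="language-python">import numpy as np

rng = np.random.default_rng(0)
W_up = rng.normal(size=(4, 8)) * 0.1   # bottom-up weights (sensory -> higher area)
W_down = W_up.T                        # top-down feedback (higher area -> sensory)

def feedforward(x):
    """One-shot pass, transformer-style: no second look at the input."""
    return np.tanh(W_up @ x)

def with_feedback(x, steps=20, lr=0.1):
    """Iterative loop: the top-down prediction is checked against the input,
    and the residual error refines the higher-level hypothesis each step."""
    h = np.tanh(W_up @ x)              # initial bottom-up guess
    for _ in range(steps):
        x_pred = W_down @ h            # prediction sent back to the lower layer
        error = x - x_pred             # mismatch at the "sensory" layer
        h = h + lr * (W_up @ error)    # hypothesis revised to shrink the error
    return h

x = rng.normal(size=8)
print(np.linalg.norm(x - W_down @ feedforward(x)))    # residual after one pass
print(np.linalg.norm(x - W_down @ with_feedback(x)))  # smaller after the loop
</code></pre>
<p>The loop is just gradient descent on the reconstruction error, but it is the kind of revise-your-hypothesis cycle that a feedforward-only architecture rules out.</p>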
<p><strong>3. No parallel processing capability.</strong> The essence of a transformer is &ldquo;predict the next token,&rdquo; which makes generation inherently serial. But when a human faces danger, they instantly process massive amounts of visual information in parallel — you don&rsquo;t analyse a flying object token by token; you just dodge. This kind of parallel perception is architecturally impossible for transformers.</p>
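<p>The serial dependency is easy to make explicit. In the hypothetical decode loop below, token <em>t</em> is part of the context used to compute token <em>t+1</em>, so the steps form a chain that cannot run simultaneously; the <code>next_token</code> rule is a toy stand-in for any trained model.</p>
<pre><code class="language-python">def next_token(context):
    """Toy stand-in for a trained model: deterministically maps a context
    to the next token. Any real model would sit behind the same interface."""
    return (sum(context) * 31 + len(context)) % 100

def autoregressive_decode(prompt, n_tokens):
    """Inherently serial: token t cannot be computed before token t-1,
    because token t-1 is part of token t's input context."""
    context = list(prompt)
    for _ in range(n_tokens):
        context.append(next_token(context))  # each step waits on the last
    return context[len(prompt):]

print(autoregressive_decode([7, 3], 5))  # five tokens, produced one at a time
</code></pre>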
<hr>
<h2 id="ii-two-core-challenges-of-embodied-intelligence">II. Two Core Challenges of Embodied Intelligence</h2>
<p><strong>1. Perception: parallel real-time sensing (corresponding to point 3 above).</strong> True embodied intelligence requires the ability to catch a &ldquo;fleeting&rdquo; danger the instant it appears and react, as our ancestors did in the wild. This is an open-ended problem, unlike autonomous driving, which operates on a closed dataset. Existing robots (including Optimus) have no real &ldquo;eyes&rdquo; — they are essentially pre-programmed industrial robot arms in a different shape.</p>
<p><strong>2. Motor control: the cerebellum&rsquo;s world model (System 1).</strong> Of the brain&rsquo;s 86 billion neurons, roughly 70 billion are in the cerebellum — 6–7 times the number in the cerebral cortex. The cerebellum builds an intuitive physical world model: automatically adjusting grip force when picking up a full cup versus an empty one, or casually tossing a book onto a table. These actions are effortless for humans but demand enormous computation from robots.</p>
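<p>To give a flavour of what the cerebellum&rsquo;s predictive model buys you, here is a trivial, hypothetical sketch: with a mass estimate in hand, the grip force for a two-finger pinch can be computed <em>before</em> the cup moves, instead of being discovered through slow reactive corrections. The friction coefficient and safety margin are illustrative assumptions.</p>
<pre><code class="language-python">G = 9.81  # gravitational acceleration, m/s^2

def grip_force(estimated_mass_kg, mu=0.5, safety_margin=1.4):
    """Internal-model (System 1) control: predict the normal force each finger
    must apply so that friction (2 * mu * N) supports the object's weight."""
    return safety_margin * estimated_mass_kg * G / (2 * mu)

print(grip_force(0.30))  # full cup (~0.3 kg): about 4.1 N per finger
print(grip_force(0.05))  # empty cup (~0.05 kg): about 0.7 N per finger
</code></pre>
<p>The hard part for robots is not this formula; it is producing the mass estimate from perception alone, before contact, the way the cerebellum does.</p>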
<p>Current AI addresses &ldquo;System 2&rdquo; (cerebral cortex — reasoning, language, mathematics) and has not touched &ldquo;System 1&rdquo; (cerebellum/basal ganglia — intuition, movement, perception) at all. True embodied intelligence requires <strong>a second enlightenment grounded in neuroscience</strong>.</p>
<hr>
<h2 id="iii-education-in-the-age-of-ai">III. Education in the Age of AI</h2>
<p>Liu Jia argues that today&rsquo;s education system — from primary school through university — was built to serve the first Industrial Revolution, and has become, in the AI era, what he calls a &ldquo;false demand.&rdquo; He proposes three core directions:</p>
<p><strong>1. Emphasise the concept of &ldquo;self&rdquo;.</strong> The industrial age buried the self inside collective division of labour — people just had to tighten their bolts. In the AI era, one person plus ten thousand GPUs can form a one-person company. The greatest source of motivation is <strong>one&rsquo;s own interests</strong>. The core of education should be helping children discover &ldquo;who am I and what do I want.&rdquo;</p>
<p><strong>2. Cultivate AI-native thinking.</strong> Knowing how to use AI tools is not the same as AI-native thinking. Being AI-native is a <strong>fundamental shift in cognitive paradigm</strong> — treating AI as part of your body, not as a tool. Adults struggle to make this shift because of entrenched thinking patterns; children can build this mindset from scratch far more easily.</p>
<p><strong>3. Master deductive reasoning / first-principles thinking.</strong> AI can answer any surface-level question, but finding the &ldquo;logical origin&rdquo; is something AI cannot do. Children must be trained to ask, before any decision: <strong>What is my logical starting point?</strong> This is both an anchor against getting lost in an information explosion and the root of 0-to-1 creativity.</p>
<blockquote>
<p>His closing point: <strong>education focused on memorising knowledge has become completely worthless</strong> (because large models make knowledge instantly accessible). The most important education going forward is teaching people <strong>how to become themselves</strong>.</p>
</blockquote>
]]></content:encoded></item></channel></rss>