Andrew Ng: What Is the Bottleneck in AI Products?

Recently, Andrew Ng publicly stated that the core bottleneck in AI products now lies in product management. In his view, the key capability for product managers (PMs) is no longer project execution alone, but exceptionally high user empathy and rapid decision-making, so that product decisions can keep pace with the ever-accelerating speed of coding.

Ng argues that just as typewriters made writing easier, exposing the real bottleneck of “what to write,” agentic coding assistants have dramatically boosted coding efficiency, making “deciding what to build” the primary hurdle in AI product development. He calls this the “Product Management Bottleneck.”

In Ng’s view, product managers with “high user empathy” can make quick, intuitive decisions that are often correct. They use data to learn users’ mental models, continually refining and improving their decisions.

Ng’s insights align remarkably with “The Right-Brained Organization” theory. Developed by Qgenius on the basis of brain physiology, the theory provides a user-centered framework for enterprise management in the AI era. It likewise stresses intuition and empathy, treating them as skills that leaders must learn, cultivate, and train. Moreover, it explicitly holds that an organization’s innovative “right-brain” capabilities, such as creativity and relational thinking, must be grounded in “left-brain” logical analysis and data.

Steve Jobs voiced much the same idea as early as the 2011 iPad 2 launch: “It’s in Apple’s DNA that technology alone is not enough—it’s technology married with liberal arts, married with the humanities, that yields us the results that make our heart sing.” Jobs never framed the relationship as a prerequisite the way Qgenius’ model does, in which left-brain capabilities must be in place before right-brain talent can be unlocked, but his words point to the same fusion behind innovation that moves people.

Yet Ng’s reflections point to a deeper issue. If artificial general intelligence (AGI) truly materializes, emotion will be an unavoidable challenge. AGI is not merely a database encompassing all the world’s knowledge; it must possess a genuine “world model,” and a world model requires truly understanding emotions, perhaps even generating them, in order to make decisions comparable to a human’s.

Current large language models (LLMs) cannot achieve true emotion, but that does not mean evolution in this direction is impossible. Human language inherently carries emotion. Technically, today’s models are still limited to processing semantic tokens in linear sequences. If linear token processing gave way to higher-dimensional multimodal representations, where each token carries emotional, tonal, and other channels alongside its semantics, then the model’s compression of its inputs (including through attention) could yield far stronger “emotional experience” than today’s plain-text representations, such as Markdown, allow. That might let models produce emotions of their own: not human emotions, but ones derived from and yet distinct from humanity’s, unlocking true human-AI collaboration in the AGI era.
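To make the idea concrete, here is a minimal sketch in Python/NumPy of what such “widened” tokens might look like: each token is a purely semantic vector with a few extra affective channels appended, and a standard attention step then compresses both kinds of signal together. All names and dimensions here (`d_semantic`, `d_affect`, `self_attention`) are invented for illustration; this is a sketch of the concept, not any existing model’s architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a purely semantic embedding per token, plus a few
# extra "affective" channels (emotion, tone, ...) appended to each token.
seq_len, d_semantic, d_affect = 4, 8, 3

semantic = rng.normal(size=(seq_len, d_semantic))     # what today's LLMs see
affect = rng.normal(size=(seq_len, d_affect))         # hypothetical extra channels
tokens = np.concatenate([semantic, affect], axis=-1)  # higher-dimensional tokens


def self_attention(x, d_k=8):
    """Single-head scaled dot-product attention over the widened tokens."""
    wq, wk, wv = (rng.normal(size=(x.shape[-1], d_k)) for _ in range(3))
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(d_k)                   # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ v


out = self_attention(tokens)
print(out.shape)  # (4, 8): each output vector compresses semantics *and* affect
```

Because attention mixes every channel of the widened vectors, whatever signal rides in the affective channels is folded into the output representation along with the semantics; that joint compression is the mechanism the paragraph above gestures at.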

Perhaps the key to that step lies with a product manager like Jobs, one who deeply understood intuition and empathy.