Is ChatGPT Paving the Way for the Future of AI?

Google’s Bard debacle hasn’t stopped a frenzy of tech companies from copying ChatGPT’s homework.

There’s a famous “chasm theory” in the tech industry, popularized by Geoffrey Moore’s Crossing the Chasm: new products are first embraced by innovators and early adopters before they can enter the mainstream market. But history offers countless examples of an invisible “chasm” between those early adopters and the mainstream. Product developers are often misled by early adopters, building features around their demands that fail to meet the needs of the wider audience. Such products fall into the chasm and ultimately fail.

Today, many AI products are adding fresh examples to this theory. AI was first embraced by large corporations and institutions, whose algorithms helped them understand consumer behavior, identify individual customers, and deliver personalized content and services at a scale never seen before.

This is a remarkable feat, but fear accompanies it. People feel monitored and manipulated, and distrust of AI has never been higher. This is a textbook “chasm.” Yet throughout human history, new technologies have been developed to help people, not to replace or destroy them.

Today, most of the AI products we build serve organizations such as businesses. To cross the chasm, we clearly need to shift the focus of AI services from organizations to ordinary people. Only then will AI products win genuine public acceptance and put those fears to rest.

Many people call AI a new industrial revolution, but this undersells it: where the industrial revolution amplified our physical labor, AI reaches into how we think. The human brain is not perfect. We lean heavily on intuition to make decisions, and those decisions are riddled with biases. AI can identify these biases, and it can and should help us make better decisions. That is the true direction for AI.

From this perspective, ChatGPT’s product direction looks closer to the right one. But the path is not without pitfalls; it may even lead us astray. Today’s ChatGPT is still far from helping us make good decisions. It is learning and repeating human errors, producing ever more “garbage” content. AI practitioners should not simply copy ChatGPT; it may be yet another trap.
