Artificial Intelligence (AI) today can surpass human expertise in games like chess, excel on college admission tests, and even pass the bar exam. In short, AI is demonstrating unprecedented intellectual prowess. Yet it often falls short on tasks requiring common sense, making trivial mistakes.
But this isn’t surprising if you’re an AI coder or have ever worked with a burgeoning technology before. In truth, this perceived paradox reflects the current state of AI development: powerful, but still lacking fundamental human-like reasoning.
AI models, particularly large language models trained on extensive datasets, show potential sparks of artificial general intelligence (AGI). Please don’t confuse AGI with actual sentient artificial intelligence, which we don’t have and which today’s machine learning can’t evolve into. AI requires immense computational power and substantial financial resources (ChatGPT reportedly costs $700,000 A DAY to operate), which limits development to a few massive tech corporations and governments.
Of course, this fact not only raises concerns about the centralization of power but also highlights the substantial environmental impact of the massive carbon footprint associated with training these models. [LINK TO DAKOA ARTICLE ON THIS]
AI still needs to be taught common sense and ethical norms, but when you’re spending hundreds of thousands of dollars a day on your AI infrastructure, you’ll likely focus on teaching it things that can generate revenue and help cover those costs. Still, AI’s inability to perform simple common-sense tasks, like calculating the drying time for clothes based on quantity or measuring water correctly, raises serious questions about its practical reliability.
Think about it this way: if an AI system says it takes 30 hours to dry 30 clothes because five clothes take five hours, or offers a needlessly convoluted solution for measuring six liters of water when a six-liter jug is right there, it’s displaying a lack of basic logical reasoning.
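To make that failure concrete, here’s a minimal Python sketch of the drying-time trap (the five-clothes-in-five-hours setup is the classic puzzle; the numbers are illustrative). The naive answer scales linearly with the number of clothes, the way a pattern-matching model often does; the common-sense answer recognizes that clothes hanging on a line dry in parallel.

```python
def naive_drying_time(num_clothes: int, known_clothes: int = 5, known_hours: float = 5.0) -> float:
    """Linear scaling: treats drying as if clothes dry one after another."""
    return known_hours / known_clothes * num_clothes

def common_sense_drying_time(num_clothes: int, known_hours: float = 5.0) -> float:
    """Parallel reasoning: clothes on the line all dry at once, so time is constant."""
    return known_hours

print(naive_drying_time(30))         # 30.0 hours -- the trap a pattern-matcher falls into
print(common_sense_drying_time(30))  # 5.0 hours  -- the answer a human gives
```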
So, what to do? Well, first we’ll need to shift AI away from brute-force approaches and take a sustainable, humanistic view of its development. We need to democratize AI by making it smaller and more accessible, enabling broader scrutiny and input from the global research community. By integrating human norms and values, AI can become safer and more aligned with societal expectations, which, in turn, should make it more practical.
Moreover, AI should not rely solely on raw data from the web, which frequently contains biased and inaccurate information. Imagine if AI got a lot of its “facts” from Twitter/X. Instead, it should be trained on carefully curated examples and guided by human feedback. Training this way ensures that AI learns in a context reflective of diverse norms and values. Investing in data that is both crafted for educational purposes and judged by humans will help mitigate the risk of AI perpetuating existing biases.
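To sketch what “curated and human-judged” might look like in practice, here’s a toy Python example; the data, the 0–1 rating scale, and the 0.8 cutoff are all assumptions for illustration, not anyone’s production pipeline. Candidate training examples are paired with human quality ratings, and only well-rated items survive into the training set.

```python
from dataclasses import dataclass

@dataclass
class Example:
    prompt: str
    response: str
    human_rating: float  # 0.0-1.0 quality score assigned by a human reviewer

# Hypothetical pool of candidate training data: some crafted, some scraped.
candidates = [
    Example("5 shirts take 5 hours to dry in the sun. How long for 30?",
            "Still 5 hours; the shirts dry in parallel.", 0.95),
    Example("5 shirts take 5 hours to dry in the sun. How long for 30?",
            "30 hours: one hour per shirt.", 0.10),
    Example("Measure 6 liters using a 6-liter jug and a 12-liter jug.",
            "Just fill the 6-liter jug.", 0.90),
]

RATING_THRESHOLD = 0.8  # assumed cutoff for this sketch

# Keep only the examples human reviewers judged acceptable.
curated = [ex for ex in candidates if ex.human_rating >= RATING_THRESHOLD]

for ex in curated:
    print(f"train on: {ex.prompt!r} -> {ex.response!r}")
```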
Future AI development should explore innovative algorithms and learning methods that go beyond current large-scale models. These should focus more on understanding and interacting with the world in human-like ways rather than just processing vast amounts of text.
I’m talking about focusing on symbolic knowledge distillation, which can transform large language models (LLMs) into more manageable systems that offer clearer, more inspectable knowledge representations.
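The broad shape of that pipeline: a large teacher model generates candidate commonsense statements, a learned critic filters out the bad ones, and the surviving plain-text knowledge trains a much smaller student. Here’s a hedged Python sketch; the teacher and critic are stubbed out as trivial placeholders, since real models would sit behind those two functions.

```python
# A toy sketch of the symbolic knowledge distillation loop: a large
# "teacher" model proposes commonsense statements, a critic filters out
# low-quality ones, and the survivors form a small, human-inspectable
# knowledge base that can train a compact student model.
# The teacher and critic below are trivial stubs, not real models.

def teacher_generate(event: str) -> list[str]:
    """Stub for a large LLM proposing commonsense inferences about an event."""
    return [
        f"After '{event}', the person is likely tired.",
        f"After '{event}', the person probably wants water.",
        f"After '{event}', the moon turns green.",  # a bad generation to filter out
    ]

def critic_score(statement: str) -> float:
    """Stub for a learned critic; here it just flags obvious nonsense."""
    return 0.1 if "moon" in statement else 0.9

def distill(events: list[str], threshold: float = 0.5) -> list[str]:
    """Collect teacher generations that the critic accepts."""
    knowledge_base = []
    for event in events:
        for statement in teacher_generate(event):
            if critic_score(statement) >= threshold:
                knowledge_base.append(statement)
    return knowledge_base

# The resulting knowledge base is plain text: easy to read, audit, and
# use as training data for a much smaller student model.
for fact in distill(["running a marathon"]):
    print(fact)
```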
Ultimately, the journey toward AI that truly understands and navigates the human world is a complex puzzle of physical, social, and moral understanding. We must teach it how to represent human experience; if we don’t, it’ll never be able to help us in practical applications throughout our daily lives.