As Bank of America investment strategist Michael Hartnett points out, we are witnessing an "AI productivity miracle bull" that has driven technology stocks to unprecedented highs relative to the S&P 500, according to data shared by Matthew Fox. Yet history cautions against such exuberance: similar surges have often preceded corrections. It is worth asking when a market recalibration might arrive amid a rally fuelled so heavily by AI advancements such as OpenAI's ChatGPT chatbot.
The robust performance of LLMs has undeniably captivated the business world, yet their ever-present limitations are now coming into focus. A pre-print posted to arXiv by Google researchers (as covered by Hasan Chowdhury) emphasises that transformers, the architecture underpinning LLMs, struggle to generalise to tasks beyond their training data. In effect, the researchers are restating what everyone learns on first training a neural net: models are inherently constrained by their training data. Build your first OCR network and this becomes incredibly obvious.
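To make that constraint concrete, here is a minimal sketch in the spirit of the OCR example (a toy illustration of the general principle, not an experiment from the Google paper; it uses scikit-learn's bundled digits dataset). A small network trained only on the digits 0-4 is shown digits 5-9 and, having no concept of "outside my training set", confidently forces them into the classes it knows.

```python
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

# Load 8x8 handwritten digits and scale pixel values to [0, 1].
X, y = load_digits(return_X_y=True)
X = X / 16.0

in_dist = y < 5        # training distribution: digits 0-4 only
out_dist = ~in_dist    # digits 5-9: never seen during training

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
clf.fit(X[in_dist], y[in_dist])

# The model cannot answer "unknown": every unseen digit is mapped onto
# one of the five trained classes, typically with high confidence.
probs = clf.predict_proba(X[out_dist])
print("mean top-class confidence on unseen digits:", probs.max(axis=1).mean())
```

The point is not that the toy model is bad; it is that, like any neural net, it can only interpolate within the distribution it was trained on.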
While AI technologies such as LLMs continue to attract attention and investment, it is annotated data that underpins their success. High-quality, well-annotated datasets are what allow neural networks to learn effectively and adapt to the nuances within vast information streams. That meticulous annotation work is where much of the real value sits: the key ingredient for advancing AI towards more sophisticated applications. Today the work is dominated by data brokers of questionable, or at least flexible, ethics who push these tasks into low-cost economies.
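How much label quality matters is easy to demonstrate. The sketch below (hypothetical and illustrative, again using scikit-learn's bundled digits dataset) corrupts a growing fraction of the training labels and measures the damage on a held-out test set.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X / 16.0, y, random_state=0)

rng = np.random.default_rng(0)
for noise in (0.0, 0.2, 0.4):
    # Corrupt a `noise` fraction of the training labels with random classes,
    # simulating sloppy annotation.
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < noise
    y_noisy[flip] = rng.integers(0, 10, size=int(flip.sum()))

    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
    clf.fit(X_train, y_noisy)
    print(f"label noise {noise:.0%}: test accuracy {clf.score(X_test, y_test):.3f}")
```

Test accuracy falls as the labels degrade: the model is only ever as good as the annotations it learns from.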
There is no denying that AI promises a substantial boost in productivity across various industries. Nevertheless, anticipation of those gains must be tempered by recognition of the upskilling costs that come with integrating AI into the workforce. With leaders expecting AI copilots to reshape working hours significantly by 2024, employees will need new skills to harness the potential of these technologies: a costly but crucial investment for the long term.
Despite the excitement around LLMs, they represent only one step toward the elusive goal of artificial general intelligence (AGI), a point that organisations like OpenAI have implicitly acknowledged. They are powerful tools within specific domains, but they do not bridge the cognitive gap to human-like generalisation. This limitation underscores the importance of setting realistic expectations for AI's trajectory and of ensuring that investment reflects its actual current capabilities.
The ongoing discussion about AI's future and capabilities has taken some sharp turns recently. While the financial markets have shown unbridled enthusiasm for technology stocks, buoyed by AI's transformative promise, scholarly work, including the reaffirmation from Google's researchers, offers a sobering view of the present state of the art. The idea that LLMs could lead us to AGI has been met with scepticism, echoing an understanding long shared among AI practitioners: neural nets do not venture beyond the confines of their training data. Meanwhile, the disproportionate rise in tech stock prices raises concerns over another impending correction. Ultimately, while we may stand at the threshold of an AI-driven productivity boom, navigating this landscape demands cautious optimism, balanced with strategic investment in skill development and responsible AI governance. It is by understanding and valuing the foundational role of annotated data that we can foster sustainable growth in a field rife with both potential and pitfalls.