Sat Jul 29 19:24:43 EDT 2023

AI and LLMs

Some recent items related to Artificial Intelligence (AI) and Large Language Models (LLMs)

  • Why transformative artificial intelligence is really, really hard to achieve

    We think AI can be "transformative" in the same way the internet was, raising productivity and changing habits. But many daunting hurdles stand in the way of the accelerating growth rates predicted by some.

    1. The transformational potential of AI is constrained by its hardest problems
    2. Despite rapid progress in some AI subfields, major technical hurdles remain
    3. Even if technical AI progress continues, social and economic hurdles may limit its impact

    Moravec's paradox and Steven Pinker's 1994 observation remain relevant: "The main lesson of thirty-five years of AI research is that the hard problems are easy and the easy problems are hard." The hardest "easy" problems, like tying one's shoelaces, remain. Do breakthroughs in robotics easily follow those in generative modeling? That OpenAI disbanded its robotics team is not a strong signal.
    ...
    It seems highly unlikely to us that growth could greatly accelerate without progress in manipulating the physical world. Many current economic bottlenecks, from housing and healthcare to manufacturing and transportation, all have a sizable physical-world component.

  • Why AI is Harder Than We Think

    Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment ("AI spring") and periods of disappointment, loss of confidence, and reduced funding ("AI winter"). Even with today's seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this paper I describe four fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I conclude by discussing the open questions spurred by these fallacies, including the age-old challenge of imbuing machines with humanlike common sense.

  • AI and the automation of work

    ChatGPT and generative AI will change how we work, but how different is this to all the other waves of automation of the last 200 years? What does it mean for employment? Disruption? Coal consumption?

    As an analyst, though, I tend to prefer Hume's empiricism over Descartes - I can only analyse what we can know. We don't have AGI, and without that, we have only another wave of automation, and we don't seem to have any a priori reason why this must be more or less painful than all the others.

  • Large language models, explained with a minimum of math and jargon

    Want to really understand how large language models work? Here's a gentle primer.
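
    The primer builds up from word vectors through attention to next-word prediction. As a rough companion, the sketch below shows scaled dot-product attention with a causal mask, the core operation such primers describe, in plain NumPy. The toy dimensions, random weights, and function names here are illustrative assumptions, not code from the article.

        # Minimal sketch of a single causal attention head, the operation at the
        # heart of an LLM. Sizes, weights, and names are toy assumptions.
        import numpy as np

        def softmax(x, axis=-1):
            x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
            e = np.exp(x)
            return e / e.sum(axis=axis, keepdims=True)

        def attention(X, Wq, Wk, Wv):
            # Project each token vector into queries, keys, and values.
            Q, K, V = X @ Wq, X @ Wk, X @ Wv
            scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise query/key similarity
            # Causal mask: a token may attend only to itself and earlier tokens,
            # which is what makes next-word prediction well defined.
            scores[np.triu(np.ones_like(scores, dtype=bool), k=1)] = -np.inf
            return softmax(scores) @ V                # weighted mix of value vectors

        rng = np.random.default_rng(0)
        seq_len, d = 5, 8                             # 5 toy tokens, 8-dim vectors
        X = rng.normal(size=(seq_len, d))             # stand-ins for word vectors
        Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
        print(attention(X, Wq, Wk, Wv).shape)         # (5, 8): one updated vector per token

    In a real transformer this step runs across many heads and many layers, and the final token vectors feed a softmax over the vocabulary to score candidate next words.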


Posted by mjm | Permanent link | Comments