
Enterprise AI Analysis

Artificial Intelligence: Generative AI by Peter J. Denning

An in-depth look into the capabilities, limitations, and future implications of Large Language Models (LLMs) for enterprise adoption, based on Peter J. Denning's Ubiquity Symposium article.

Executive Impact Summary

Large language models (LLMs) are the first neural network machines capable of carrying on conversations with humans. They are trained on billions of words of text scraped from the internet. They generate text responses to text inputs. They have transformed the public awareness of artificial intelligence, bringing on reactions ranging from astonishment and awe to trepidation and horror. They have spurred massive investments in new tools for drafting texts, summarizing conversations, summarizing literature, generating images, coding simple programs, supporting education, and amusing humans. Experience with them has shown them likely to respond with fabrications (called “hallucinations”) that severely undermine their trustworthiness and make them unsafe for critical applications. Here, we will examine the limitations of LLMs imposed by their design and function. These are not bugs but are inherent limitations of the technology. The same limitations make it unlikely that LLM machines will ever be capable of performing all human tasks at the skill levels of humans.

Key figures from the article: the 20%-or-more share of LLM fabrications that go undetected by new detection methods (p. 6), and ChatGPT-4's training cost and duration (p. 5).

Deep Analysis & Enterprise Applications

The sections below unpack the article's key findings and their enterprise implications.

Large language models (LLMs) are artificial neural networks trained on vast internet text data to predict the next probable word. This statistical inference capability allows them to generate human-like text, but it also fundamentally limits their understanding and capacity for true creativity. Their 'creations' are statistically probable inferences, not original thought.
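
To make the "next probable word" mechanism concrete, below is a minimal sketch of an LLM-style sampler in Python. The toy vocabulary and hand-written probabilities are illustrative assumptions; a real model computes these distributions with a neural network over tens of thousands of tokens.

```python
import random

# Toy next-token distributions, keyed by the preceding word.
# These hand-written probabilities are purely illustrative;
# a real LLM computes them with a trained neural network.
NEXT_TOKEN_PROBS = {
    "the": [("cat", 0.4), ("dog", 0.35), ("theorem", 0.25)],
    "cat": [("sat", 0.6), ("ran", 0.4)],
    "dog": [("barked", 0.7), ("sat", 0.3)],
}

def sample_next(word: str) -> str:
    """Draw the next token in proportion to its probability."""
    tokens, weights = zip(*NEXT_TOKEN_PROBS[word])
    return random.choices(tokens, weights=weights, k=1)[0]

def generate(start: str, max_len: int = 3) -> str:
    """Generate text by repeatedly sampling the next token."""
    words = [start]
    while len(words) < max_len and words[-1] in NEXT_TOKEN_PROBS:
        words.append(sample_next(words[-1]))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat"
```

Nothing in this procedure checks whether the generated sentence is true; the output is only a statistically plausible continuation of the training data.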

A significant challenge with LLMs is the 'hallucination problem,' where they generate false or nonsensical information. This issue stems from their statistical nature, making them unable to verify truth or distinguish reality from fabrication. Furthermore, the quality and inherent biases of training data, including 'synthetic data' that degrades over generations, pose serious trustworthiness concerns.

Humans possess abilities that LLMs cannot replicate: caring, making commitments, experiencing emotions, sharing a common background, and acting through a body. Chomsky notes that LLMs learn language differently from humans, leading to fundamental gaps in true understanding, ethics, and responsibility. Machines cannot genuinely "be in language" as humans are.

While LLMs offer significant productivity gains on narrow tasks such as drafting initial code or summarizing text, inherent limitations prevent them from replacing human programmers on large, trustworthy systems: they make code errors, lack the tacit knowledge needed for complex design, and cannot discern truth. Claims that LLMs will "outsmart us" are unfounded: they have no access to the vast majority of human knowledge (tacit, non-digital, non-English) and lack the fundamental human capacities for judgment and responsibility.

Enterprise Process Flow

Basic automation → Rule-based systems → Supervised learning → Unsupervised learning → Reinforcement learning → Generative AI → Human-machine interaction AI → Aspirational AI
💡 Stochastic Parrots: LLMs are described as "stochastic parrots": they repeat back composites of what they have been told rather than creating new knowledge.

The Hallucination Problem: Fabricating Facts

LLMs are prone to generating false and nonsensical answers, known as 'hallucinations.' A study attempting to use ChatGPT-3 to discover historical facts about computer pioneer Percy Ludgate revealed 'astonishing fabrications.' This fundamental inability to distinguish truth from falsehood makes LLMs unsafe for critical applications.

Outcome: New detection methods still miss 20% or more of fabrications, highlighting the inherent untrustworthiness of LLM outputs without independent verification. From a statistical perspective, every LLM output is a fabrication: the machine has no concept of truth or reality.
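
Because the model itself has no notion of truth, verification must happen outside the model. The sketch below illustrates that idea under strong simplifying assumptions: generated claims are checked against an independently curated fact store, and anything unmatched is flagged rather than trusted. The `FACT_STORE` entries and the exact-match strategy are hypothetical placeholders.

```python
# Independently verified statements, e.g. from curated sources.
# The entries here are illustrative placeholders.
FACT_STORE = {
    "percy ludgate designed an analytical machine in 1909",
    "percy ludgate was an irish accountant",
}

def normalize(claim: str) -> str:
    """Lowercase and collapse whitespace for comparison."""
    return " ".join(claim.lower().split())

def verify(claims: list[str]) -> list[tuple[str, bool]]:
    """Label each generated claim as verified or unverified.

    Unverified does not mean false; it means the claim cannot
    be trusted without an independent source.
    """
    return [(c, normalize(c) in FACT_STORE) for c in claims]

llm_output = [
    "Percy Ludgate designed an analytical machine in 1909",
    "Percy Ludgate corresponded regularly with Charles Babbage",  # fabrication
]
for claim, ok in verify(llm_output):
    print(("VERIFIED  " if ok else "UNVERIFIED") + " | " + claim)
```

In practice the fact store would be a retrieval system over vetted sources plus human review; the point is that the trust decision is made by something other than the LLM.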

Capability: Humans vs. LLMs

Care & Concern
  • Humans: deeply care, form shared concerns, make commitments.
  • LLMs: cannot care, have no conscience, are unaware of broader context.
Trustworthiness
  • Humans: can distinguish truth, make and keep commitments, take responsibility.
  • LLMs: cannot distinguish truth from falsehood, cannot make commitments; no ethics, amoral.
Creativity
  • Humans: true creativity (e.g., Einstein's relativity), generate improbable inferences.
  • LLMs: statistical inferences, "probable" rather than original creations.
Knowledge Access
  • Humans: symbolic, tacit (performance skills), and background context (common sense).
  • LLMs: primarily symbolic, limited to training data, no tacit knowledge.
Learning Language
  • Humans: immersed in human communities, practices, and culture.
  • LLMs: scrape huge text databases, fundamentally different from human learning.
⚠️ Mad Cow Disease: Synthetic data generated by LLMs and fed back into the training of future models causes gradual degradation, described as the "mad cow disease of the internet brain."
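
The degradation mechanism can be illustrated with a small simulation: if each "generation" of a model is fit only to samples drawn from the previous generation's output, estimation errors compound. This toy Gaussian version is an assumed stand-in for LLM training, not an experiment from the article.

```python
import random
import statistics

def train_generation(data):
    """'Train' a model by fitting a Gaussian to the data."""
    return statistics.mean(data), statistics.stdev(data)

def generate_synthetic(mu, sigma, n):
    """Sample synthetic data from the fitted model."""
    return [random.gauss(mu, sigma) for _ in range(n)]

# Generation 0: real data from a known distribution (mean 0, std 1).
data = generate_synthetic(0.0, 1.0, 200)

for gen in range(6):
    mu, sigma = train_generation(data)
    print(f"generation {gen}: mean={mu:+.3f}, std={sigma:.3f}")
    # Each new model is trained only on the previous model's output.
    data = generate_synthetic(mu, sigma, 200)
```

Across generations the estimated mean drifts and the spread tends to narrow, a simplified analogue of models losing the rare "tails" of real human text when trained on their own synthetic output.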

Calculate Your Potential AI Efficiency Gains

Estimate the impact of AI automation on your operational efficiency and cost savings. Adjust the parameters to see how Generative AI could transform your enterprise.

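The estimate behind such a calculator is simple arithmetic. Here is a minimal sketch; the formula (hours automated × hourly cost × adoption rate) and all sample inputs are illustrative assumptions, not benchmarks.

```python
def efficiency_gains(employees: int,
                     hours_saved_per_week: float,
                     hourly_cost: float,
                     adoption_rate: float = 0.8,
                     weeks_per_year: int = 48):
    """Estimate annual hours reclaimed and cost savings.

    All parameters are assumptions to be replaced with your
    organization's own figures.
    """
    hours = employees * hours_saved_per_week * weeks_per_year * adoption_rate
    return hours, hours * hourly_cost

hours, savings = efficiency_gains(employees=50,
                                  hours_saved_per_week=4,
                                  hourly_cost=60.0)
print(f"Annual hours reclaimed: {hours:,.0f}")
print(f"Annual cost savings:   ${savings:,.0f}")
```

Replace the inputs with your own staffing and cost figures to get a first-order estimate.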

Your AI Implementation Roadmap

Our phased approach ensures a smooth and effective integration of Generative AI, tailored to your enterprise's unique needs and objectives.

Discovery & Strategy

Comprehensive assessment of current workflows, identification of high-impact AI opportunities, and development of a tailored AI strategy aligned with business goals.

Pilot & Prototyping

Development and testing of initial AI prototypes for selected use cases, demonstrating feasibility and measuring early ROI in a controlled environment.

Integration & Scaling

Seamless integration of validated AI solutions into existing systems, training of personnel, and scaling across relevant departments for maximum impact.

Optimization & Governance

Continuous monitoring, performance optimization, and establishment of robust AI governance frameworks to ensure ethical use, compliance, and sustained value.

Ready to Harness AI Responsibly?

Navigating the complexities and promises of Generative AI requires expert guidance. Let's discuss how your enterprise can leverage AI's strengths while mitigating its inherent risks.

Ready to Get Started?

Book your free consultation and let's discuss your AI strategy.