Decadent vs. Classic: How New Media Technologies Find Their True Form

A Research Framework for Understanding Media Evolution
“We look at the present through a rear-view mirror. We march backwards into the future.”
— Marshall McLuhan
Introduction: The Thesis
When a new media technology is introduced, its first implementations are almost never examples of what will later become the medium’s “classic” or “true” use. Instead, early adopters instinctively pour the old medium into the new vessel — reading newspapers aloud on radio, filming stage plays for television, publishing digital brochures on the World Wide Web. We might call these initial, imitative uses “decadent” — not in a moral sense, but in the sense of clinging to a declining prior form. The “classic” phase arrives only once practitioners discover what the new medium can do that no prior medium could.
This pattern repeats with remarkable consistency across every major media and technology transition in history, from Gutenberg’s press to artificial intelligence. This document traces nine such transitions, drawing on the work of Marshall McLuhan, Walter Ong, Elizabeth Eisenstein, Neil Postman, Jay David Bolter, Richard Grusin, Brian Winston, Carolyn Marvin, Clay Shirky, and others — alongside specific historical examples — to explore this framework and ask: where does AI stand today?
Part I: The Theoretical Foundation
Marshall McLuhan: The Medium Is the Message
Marshall McLuhan’s foundational insight, articulated in Understanding Media: The Extensions of Man (1964), is that the medium itself — not its content — is the primary agent of social change. A light bulb has no “content” in the way a newspaper does, yet it fundamentally reshapes human activity by making night spaces usable. The medium restructures perception, not merely information delivery.
Three of McLuhan’s concepts are essential to this framework:
“The content of a new medium is always the old medium.” Speech is the content of writing. Writing is the content of print. Print is the content of the telegraph. Radio is the content of early television. This observation is the theoretical backbone of the “decadent” phase: when a new medium appears, its creators instinctively fill it with the forms and patterns of whatever came before.
“Rear-view mirror” thinking. Because new environments are invisible during their innovation phase, we can only consciously grasp them through the framework of the environment they are replacing. This creates a paradox: the mechanism that helps us feel comfortable with new technologies simultaneously obscures their most revolutionary potential.
The Tetrad of Media Effects (Laws of Media, 1988). Written with his son Eric McLuhan, this framework proposes that every medium simultaneously: (1) enhances some human capability, (2) obsolesces some prior practice, (3) retrieves something previously lost, and (4) reverses or flips into its opposite when pushed to extremes. The tetrad provides a diagnostic tool for identifying when a medium has moved from imitation to its native form — the “classic” phase emerges as the enhancement and retrieval functions become dominant, while the “decadent” phase is characterized by the medium still operating primarily within its predecessor’s obsolescence pattern.
Walter Ong: Orality and Literacy (1982)
Ong’s Orality and Literacy: The Technologizing of the Word demonstrates that the transition from oral to literate culture was not merely a change in information storage — it restructured human consciousness itself. Oral cultures think in formulaic, additive patterns; literate cultures develop analytic, subordinative reasoning. Ong also introduced the concept of “secondary orality” — the way electronic media (radio, television, digital platforms) retrieve oral characteristics (immediacy, participation, communal experience) within a literate framework. This concept presciently describes social media, podcasts, and conversational AI.
Elizabeth Eisenstein: The Printing Press as an Agent of Change (1980)
Eisenstein provides the definitive historical account of how the printing press transformed Western civilization — but crucially, not immediately. She identifies three functions unique to print that manuscripts could never provide: dissemination (mass distribution), standardization (identical copies), and preservation (reliable archival). These capabilities enabled the Protestant Reformation, the Renaissance, and the Scientific Revolution — but only after a decades-long “decadent” phase of incunabula that imitated manuscripts.
Jay David Bolter and Richard Grusin: Remediation (1999)
In Remediation: Understanding New Media, Bolter and Grusin formalize the observation that new media achieve cultural significance by refashioning prior media. They identify two complementary logics: transparent immediacy (making the viewer forget the medium exists) and hypermediacy (drawing attention to the medium itself). Every new medium oscillates between these poles as it transitions from imitation to native form.
Brian Winston: The Suppression of Radical Potential (1986)
Winston, in Misunderstanding Media, proposes that new communication technologies have inherent capacity for profound disruption, but this radical potential is systematically contained by social, economic, and institutional forces. Patent systems, established interests, and institutional inertia act as brakes, deliberately constraining new technologies to narrow applications that reinforce rather than overthrow existing power structures. This adds an important dimension to the “decadent” phase: early media mimicry is not merely a failure of imagination but is partly enforced by those who benefit from the status quo.
Carolyn Marvin: When Old Technologies Were New (1988)
Marvin examines how the telephone, phonograph, and electric light were publicly envisioned in the late 19th century, finding that new technologies consistently fall into patterns of use that replicate existing social hierarchies rather than disrupting them. Professionals struggled to control new media and preserve social order by excluding outsiders. Her work reinforces the pattern: new technologies are domesticated before they are revolutionary.
Neil Postman: Media Ecology
Postman, who formalized the field of “media ecology” at NYU in 1971 (building on McLuhan’s framework), emphasized that media are fundamentally non-neutral. The form of information transmission entails cognitive biases — and the introduction of a significant new information medium generates an entirely new culture. His work in Amusing Ourselves to Death (1985) and Technopoly (1992) warned that the “decadent” phase can persist indefinitely if a society fails to recognize the medium’s restructuring effects.
Ted Nelson: Hypertext, Xanadu, and the Vision of Liberated Text
Ted Nelson (born 1937), trained in philosophy and sociology rather than engineering, coined the terms “hypertext” and “hypermedia” in 1963 and launched Project Xanadu (begun 1960) — a comprehensive system for all human knowledge featuring two-way links, transclusion, built-in micropayments, version tracking, and permanent content addressing. His Computer Lib/Dream Machines (1974) has been called the most influential book in the history of computational media. Nelson articulated computing’s “classic” vision more completely than anyone, then watched the world adopt his concept through Berners-Lee’s World Wide Web — an implementation he considered a catastrophic simplification. Xanadu became computing’s most famous vaporware, demonstrating that articulating a medium’s true form is not the same as building it. Nelson’s critique of the web is explored in Era 5.
Additional Theorists
Henry Jenkins, in Convergence Culture (2006), examines how old and new media coexist and collide, with participatory fan cultures creating genuinely new forms. Sven Birkerts, in The Gutenberg Elegies (1994), traces the psychological costs of media transitions. Lewis Mumford, in Technics and Civilization (1934), argued that moral and political choices — not machines themselves — shape technological society. James W. Carey distinguished between transmission and ritual views of communication, suggesting that media mimicry persists partly because new media must replicate the cultural rituals of old media before establishing new ones.
Part II: Nine Eras of Media Evolution
Era 1: From Speech to Print — The Gutenberg Revolution
The Decadent Phase: Incunabula as Imitation Manuscripts (1450s–1490s)
The term “incunabula” (Latin for “cradle”) refers to books printed before 1501 — and they are a near-perfect illustration of new media mimicking old. When Johannes Gutenberg completed his Bible around 1454–1455, the result was designed to be indistinguishable from a handwritten manuscript. The Gutenberg Bible was printed in a two-column format with 42 lines per page, using typefaces that replicated medieval Gothic script. Spaces were intentionally left blank for hand-painted decorations — rubricators added red and blue ornamental letters, and illuminators painted elaborate borders, just as they had done for centuries with manuscripts.
Each copy was individually customized by different artisans, creating the illusion of unique handmade books. Purchasers could commission elaborate decorations, preserving the medieval practice of bespoke manuscript production. The irony is striking: the first product of a technology capable of producing identical copies was designed so that no two copies looked the same.
Early printers also focused almost exclusively on reproducing existing texts — Bibles, religious commentaries, classical works — rather than enabling new forms of literature. The printing press was, in its first decades, a faster scriptorium.
The Classic Phase: Mass Literacy and New Literary Forms (1500s onward)
By the turn of the 16th century, printed books began emancipating themselves from manuscript conventions: fewer abbreviations, numbered leaves (foliation), standardized typefaces optimized for readability rather than calligraphic beauty.
The medium’s true power emerged through applications impossible in manuscript culture:
- The Protestant Reformation (1517 onward): Martin Luther’s 95 Theses would have circulated only among Wittenberg scholars in manuscript form. Print made it a bestselling pamphlet distributed across Europe within a year. Between 1517 and 1525, over half a million copies of Luther’s works were printed and circulated.
- Vernacular literature and the novel: Print enabled and encouraged writing in local languages rather than Latin, leading eventually to the novel as a literary form — Miguel de Cervantes’ Don Quixote (1605) being a landmark example of a form that could not have existed as a mass medium before print.
- Scientific publishing: Eisenstein argues that print’s capacity to standardize and preserve knowledge — which had been fluid and error-prone in manuscript culture — was the precondition for the Scientific Revolution. Scholars could now build on identical texts rather than working from divergent manuscript copies.
- Newspapers and pamphlets: Cheap, rapidly produced news sheets created journalism as a profession and public opinion as a political force.
Key scholars: Elizabeth Eisenstein, The Printing Press as an Agent of Change (1980); Walter Ong, Orality and Literacy (1982); Marshall McLuhan, The Gutenberg Galaxy (1962); Adrian Johns, The Nature of the Book (1998).
Era 2: From Print to Radio — The First Broadcast Revolution
The Decadent Phase: Reading Newspapers on Air (1920s)
Early radio broadcasts directly imitated print media. The first licensed commercial radio news broadcast illustrates this precisely: on November 2, 1920, KDKA Pittsburgh broadcast presidential election returns from the rooftop of the Westinghouse building. The station acquired results by telephone from the Pittsburgh Post newspaper and simply read them aloud. The content, structure, and sourcing were identical to newspaper reporting — just vocalized.
Throughout the 1920s, radio programming offered listeners the same fare available in print and theater: news read from scripts, orchestral performances, vaudeville routines, stock market closing prices, and weather reports. Early radio drama, emerging around 1927, featured actors reading theatrical scripts — stage plays adapted for audio, retaining the narrative conventions and dramatic pacing of live theater without developing audio-native storytelling techniques.
Programs like Streamlined Shakespeare (1930s) and Lux Radio Theatre were essentially theatrical performances broadcast to radio audiences — complete scripts performed by actors, fundamentally theater minus the visual component.
The Classic Phase: Immediacy, Intimacy, Music, and the Audio Imagination (1930s–1960s)
Radio’s true form emerged when practitioners stopped thinking about what print did and started exploring what only radio could do. Three breakthroughs defined the transition — and the most consequential of them was music.
Music: Radio’s Defining “Classic” Use
If news was what radio borrowed from print, music was what radio could do that print never could: transmit the temporal, emotional, and performative experience of sound in real time to millions simultaneously. Before radio, experiencing music required either physical attendance at a live performance or ownership of a phonograph — both constrained by geography, cost, and the severe audio limitations of early recording technology (performers had to crowd around a horn; softer instruments were simply lost). Radio demolished these barriers overnight.
By the mid-1920s, music programming had become central to radio’s identity, but the transformation accelerated dramatically in the 1950s with three developments:
- The Top 40 format: Todd Storz, owner of KOWH-AM in Omaha, Nebraska, working with program director Bill Stewart, invented the Top 40 format in the early 1950s — limiting disc jockeys to only the 40 most popular songs from the Billboard charts, with relentless repetition. Gordon McLendon of KLIF-AM in Dallas perfected the commercial formula in 1953, combining tight playlists with fast-paced newscasts, jingles, and contests. The format swept the nation, making music the dominant programming category and transforming radio from a general-interest broadcast medium into a music delivery system.
- Alan Freed and the birth of rock and roll on radio: Alan Freed joined WJW in Cleveland in 1951 and began programming R&B music on his late-night show “The Moondog House,” which became wildly popular with both Black and white teenagers. Freed popularized the phrase “rock and roll” on mainstream radio and — critically — played recordings by Black artists rather than sanitized white cover versions, helping bridge racial segregation in American popular culture. He organized what is now considered the first major rock and roll concert, the Moondog Coronation Ball at the Cleveland Arena on March 21, 1952. When Freed moved to WINS in New York in 1954, his show became #1 within months, transforming rock and roll from a regional, marginalized genre into a national cultural movement — a transformation that could only have happened through radio.
- Radio as creator of simultaneous national culture: The scale was staggering. In 1923, only 1% of American households owned radios; by 1937, 75% did; ownership eventually peaked at 98%. Radio created something unprecedented: millions of people hearing the same song at the same moment across vast geographic distances. This simultaneity — shared emotional experience in real time — was structurally impossible in print, theater, or phonograph culture. Rock and roll, R&B crossover, and the British Invasion of the mid-1960s (when British bands dominated American AM radio after the Beatles’ arrival on February 7, 1964) were all fundamentally radio-enabled phenomena. The music existed before radio broadcast it; the nationwide cultural movements could not.
As Susan Douglas argues in Listening In: Radio and the American Imagination (1999), listening to radio transformed generational identities and shaped American views of race, gender, and cultural belonging in ways that no prior medium could accomplish. Music was the vehicle through which radio achieved this cultural power — not news, not drama, not information delivery. Music was what radio was for.
Political Intimacy and Dramatic Presence
Beyond music, radio discovered two additional native capabilities:
- FDR’s Fireside Chats (1933 onward): Beginning March 12, 1933, just eight days into his presidency, Roosevelt used radio for intimate, conversational address to the nation during the banking crisis. These were not news broadcasts — they were a leader speaking directly and personally to citizens in their homes. The framing was deliberate: “the president wants to come into your home and sit at your fireside for a little fireside chat.” With 41% of U.S. cities operating radio stations, this created a national political experience impossible in print.
- Orson Welles’ War of the Worlds (October 30, 1938): The Mercury Theatre broadcast demonstrated radio’s unique power to create immersive, immediate emotional reality through sound alone. The reported panic (likely exaggerated, but culturally significant) revealed that radio could generate a sense of present-tense reality that print and theater never could.
- Live breaking news: Rather than reading yesterday’s newspapers, radio could transmit events as they happened — Edward R. Murrow’s reporting from London during the Blitz (1940) exemplified radio’s native capacity for immediacy and presence.
- Audio-native storytelling: By the 1940s, radio drama developed its own language — sound design creating environments without visuals, dialogue patterns optimized for listeners rather than audiences, voice acting calibrated for intimate home listening rather than theatrical projection.
Key scholars: Susan Douglas, Listening In (1999); Michele Hilmes, Radio Voices (1997); Jason Loviglio, Radio’s Intimate Public (2005).
Era 3: From Radio to Television — The Visual Revolution
The Decadent Phase: Radio with Pictures (1940s–1950s)
Early television explicitly copied radio’s format with static visual elements added. The phrase used at the time was telling: “radio with pictures.” Established radio stars, programs, formats, and advertisers moved directly to television without fundamental format changes.
- TV news as studio reading: NBC’s Camel News Caravan with John Cameron Swayze (1949), considered the first major national TV newscast, featured an anchor sitting at a studio desk reading scripts with occasional newsreel footage edited in. The format was identical to radio news — just with the newsreader’s face visible. Technical limitations enforced the mimicry: cameras were massive and immobile, and videotape was not widely used until the late 1950s, so field reports required film that had to be physically transported, developed, and edited.
- Vaudeville on screen: CBS launched its first variety show in 1948 as “vaudeo” (vaudeville plus video), featuring acrobats, jugglers, magicians, and comedians performing stage acts on camera. The Ed Sullivan Show (1948–1971) was essentially a proscenium stage viewed through a camera. Performers worked in highly scripted routines transferred directly from radio and stage.
- Static theatrical framing: Early television drama used single-camera setups mimicking theatrical blocking. Directors came from theater and radio, bringing those conventions with them.
The Classic Phase: Visual Storytelling and Live Presence (1950s–1970s)
Television found its native language through several developments:
- Mobile field reporting: The widespread adoption of 16mm cameras in the 1950s freed news from the studio. Television moved increasingly into real-world environments, including live coverage of breaking news. This magnified the value of immediacy while requiring compelling visuals — creating a form of journalism that was neither print nor radio.
- Live event broadcasting: NBC’s experimental station W2XBS broadcast live coverage of a building fire (1938) and the opening of the New York World’s Fair — early demonstrations of television’s unique power to transmit events as they occurred, visually. The Telstar 1 satellite (1962) enabled the first live transatlantic television broadcast, proving television’s capacity for global simultaneous experience.
- Cinéma vérité influence (late 1950s–1960s): This documentary movement, emphasizing handheld cameras, spontaneous filming, and intimate subject engagement, gave television its distinctive visual grammar. Robert Drew’s work brought the dramatic realism of Life magazine’s candid photography to television reporting. The legacy persists in contemporary TV: handheld cameras, direct address, and documentary-style realism.
- Exploiting the close-up: Performers discovered that television’s intimate visual grammar — close-ups, facial expressions, subtle physical cues — created a viewing experience fundamentally different from both stage performance and radio imagination. Television developed its own relationship with the viewer, based on visual intimacy rather than theatrical projection.
MTV and the Music Video: Television Discovers What Radio Could Never Do
If music was radio’s defining classic use, then the music video represents television discovering how to extend that power into a uniquely visual form. MTV launched on August 1, 1981, with the symbolically perfect first video “Video Killed the Radio Star” by The Buggles. The network created an entirely new art form — the music video as cinematic narrative — spawning auteur directors like David Fincher and Spike Jonze. Michael Jackson’s “Thriller” (debuted December 2, 1983), a near-14-minute short film directed by John Landis, proved the format’s artistic ambitions; the Library of Congress later made it the first music video added to the National Film Registry. MTV reshaped the music industry: artists now needed visual presentation, not just audio talent. The network’s influence on fashion, editing styles, and youth culture pervaded the 1980s and 1990s.
The historical irony: MTV eventually abandoned music videos for reality television, and the music video migrated to YouTube — born on television, it found a more natural home in the on-demand, algorithmically curated internet.
Key scholars: Lynn Spigel, Make Room for TV (1992); William Boddy, Fifties Television (1990); Simon Frith, Andrew Goodwin, and Lawrence Grossberg, Sound and Vision: The Music Video Reader (1993).
Era 4: From Television to Computing — The Digital Transition
The Decadent Phase: Faster Calculators and Digital Typewriters (1940s–1980s)
The first computers were conceived, built, and used as faster versions of existing tools. ENIAC (publicly unveiled February 15, 1946, at the University of Pennsylvania) was designed specifically to calculate artillery firing tables for the U.S. Army’s Ballistic Research Laboratory. Despite being Turing-complete — theoretically capable of computing anything — its primary application was the same mathematical calculation that humans had been performing with mechanical calculators, just faster.
This pattern persisted into the personal computer era:
- Word processors as digital typewriters: IBM’s MT/ST (Magnetic Tape/Selectric Typewriter), introduced in 1964, automated word wrap but had no screen — it was literally a typewriter enhanced with tape recording. Wang Laboratories’ systems in the 1970s–80s added CRT screens but were marketed as “digital typewriters” — emphasizing the ability to correct, revise, and reprint without retyping.
- VisiCalc as digital ledger paper (1979): Dan Bricklin and Bob Frankston created VisiCalc after Bricklin’s frustration using a calculator for financial problems. The interface directly replicated the physical ledger book — rows and columns of numbers, formulas, totals. However, VisiCalc contained the seeds of the “classic” phase: users could now ask “what if?” questions, recalculating instantly rather than manually. This transformed spreadsheets from passive records into dynamic modeling tools — a capability that could not exist on paper. VisiCalc sold over 700,000 copies in six years and became the application that justified the Apple II’s existence.
- Skeuomorphism as design philosophy: The Xerox Alto (1973), developed at Xerox PARC with contributions from Alan Kay’s group, pioneered the desktop metaphor — a visual simulation of an actual physical desk, complete with folders, trash cans, and paper documents. Steve Jobs incorporated these concepts into the Apple Lisa (1983) and Macintosh (1984). Susan Kare’s iconic designs — the trash can with a closed lid, floppy disk icons, menu bars — made computers accessible by mapping them to physical metaphors. Users were trained to think of the computer as an “electronic desk” rather than a fundamentally new medium.
The Classic Phase: The Computer as Metamedium
The visionary articulation of computing’s true potential came early — but was not widely realized for decades:
- Douglas Engelbart’s “Augmenting Human Intellect” (1962) envisioned computers as tools to enhance human cognitive capabilities through what he called H-LAM/T systems — not faster calculation but fundamentally augmented thinking.
- Alan Kay’s Dynabook concept (1972) proposed a notebook-sized device that could function as a programming tool, interactive information system, text editor, and creative medium for drawing, animation, and music composition. Kay recognized that the computer’s authenticity lay not in imitating paper documents but in being a metamedium — a tool capable of simulating any other medium while enabling dynamic, interactive systems impossible in physical media.
- Vannevar Bush’s “As We May Think” (1945) described the Memex — a device for storing and cross-referencing knowledge through associative trails — anticipating hypertext and the web decades before they existed.
- Ted Nelson’s Project Xanadu (1960 onward) and “Computer Lib/Dream Machines” (1974) proposed the most radical departure from paper-based thinking. Nelson saw that the computer’s true power was not in simulating desks, typewriters, or filing cabinets but in liberating human thought from the sequential constraints of paper altogether. His concepts of hypertext, transclusion, and two-way linking described a medium in which all knowledge could be interconnected, attributed, and versioned — a vision that remains more ambitious than anything yet implemented. Nelson declared that “the purpose of computers is human freedom” — a philosophical claim that cut directly against the prevailing view of computers as business machines or faster calculators.
The “classic” phase of computing emerged when applications stopped imitating physical objects and began exploiting computational uniqueness: real-time collaboration, version control, algorithmic visualization, simulation, hypertext, and data analysis at scales impossible manually.
Key scholars: Janet Abbate, Recoding Gender (2012); Paul Ceruzzi, A History of Modern Computing (2003); Michael Mahoney, Histories of Computing (2011).
Era 5: From Personal Computing to the Internet
The Decadent Phase: Brochureware and Digital Catalogs (1993–2004)
Tim Berners-Lee’s original 1989 proposal at CERN envisioned a “universal linked information system” — and in his 1999 memoir Weaving the Web, he described the web as “much more than a tool for research or communication; it is a new way of thinking.” He successfully advocated for CERN to release the underlying code on a royalty-free basis.
What actually filled the early web was far less ambitious.
After the Mosaic browser launched in January 1993 (the first to display images inline with text, reaching over 1 million users within 18 months), the commercial web became a landscape of brochureware: static corporate pages reproducing printed materials in digital form. Company websites were digital versions of their paper brochures — “About Us” pages, product listings, contact information, with little interactivity or user contribution. Early e-commerce sites were digital mail-order catalogs. Email, the internet’s original killer application, was conceptualized and used as “digital memos” — faster than physical mail but identical in structure and purpose.
The period from roughly 1993 to 2004 — sometimes called Web 1.0 — was characterized by one-way information flow from institution to consumer: the internet as a publishing medium, replicating print’s dynamics at higher speed.
Ted Nelson’s Critique: The Web as “Decadent” Hypertext
No one understood the internet’s “decadent” phase more acutely — or more painfully — than Ted Nelson. Nelson had spent three decades developing the concept of hypertext and the comprehensive architecture of Project Xanadu before Berners-Lee proposed the World Wide Web in 1989. When the web arrived, Nelson recognized it as an adoption of his concept through someone else’s implementation — and, in his view, a catastrophically simplified one.
Nelson’s indictment was precise: “HTML is precisely what we were trying to PREVENT — ever-breaking links, links going outward only, quotes you can’t follow to their origins, no version management, no rights management.” The web’s one-way hyperlinks meant that links could break when content moved or vanished, destinations had no knowledge of what pointed to them, quotations lost their connection to original sources, and there was no built-in mechanism for compensating creators. Nelson had designed Xanadu to solve all of these problems — two-way links, transclusion, micropayments, permanent addressing, version tracking — and the web solved none of them.
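To make the architectural difference concrete, here is a minimal sketch (hypothetical Python with invented names, not actual Xanadu or web code) contrasting the web's bare one-way pointer with a Xanadu-style registry that records every link at both ends: because the destination knows its inbound links, a document that moves can have those links rewritten rather than left to break.

```python
# Minimal illustration (hypothetical names) of one-way vs. two-way linking.
# A web-style link is a bare pointer: the target never learns it is linked to,
# so when the target moves, every pointer to it silently breaks.
# A Xanadu-style registry records links in both directions, so inbound links
# are discoverable and can be repaired when content moves.

from collections import defaultdict

class TwoWayLinkRegistry:
    def __init__(self):
        self.outbound = defaultdict(set)  # source -> set of targets
        self.inbound = defaultdict(set)   # target -> set of sources

    def add_link(self, source, target):
        # Register the link at both ends.
        self.outbound[source].add(target)
        self.inbound[target].add(source)

    def move(self, old_address, new_address):
        # Because inbound links are known, they can be rewritten instead of breaking.
        for source in self.inbound.pop(old_address, set()):
            self.outbound[source].discard(old_address)
            self.outbound[source].add(new_address)
            self.inbound[new_address].add(source)

registry = TwoWayLinkRegistry()
registry.add_link("essay.html", "quote-source.html")
registry.move("quote-source.html", "archive/quote-source.html")
print(registry.outbound["essay.html"])  # {'archive/quote-source.html'}: no broken link
```

Nothing like this registry exists in the web's architecture, which is why the failures Nelson lists (broken links, untraceable quotations, no attribution) follow directly from the design rather than from misuse.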
Berners-Lee’s motivation was pragmatic: sharing information among dispersed CERN researchers. He designed for simplicity, openness, and rapid adoption. Nelson’s motivation was philosophical: representing all human knowledge with proper attribution, versioning, and compensation. The web succeeded precisely because it simplified away the features Nelson considered essential. It was spreadable and implementable; Xanadu was comprehensive and perpetually unfinished.
The result is one of computing history’s most striking ironies: the world adopted Nelson’s concept (hypertext) but implemented it as what he considered a regression to paper-like thinking in digital dress. The broken links, lost attribution, and absent creator compensation that plague the web today are exactly the problems Nelson’s architecture was designed to prevent. From within our “decadent vs. classic” framework, Nelson’s critique suggests something provocative: that the web itself may still be in a partially “decadent” phase of hypertext — that the full potential of interconnected digital knowledge, as Nelson envisioned it, has never been realized. Modern technologies like blockchain, content-addressed storage (IPFS), and smart contracts are only now beginning to implement the two-way linking, micropayments, and immutable versioning Nelson proposed in 1960.
The Classic Phase: Participation, Collaboration, and Platform Effects (2004 onward)
The transition to what was labeled Web 2.0 represents the internet discovering capabilities that no prior medium possessed:
- User-generated content: Wikipedia (2001), Flickr (2004), YouTube (2005), and Twitter (2006) inverted the publisher-consumer dynamic. Content creation was democratized in ways impossible in print, radio, or television.
- Social platforms and the algorithmic feed: Facebook’s introduction of the News Feed in 2006 was an inflection point. Rather than profile-based navigation, content flowed algorithmically — enabling passive consumption of friend updates, creating the “feed” as an entirely new media form, and introducing algorithmic content curation as a default experience.
- Real-time collaboration: Google Docs, wikis, and collaborative platforms leveraged the internet’s unique capacity for simultaneous multi-user interaction — something structurally impossible in any prior medium.
Clay Shirky captured this transition in Here Comes Everybody (2008) and Cognitive Surplus (2010), arguing that the internet enabled people to pool their surplus intellect, energy, and time at vanishingly low cost — creating collaborative value in four categories: personal, communal, public, and civic. Nicholas Carr, in The Big Switch (2008), framed the transition as an infrastructure shift comparable to Edison’s centralized electric grid — computing as a utility, centrally provided rather than individually owned.
| Dimension | Web 1.0 (Decadent) | Web 2.0 (Classic) |
|---|---|---|
| Content flow | One-way: creator to consumer | Many-to-many: bidirectional |
| User role | Passive reader | Creator and curator |
| Technology | Static HTML | AJAX, real-time APIs |
| Economic model | Advertising, subscription | Network effects, data value |
Key scholars: Clay Shirky, Here Comes Everybody (2008); Tim O’Reilly, “What Is Web 2.0” (2005); Yochai Benkler, The Wealth of Networks (2006); Nicholas Carr, The Big Switch (2008).
Era 6: From the Internet to Mobile
The Decadent Phase: The Internet in Your Pocket (2001–2010)
The early mobile web was the internet shrunk to fit a smaller screen. Starting in the late 1990s, the WAP Forum (founded in 1997 by Ericsson, Motorola, Nokia, and Unwired Planet) created standards for mobile devices with limited bandwidth and tiny screens. The result was stripped-down websites: text-only browsers, WML (Wireless Markup Language) pages, and mobile versions of desktop experiences like “mobile Yahoo.”
Even the revolutionary iPhone (June 29, 2007) initially embodied this thinking. Steve Jobs’ original vision was for developers to create web applications for Safari rather than native apps — the phone as a window onto the existing web. The App Store launched a year later (July 10, 2008) with 500 applications, generating 10 million downloads in its first weekend. But early apps largely mimicked web functionality: email clients replacing webmail, news readers aggregating web content, weather apps displaying web data, and simplified ports of computer games.
Japan: The Country That Found Mobile’s Classic Form a Decade Early
The Western narrative of mobile’s evolution — from WAP browsers through the iPhone to Instagram — obscures a remarkable fact: Japan had already discovered mobile’s “classic” uses by the early 2000s, nearly a decade before the rest of the world.
On February 22, 1999, NTT DoCoMo launched i-mode — the world’s first comprehensive mobile internet service. It reached 40 million subscribers by 2003, offering mobile email, web browsing, games, and a micropayment system eight years before the iPhone existed. But i-mode was only the infrastructure. What Japan built on top of it was a mobile-native culture without precedent: keitai novels (携帯小説) — entire novels written and read on phones, with five of Japan’s ten bestselling novels in 2007 having originated as keitai novels; mobile payments at scale via Osaifu-Keitai (launched July 2004), using Sony’s FeliCa contactless technology a full decade before Apple Pay; emoji as mobile-native language, designed by Shigetaka Kurita at DoCoMo in 1999 (now in MoMA’s permanent collection); the world’s first mass-market camera phones (Sharp J-SH04, November 2000), with purikura photo culture presaging Instagram’s filters by a decade; and mobile-first design — QR codes (invented by Denso Wave in 1994, ubiquitous in Japan by 2002), mobile TV broadcasting, and web design optimized for small screens while the West still designed for desktop monitors.
The cultural anthropologist Mizuko Ito, in Personal, Portable, Pedestrian: Mobile Phones in Japanese Life (2005), documented how the keitai became an extension of personal identity — not a tool but an intimate companion. Japanese youth, dubbed the oyayubi sedai (“thumb generation”), used phones as social, creative, and commercial platforms that restructured daily life.
The Galápagos Paradox. Japan’s ecosystem was so advanced it became isolated — a phenomenon known as Galápagos syndrome. Japanese phones offered digital TV, contactless payments, 3G data, and touchscreens years before any Western device. When Apple launched the iPhone in 2007, it was in many respects a step backward from what Japanese consumers already had. But the iPhone succeeded globally because it simplified: open app ecosystem, international standards, unified platform design. Japan’s carrier-specific, proprietary systems could not travel.
This illustrates a crucial nuance: discovering a medium’s true form is not the same as making it universally accessible. Japan found mobile’s classic uses but wrapped them in an ecosystem that could not cross cultural and technical borders — serving as both proof that mobile’s potential extended far beyond “the internet on a smaller screen,” and warning that even visionary implementations can become evolutionary dead ends.
The Classic Phase: Location-Aware, Always-On, Camera-First (2010 onward)
Mobile’s native form emerged when developers recognized what was truly unique about a device that was always with you, knew where you were, had a camera, and was perpetually connected:
- Instagram (October 6, 2010): Kevin Systrom and Mike Krieger initially built Burbn, a check-in app mimicking Foursquare’s location-based model. Recognizing this was a pale imitation, they pivoted to photo-centric sharing optimized for the iPhone 4’s camera. Instagram reached 25,000 users on day one and 1 million within two months. It was mobile-first by design — no web version initially — and its filters, square format, and instant sharing leveraged mobile cameras in ways the desktop web never could.
- Uber (2010 beta, 2011 public launch): Garrett Camp and Travis Kalanick conceived the service after spending $800 on a private driver on New Year’s Eve. Real-time matching of riders and drivers is structurally impossible without GPS in every device, always-on connectivity, real-time communication, and integrated mobile payment. Uber did not replace a desktop service — it created an entirely new service category enabled solely by mobile infrastructure.
- Snapchat (2011): Evan Spiegel, Bobby Murphy, and Reggie Brown created ephemeral messaging that fundamentally challenged the permanence of social media. Disappearing messages, AR filters, and always-on camera interaction make sense primarily on an always-with-you mobile device. Snapchat’s design assumed constant phone proximity.
- TikTok/Douyin (2016–2017): ByteDance launched Douyin in China, reaching 100 million users within a year. TikTok represents the culmination of mobile-native thinking: AI-driven algorithmic recommendation (not social graphs), vertical short-form video optimized for phone screens, in-app creation tools encouraging user content, and algorithmic discovery replacing follower-based networks. TikTok doesn’t mimic YouTube or Facebook — it creates a form that could only exist on mobile.
The authenticity test: Instagram, Uber, Snapchat, and TikTok are not “websites adapted for phones.” They are products that make sense only on mobile devices. That distinction marks the boundary between the decadent and classic phases.
Key scholars: Jason Farman, Mobile Interface Theory (2012); Adriana de Souza e Silva, Net Locality (2011); Gerard Goggin, Cell Phone Culture (2006).
Era 7: From Personal/Mobile Computing to Cloud Computing
The Decadent Phase: Lift-and-Shift — Someone Else’s Server (2006–2014)
When Amazon launched S3 and EC2 in 2006, early adopters treated cloud infrastructure as remote hosting for existing applications. The dominant migration strategy was “lift-and-shift” (also called “rehost migration”): moving applications, systems, workloads, and data to the cloud with few or no changes. As AWS enterprise strategy documentation acknowledged, this approach required no team of engineers with cloud-native experience.
The pattern was pervasive:
- Cloud storage as remote backup: Amazon S3 and early competitors positioned cloud storage as offsite file access — mimicking on-premise network-attached storage, just hosted remotely.
- Monolithic applications on virtual machines: Organizations moved rigid, tightly-coupled applications directly to the cloud without architectural redesign, creating what industry analysts called “monolith hell” — all the cost of cloud with none of the benefits.
- Gartner’s 5 R’s framework (2010): Formalized the migration spectrum as rehost, refactor, revise, rebuild, replace — acknowledging that most companies were stuck on the first step. 44% of CIOs approached migration with insufficient planning.
The essential failure: businesses treated cloud as “someone else’s server” rather than a fundamentally different computing paradigm.
The Classic Phase: Cloud-Native Architecture (2013 onward)
The industry gradually discovered that cloud’s true power required fundamental architectural change:
- Docker containerization (2013): Enabled applications to run consistently across cloud environments with true portability and elasticity.
- Kubernetes orchestration (2014): Provided management for containerized workloads at scale.
- Microservices architecture (2014+): Breaking monolithic applications into loosely-coupled services that could scale independently — a pattern impossible in traditional on-premise infrastructure.
- Serverless computing: Abstracting infrastructure entirely, allowing developers to focus on functions rather than servers.
Netflix: The Defining Case Study. A database failure at Netflix’s data center in 2008 forced a complete infrastructure rethink. Rather than a lift-and-shift, Netflix executed a seven-year architectural rebuild (2008–2016), transforming from a single monolithic application with a relational database to over 1,000 loosely-coupled microservices using NoSQL databases, event-driven architectures (Kafka), and asynchronous APIs. The result: 75% improved performance, 28% cost savings, and the ability to handle 15x traffic spikes. Netflix completed this migration in January 2016 — becoming truly cloud-native.
Salesforce took a different path: building cloud-native from inception (1999) with the radical premise of “no software.” Marc Benioff disrupted the Siebel conference in 2000 with employees protesting traditional software. Salesforce’s advantages — automatic updates for all users, data aggregation across systems, multi-tenant architecture, API-first design — were capabilities impossible in traditional software delivery.
Gartner has reported that over 85% of enterprises have embraced cloud-first principles, reflecting the complete dominance of cloud-native over lift-and-shift thinking.
Key thinkers: Jerry Chen (Greylock), “The Evolution of Cloud”; Adrian Cockcroft (Netflix), on microservices architecture; Martin Fowler, on microservices design patterns.
The Counter-Movement: Cloud Repatriation, Data Sovereignty, and the Return to Local
Even as cloud-native architecture reached maturity, a powerful counter-current emerged in the early 2020s — one that complicates the clean “decadent-to-classic” narrative and introduces a dynamic that recurs with particular force in the AI era.
Cloud Repatriation: The Pendulum Swings Back
By 2024, a striking statistic had emerged: 86% of CIOs planned to move at least some public cloud workloads back to private or on-premise infrastructure (Barclays CIO Survey) — up from 43% in late 2020. The movement acquired a name: cloud repatriation.
The most prominent case was 37signals (makers of Basecamp and HEY), whose CTO David Heinemeier Hansson published a widely-read account of their cloud exit in 2022. Their annual AWS bill had reached $3.2 million. By summer 2023, they had migrated to custom Dell hardware without adding new staff, saving over $2 million annually. Dropbox had pioneered a similar move earlier, saving $75 million over two years (2015–2016) by moving from AWS to custom infrastructure, eventually storing 90% of data on its own servers.
The drivers were primarily economic — wasted public cloud spend was estimated at 30–40% across enterprises, with egress fees multiplying storage costs by 3–5x — but the deeper issue was control. Organizations discovered that total dependence on hyperscale cloud providers created vendor lock-in, unpredictable costs, and architectural constraints that undermined the flexibility cloud was supposed to provide.
Data Sovereignty: The Regulatory Counter-Force
Simultaneously, a global wave of data localization legislation began forcing data back within national borders. The EU’s GDPR (2018) imposed strict cross-border transfer restrictions, with fines reaching 4% of annual global turnover — TikTok was fined €530 million in 2025 for unlawfully transferring EU user data to China. China’s Cyber Security Law, Data Security Law, and Personal Information Protection Law mandated that personal data collected in China must be stored domestically. Russia’s Federal Law 152-FZ required all citizen data to remain within Russian borders, with fines escalating sharply in 2024. India’s Digital Personal Data Protection Act (2023) introduced sectoral mandates requiring financial, insurance, and payment data to remain in Indian datacenters.
The cumulative effect was structural: the dream of a single, borderless global cloud — the “classic” vision of cloud computing as universal utility — ran headlong into the reality of national sovereignty. Companies faced a fragmented landscape where different jurisdictions imposed different localization requirements, making truly global cloud architectures legally untenable for sensitive data.
Air-Gapped Systems and the Resurgence of Private Infrastructure
At the far end of the localization spectrum, demand surged for systems physically disconnected from the internet. Google launched Distributed Cloud Air-Gapped for military, intelligence, and critical infrastructure clients. Oracle announced sovereign air-gapped cloud offerings for national security applications. These were not nostalgic retreats to 1990s server rooms — they were sophisticated, cloud-architected systems deployed on private hardware, combining cloud-native design principles with physical isolation.
McLuhan’s Tetrad and the Reversal
This counter-movement is illuminated by McLuhan’s tetrad framework — specifically, the principle that any technology pushed to its extreme reverses into its opposite. Cloud computing’s central promise was liberation from infrastructure: outsource complexity, scale infinitely, reduce costs. Pushed to its extreme — all data in hyperscaler clouds, all workloads abstracted, all infrastructure rented — the technology reversed: costs became uncontrollable, vendor dependence replaced infrastructure freedom, and the surrender of data to third parties created the very security and sovereignty vulnerabilities that local infrastructure had once prevented.
Computing has exhibited this pendulum across its entire history — mainframes and timesharing (centralization, 1950s–60s), personal computers and client-server (decentralization, 1980s–90s), cloud (centralization, 2006 onward), and now edge computing, private cloud, and repatriation (decentralization, 2020s). The swing is not a failure of cloud computing’s “classic” form but rather a maturation: organizations now deploy infrastructure strategically — cloud for elastic, rapidly-changing workloads; local for cost-sensitive stable workloads; edge for latency-critical applications; and air-gapped for sovereign or classified data. Public cloud spending continues to grow (Gartner forecasts $723 billion worldwide in 2025, up 21% from 2024), but the ideology of cloud-for-everything has given way to pragmatic hybrid architectures.
This counter-movement sets the stage for one of the defining tensions of the AI era: the collision between AI’s hunger for data and computing power (which favors centralization) and growing demands for privacy, transparency, and local control (which favor distribution).
Era 8: The Evolution to Artificial Intelligence
The Decadent Phase (Ongoing): AI as Autocomplete for Everything
The history of AI applications follows the mimicry pattern with striking fidelity.
Expert Systems: Automated Rule Books (1970s–1980s)
The first major commercial AI wave consisted of expert systems — software that encoded human expertise as if-then rules. MYCIN (medical diagnosis), XCON (computer configuration at Digital Equipment Corporation), and Internist-I (internal medicine) all worked by formalizing what human experts already knew into decision trees. By the mid-1980s, two-thirds of Fortune 500 companies used expert systems in daily operations. The technology was comprehensible precisely because it was imitative: managers understood that the system was applying explicitly coded rules.
The limitation was equally clear: expert systems could not learn, adapt, or discover new patterns. They mimicked human reasoning processes without achieving the flexibility or generality of actual reasoning.
ELIZA: The Chatbot as Therapist (1966)
Joseph Weizenbaum’s ELIZA, the first chatbot, simulated a Rogerian psychotherapist using simple pattern matching: search for keywords, mirror them back with a generic therapeutic response. Weizenbaum was explicit that ELIZA was not intended to understand conversations. Yet the psychological impact was profound — users became emotionally attached, and Weizenbaum’s own secretary asked him to leave the room so she could have a “real conversation” with the program. The ELIZA effect — humans attributing understanding to sufficiently sophisticated mimicry — remains central to understanding current AI perception.
Current AI Applications (2023–2025)
Noah Smith captured the current moment precisely with his characterization: “generative AI is autocomplete for everything.” Current mainstream applications predominantly extend existing paradigms rather than creating new ones:
- AI chatbots as enhanced search: Google’s AI Overviews, ChatGPT, Perplexity, and Copilot position themselves as “answer engines” — applying pattern completion to web-indexed information. The interaction model (ask a question, get an answer) is structurally identical to search.
- AI writing tools as enhanced word processors: Compose AI, HyperWrite, Sudowrite, and similar tools integrate into existing word processors, offering autocomplete for sentences, paragraph suggestions, and phrasing enhancement. The workflow remains: human thinks, types, AI assists.
- AI image generation as digital Photoshop: Photoshop’s Generative Fill, Adobe Firefly, and DALL-E integrations are positioned as tools within existing creative workflows — automating specific tasks within the pre-existing “photograph → editing → output” pipeline.
- AI coding assistants as enhanced autocomplete: GitHub Copilot, Cursor, and similar tools extend the IDE autocomplete pattern — suggesting code completions rather than fundamentally changing how software is conceived and built.
In each case, the AI is poured into the vessel of the previous medium. The chatbot interface mimics human conversation. The writing assistant mimics collaborative editing. The image generator mimics a digital canvas. These are sophisticated, powerful tools — but they are “decadent” in the sense that they have not yet found AI’s native form.
The Black Box Problem and the Privacy Counter-Movement
Compounding the “decadent” character of current AI is a crisis of trust that has no precedent in earlier media transitions. When the printing press arrived, readers could see the text. When radio arrived, listeners could hear the broadcast. When television arrived, viewers could watch the program. In each case, the medium’s operation was transparent — the content was the experience. AI is fundamentally different: users cannot see what happens to their data once it enters the system.
This opacity has triggered a corporate backlash with startling speed. In April 2023, Samsung engineers inadvertently leaked proprietary source code and manufacturing data to ChatGPT — one entered code to debug a problem, another transcribed confidential meeting notes, a third fed in manufacturing test sequences. Samsung subsequently banned generative AI tools on company-owned devices. JPMorgan Chase restricted employee use of ChatGPT in February 2023. Apple banned ChatGPT and GitHub Copilot for employees in May 2023, citing confidential data risks. The pattern was consistent: major corporations recognized that sending proprietary data to cloud-based AI services meant surrendering control to systems whose internal operations — training processes, data retention, model memorization — were opaque.
The fear is not abstract. Companies worry that proprietary information sent to AI services may be used to train future models, potentially surfacing in competitors’ queries. The FTC warned that companies quietly updating privacy policies to permit AI training on user data could constitute deceptive practices. The result is a growing demand for private AI — models that run locally, on a company’s own infrastructure, with data that never leaves the premises. Open-weight models like Meta’s Llama and Mistral AI’s Devstral (which runs on a single consumer GPU) have enabled a rapidly expanding ecosystem of local LLM deployment. Apple’s approach with Apple Intelligence — running a lightweight 3-billion-parameter model on-device, with a “Private Cloud Compute” architecture that deletes data after processing and prevents even Apple employees from accessing it — represents a major consumer technology company building its entire AI strategy around the principle that data should never leave the user’s control.
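What “private AI” means in practice can be illustrated at the smallest scale. The sketch below assumes a locally installed Ollama server (its documented /api/generate endpoint on the default port 11434) serving an open-weight model such as Llama; the specific model name and memo text are placeholders. The point is the data path: the prompt and the response never leave the machine.

```python
# A minimal sketch of "private AI": prompting an open-weight model served
# locally (here assumed to be Ollama on its default port 11434, with a
# Llama-family model already pulled). The request never leaves the machine,
# so proprietary text is not sent to a third-party cloud service.

import json
import urllib.request

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,          # return one JSON object instead of a token stream
    }).encode("utf-8")
    request = urllib.request.Request(
        "http://localhost:11434/api/generate",   # local inference server, not a cloud API
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

if __name__ == "__main__":
    # Confidential material stays on-premises: only the local process ever sees it.
    print(ask_local_model("Summarize this internal memo: ..."))
```

The appeal for the organizations described above is the same property at larger scale: bigger models and serving hardware, but data that stays inside the perimeter.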
This dynamic extends the cloud repatriation movement directly into AI: just as organizations pulled data back from public clouds for cost, sovereignty, and control reasons, they are now pulling AI workloads back from cloud-based API services for privacy, security, and competitive reasons.
The Individual Consumer’s Dilemma
The privacy crisis is not limited to enterprise boardrooms. Individual consumers face a version of the same problem that is in some ways more acute, because they typically have less visibility into how their data is handled and fewer resources to protect themselves.
When a person asks an AI chatbot for medical advice, relationship guidance, answers to legal questions, financial planning, or help processing grief, they are disclosing information more intimate than almost anything they would type into a search engine. Search queries are fragmentary; AI conversations are contextual, sustained, and deeply personal. Users often do not realize — or choose not to think about — the fact that their conversations may be logged, reviewed by human moderators for safety training, or incorporated into datasets that improve future models. Meta’s 2024 privacy policy update confirmed that public user content across Facebook, Instagram, and Threads could be used to develop AI — a policy change that affected billions of users, most of whom never read the updated terms. The asymmetry is stark: the user shares everything; the system discloses almost nothing about what it does with that information.
For consumers, the emerging solutions fall along a spectrum of privacy-preserving approaches. On-device AI represents the most radical solution: models that run entirely on the user’s own hardware, ensuring data never leaves the device. Apple Intelligence’s on-device model (approximately 3 billion parameters, optimized for iPhone, iPad, and Mac) processes writing assistance, photo organization, and notification summarization locally, with its Private Cloud Compute architecture designed so that even Apple cannot access data sent to the cloud for more complex tasks. Open-source tools like Ollama and LM Studio allow technically inclined users to run models like Llama and Mistral locally on consumer hardware, creating a fully private AI experience with no cloud dependency whatsoever. Federated learning offers a middle path: training shared models across millions of devices without centralizing raw data — the model improves from collective use while individual data stays local. Differential privacy adds mathematical guarantees by introducing statistical noise that prevents any individual’s data from being extractable, even if the model itself is compromised. And encrypted or ephemeral processing — where cloud-based AI processes queries in secure enclaves and deletes data immediately after — attempts to preserve cloud AI’s capability advantages while minimizing retention risk.
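A toy example can make the middle paths concrete. The sketch below uses illustrative numbers and a deliberately tiny model (not any production federated-learning framework): each simulated device fits a model to its own private data and shares only the fitted parameters, which are averaged into a shared model; the final step adds Laplace noise as a crude stand-in for the calibrated noise that real differential-privacy mechanisms apply.

```python
# Toy illustration of federated averaging: each device trains on its own data
# and shares only model parameters; the coordinating server never sees raw data.
# Numbers and the model (a one-variable linear fit) are made up for illustration.

import numpy as np

rng = np.random.default_rng(0)

def local_fit(x, y):
    # Each device fits y ~ slope * x + intercept on its private data
    # and returns only the two fitted parameters.
    slope, intercept = np.polyfit(x, y, deg=1)
    return np.array([slope, intercept])

# Three devices, each holding private data drawn from roughly the same relationship.
devices = []
for _ in range(3):
    x = rng.uniform(0, 10, size=50)
    y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=50)
    devices.append((x, y))

# Only parameters travel to the server; the raw (x, y) pairs stay on each device.
client_params = [local_fit(x, y) for x, y in devices]

# Federated averaging: the shared model is the mean of the client parameters.
global_params = np.mean(client_params, axis=0)

# A crude stand-in for a differential-privacy mechanism: add calibrated noise
# so no single client's contribution can be read back out of the shared model.
epsilon, sensitivity = 1.0, 0.1
noisy_params = global_params + rng.laplace(0, sensitivity / epsilon, size=2)

print("shared model:", global_params, "with noise:", noisy_params)
```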
The trajectory is clear: the market is moving toward a tiered model where the most sensitive interactions happen locally, moderately sensitive tasks use privacy-preserving cloud architectures, and only low-sensitivity or fully anonymized interactions flow through standard cloud AI services. This mirrors the hybrid cloud architecture that emerged from the cloud repatriation movement — and it may prove to be a necessary precondition for AI’s “classic” phase rather than a retreat from it. If people do not trust AI enough to share the data that makes it genuinely useful, the medium cannot find its true form.
Whether this localization impulse represents the “decadent” instinct — retreating to familiar, controlled paradigms rather than embracing AI’s transformative potential — or a necessary maturation that will shape AI’s eventual “classic” form remains one of the defining open questions of the current moment.
Emerging Signals of the Classic Phase
Several developments suggest the transition from “decadent” to “classic” AI has begun, though it is far from complete:
1. AI Agents and Autonomous Systems
The emergence of agentic AI in late 2024 and early 2025 represents the first significant departure from the “mimicking existing tools” phase. Gartner reported a 1,445% surge in multi-agent system inquiries from Q1 2024 to Q2 2025. Unlike chatbots (which mimic search) or writing tools (which mimic word processors), agents operate independently with minimal human oversight, take actions across multiple systems, and make decisions based on learned patterns rather than executing predefined workflows. A key inflection point came with Anthropic’s release of the Model Context Protocol in late 2024, which gives language models a standardized way to connect to external tools and data sources, and therefore to act rather than merely generate text.
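The structural difference between a chatbot and an agent can be made concrete with a small sketch: a loop that plans a step, calls a tool, observes the result, and repeats until the goal is met. This is a generic illustration of goal-delegation under assumed, hypothetical tools and a stubbed planner, not the Model Context Protocol itself or any production agent framework.

```python
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    """A toy agent loop: the model chooses actions instead of only answering."""
    tools: dict[str, Callable[[str], str]]
    history: list[str] = field(default_factory=list)

    def plan_next_step(self, goal: str) -> tuple[str, str] | None:
        # Stand-in for a model call that decides which tool to invoke next;
        # a real agent would send `goal` and `self.history` to an LLM here.
        if not self.history:
            return ("search", goal)
        if len(self.history) == 1:
            return ("summarize", self.history[-1])
        return None  # goal considered satisfied

    def run(self, goal: str) -> str:
        while (step := self.plan_next_step(goal)) is not None:
            tool_name, tool_input = step
            observation = self.tools[tool_name](tool_input)
            self.history.append(observation)
        return self.history[-1]

# Hypothetical tools; in practice these would hit real APIs, files, or systems.
tools = {
    "search": lambda q: f"3 documents found about '{q}'",
    "summarize": lambda text: f"Summary of: {text}",
}
agent = Agent(tools=tools)
print(agent.run("quarterly revenue trends"))
```

The point of the sketch is the shape of the interaction: the human delegates a goal once, and the system decides which actions to take, which is precisely what the query-response chatbot paradigm does not do.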
2. Scientific Discovery
AlphaFold’s solution of the 50-year protein folding problem (2020–2021) may be the clearest example of “classic” AI to date. Over 2 million researchers in 190+ countries use the AlphaFold Protein Structure Database. Researchers using AlphaFold show a 40%+ increase in novel experimental protein structure submissions — suggesting discoveries that would not exist without AI, not merely automation of existing processes. Demis Hassabis and John Jumper received the 2024 Nobel Prize in Chemistry for this work, underscoring that it represents fundamental scientific advance rather than tool improvement.
3. AI-Native Architecture
McKinsey’s work on “The Agentic Organization” identifies emerging paradigms where AI is a “first-class citizen” rather than a bolt-on: applications incorporating natural language processing, generative AI, and predictive capabilities as native features. The key distinction: these are “AI-native” not because they use AI, but because the problem itself could not be solved without AI-driven autonomy and learning.
Key scholars: Noah Smith, “Generative AI: Autocomplete for Everything” (Noahpinion); Yann LeCun on world models and AGI; Demis Hassabis on the agentic era; Andrew Ng on data-centric AI.
Era 9: Where Is AI Now? Decadent or Classic?
The Case for “Still Decadent”
The evidence that we remain primarily in AI’s decadent phase is substantial:
1. The dominant interaction paradigm is mimicry. The chatbot, the primary consumer interface for AI in 2024–2025, is a direct descendant of ELIZA. Users type questions, AI generates responses. This is the structure of search, of conversation, of the therapist’s office, not something native to artificial intelligence. The fact that AI’s primary interface replicates human dialogue suggests we have not yet discovered what AI’s native interaction paradigm might be.
2. Most production systems use workflows, not autonomy. According to production data, 95% of production AI systems use human-designed workflows rather than fully autonomous agents. AI is being directed by humans through predefined pipelines, a pattern analogous to early printers reproducing existing manuscripts rather than inventing new literary forms.
3. AI is being evaluated by old-medium standards. We judge AI writing by how well it mimics human prose. We judge AI art by how closely it resembles photography or painting. We judge AI conversation by how human-like it sounds. When we evaluate a new medium by the standards of the old medium, we are in the “decadent” phase by definition.
4. Institutional suppression is active. Brian Winston’s framework applies: established interests are constraining AI’s radical potential. AI tools are being integrated into existing software suites (Microsoft Office, Adobe Creative Suite, Google Workspace) as features within familiar interfaces rather than as fundamentally new paradigms. This domestication serves business models built on existing product categories.
5. The trust deficit is forcing retreat to familiar paradigms. The black box problem (users’ inability to see what happens to their data inside AI systems) has triggered a localization counter-movement that echoes cloud repatriation. Major corporations banning or restricting AI tools, the surge in local LLM deployment, and the demand for air-gapped AI infrastructure all represent organizations pulling back toward controlled, transparent, familiar computing models rather than embracing AI’s full potential. When Samsung bans ChatGPT and Apple builds its entire AI architecture around the premise that data must never leave the device, the message is clear: the institutions that would need to adopt AI at scale do not yet trust it enough to let it operate in its native mode. This trust deficit may be the single most powerful force keeping AI in its “decadent” phase, analogous to the early print era, when the Church and universities attempted to control what could be published, constraining the medium to reproducing approved texts rather than enabling the pamphlet culture that would eventually fuel the Reformation.
The Case for “Emerging Classic”
Simultaneously, signals of the classic phase are appearing:
1. Scientific discovery represents a genuinely new capability. AlphaFold is not “faster biology”; it enables predictions that human researchers could not make through any other means. This is analogous to the moment when the printing press enabled the Scientific Revolution: not faster manuscript copying, but a fundamentally new relationship with knowledge.
2. Agentic systems are beginning to transcend the chatbot paradigm. The shift from “AI that answers questions” to “AI that takes actions” parallels the shift from “radio that reads newspapers” to “radio that broadcasts live from the field.” The interaction model is changing from query-response to goal-delegation.
3. Multi-modal and multi-agent systems suggest emerging native forms. Just as television combined audio and visual in ways neither radio nor film alone could, multi-modal AI systems that combine text, image, code, and action in fluid interaction suggest capabilities native to AI rather than borrowed from prior media.
The Verdict: Early Transition
Based on the historical pattern, AI in early 2026 appears to be roughly where television was in the early 1950s or the internet was around 2002–2004 — past the earliest phase of pure mimicry, with clear signals of native capability emerging, but with the dominant paradigm still reflecting the old medium’s form.
The historical pattern suggests several things about what comes next:
- The “classic” form is rarely predicted by contemporaries. No one in 1920 predicted that radio’s true form would be the fireside chat. No one in 1993 predicted that the internet’s true form would be social media and collaborative platforms. The “classic” use of AI will likely be something we cannot clearly articulate today.
- The transition takes longer than technologists expect. The Gutenberg Bible was printed in 1455; the novel emerged around 1605. The iPhone launched in 2007; TikTok launched in 2016. Historical transitions typically take 10–30 years, and sometimes, as print shows, far longer.
- The classic phase creates new cultural forms, not better versions of old ones. Print did not create better manuscripts; it created newspapers, novels, and scientific journals. Radio did not create better newspapers; it created live broadcasting and intimate national address. The classic phase of AI will not create better search, better word processing, or better image editing. It will create something we do not yet have a name for.
Yann LeCun’s perspective reinforces this assessment. Meta’s Chief AI Scientist argues that current large language models are sophisticated pattern completion without genuine understanding: they lack world models, causal reasoning, and planning capability. He estimates world models will require approximately a decade to mature. His launch of AMI Labs in 2025, seeking €500 million to develop AI systems that understand physics and plan complex actions, suggests the scientific community recognizes the current phase as transitional.
Demis Hassabis frames the near-term differently: AGI will emerge in 5–10 years, with the next 2–3 years dominated by agentic AI systems that act rather than merely respond. His vision of “millions of autonomous agents roaming the internet doing tasks” describes something genuinely new: not faster search or better writing, but delegated autonomous action.
Andrew Ng observes a shift from code-centric to data-centric AI, and from massive centralized models to smaller, targeted projects, noting “blizzard-like progress in intelligent agent systems” improving tool use and desktop automation in 2024.
The convergence of these perspectives suggests we are at the hinge point — still primarily “decadent” in practice, but with the theoretical and technical foundations of the “classic” phase becoming visible.
Part III: The Pattern — A Summary for Discussion
The Universal Sequence
Across all nine eras, the same pattern recurs:
Phase 1: Mimicry (Decadent)
New technology adopts the form, content, and interaction patterns of its predecessor. Early printed books look like manuscripts. Early radio reads newspapers aloud. Early television is radio with pictures. Early websites are digital brochures. Early AI is enhanced autocomplete. Users and creators find comfort in familiar forms. Institutions constrain radical potential.
Phase 2: Recognition
Visionaries articulate the new medium’s unique capabilities — often decades before widespread adoption. McLuhan describes the medium as the message. Engelbart envisions augmented intellect. Nelson envisions hypertext and two-way linking. Berners-Lee describes a universal information system. LeCun describes world models. The gap between vision and practice defines the transition period.
Phase 3: Native Form (Classic)
Practitioners discover what the new medium can do that no prior medium could. Print enables mass standardization and the novel. Radio enables live intimacy and national address. Television enables visual presence and global simultaneity. The internet enables mass collaboration and algorithmic curation. Mobile enables location-aware, always-on, camera-first experiences. Cloud enables elastic, distributed, serverless architecture. The classic form typically creates entirely new cultural categories rather than better versions of old ones.
Phase 4: Transformation
The new medium restructures perception, cognition, and social organization. Ong shows that literacy restructured consciousness. Eisenstein shows that print enabled the Scientific Revolution. Postman warns that television restructured public discourse. Each medium, once it finds its classic form, remakes the world in ways the decadent phase could never predict.
The Key Insight
You cannot discover a medium’s true form by thinking about the previous medium. McLuhan’s “rear-view mirror” is not merely a cognitive habit — it is a structural limitation. The printing press’s true power was not faster manuscript copying; it was mass literacy. Radio’s true power was not newspapers read aloud; it was intimate presence. Television’s true power was not filmed stage plays; it was visual immediacy. The internet’s true power was not digital brochures; it was networked collaboration.
AI’s true power is not better search, better writing, better image editing, or better conversation. It is something else — something we are only beginning to glimpse in scientific discovery, autonomous agency, and forms that do not yet have names.
The historical pattern tells us that this is exactly where we should expect to be. Every medium goes through its decadent phase. The question is not whether AI will find its classic form — the pattern is too consistent for doubt. The question is what that form will be, and whether we will recognize it when it arrives, or only in the rear-view mirror.
Sources and Recommended Reading
Primary Theoretical Works
- McLuhan, Marshall. Understanding Media: The Extensions of Man. McGraw-Hill, 1964.
- McLuhan, Marshall. The Gutenberg Galaxy: The Making of Typographic Man. University of Toronto Press, 1962.
- McLuhan, Marshall and Eric McLuhan. Laws of Media: The New Science. University of Toronto Press, 1988.
- Ong, Walter J. Orality and Literacy: The Technologizing of the Word. Methuen, 1982.
- Eisenstein, Elizabeth. The Printing Press as an Agent of Change. Cambridge University Press, 1980.
- Postman, Neil. Amusing Ourselves to Death: Public Discourse in the Age of Show Business. Viking, 1985.
- Postman, Neil. Technopoly: The Surrender of Culture to Technology. Knopf, 1992.
- Bolter, Jay David and Richard Grusin. Remediation: Understanding New Media. MIT Press, 1999.
- Winston, Brian. Misunderstanding Media. Harvard University Press, 1986.
- Marvin, Carolyn. When Old Technologies Were New. Oxford University Press, 1988.
Media History
- Shirky, Clay. Here Comes Everybody: How Change Happens When People Come Together. Penguin, 2008.
- Shirky, Clay. Cognitive Surplus: How Technology Makes Consumers into Collaborators. Penguin, 2010.
- Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. NYU Press, 2006.
- Carr, Nicholas. The Big Switch: Rewiring the World, from Edison to Google. W.W. Norton, 2008.
- Birkerts, Sven. The Gutenberg Elegies: The Fate of Reading in an Electronic Age. Faber and Faber, 1994.
- Berners-Lee, Tim. Weaving the Web. Harper San Francisco, 1999.
- Benkler, Yochai. The Wealth of Networks. Yale University Press, 2006.
- Ito, Mizuko, Daisuke Okabe, and Misa Matsuda, eds. Personal, Portable, Pedestrian: Mobile Phones in Japanese Life. MIT Press, 2005.
Computing and AI
- Nelson, Ted. Computer Lib/Dream Machines. Self-published, 1974.
- Nelson, Ted. Literary Machines. Self-published, 1981 (revised through 1993).
- Nelson, Ted. “Complex Information Processing: A File Structure for the Complex, the Changing, and the Indeterminate.” ACM National Conference, 1965.
- Nelson, Ted. Geeks Bearing Gifts: How The Computer World Got This Way. Mindful Press, 2008.
- Nelson, Ted. Possiplex: Movies, Intellect, Creative Control, My Computer Life and the Fight for Civilization. Mindful Press, 2010.
- Engelbart, Douglas. “Augmenting Human Intellect: A Conceptual Framework.” SRI International, 1962.
- Kay, Alan and Adele Goldberg. “Personal Dynamic Media.” Computer 10.3, 1977.
- Bush, Vannevar. “As We May Think.” The Atlantic, July 1945.
- Smith, Noah. “Generative AI: Autocomplete for Everything.” Noahpinion, 2023.
- Ceruzzi, Paul. A History of Modern Computing. MIT Press, 2003.
Contemporary AI Analysis
- Gartner. Multi-agent system inquiry reports, 2024–2025.
- McKinsey. “The Agentic Organization: Contours of the Next Paradigm for the AI Era.” 2025.
- DeepMind / Google. AlphaFold documentation and impact reports, 2020–2024.
- LeCun, Yann. AMI Labs announcement and world model architecture proposals, 2025.
- Hassabis, Demis. Gemini 2.0 and agentic era commentary, 2024.
- Ng, Andrew. Data-centric AI and industrial AI era commentary, 2024–2025.