
WHY WE NEED A NEW INTERNET

Every so often, humanity invents something that changes almost everything else. From the development of language itself to the wheel, writing, mathematics, printing, railways, electricity, automobiles, computers and the internet, the effects of such inventions spill over to transform the economic and social contexts of their birth.

Inventions require three features to be classified as general purpose: they must be widely adopted, capable of ongoing technical improvement and, most importantly, able to support new innovations and applications built on top of them. They can take many forms — symbolic systems, organising processes or new products — but their discovery enables entrepreneurs to assemble novel combinations of labour, capital and modes of organising that create new sources of value. These combinations unleash new waves of creativity that ultimately transform human life. Once diffused, they become almost invisible, so integrated into everyday practices that we no longer think of them as inventions or technology. Their very ubiquity converts the previously unthinkable into the now unremarkable.

Each general purpose invention provides a new platform for a subsequent generation of inventors, engineers and entrepreneurs to build upon. If each invention is its own big bang, then each new wave of innovation is a Cambrian explosion. The invention of electricity, for example, saw a new wave of domestic goods as every conceivable household appliance became ‘electrified’. And yet there is always a lag between the big bang of the invention and the Cambrian explosion of its adoption. For electrification to take off, we needed the infrastructure investment to wire up and connect each home to the system.

This is why the evolution of technological, economic and social systems does not proceed in an even, incremental continuum. It clusters around paradigms. These paradigms are tight sets of relationships between techno-economic systems and socio-institutional orders. This relationship is itself brought into coherence by a dominant social narrative, an underlying mythology that confers legitimacy on the arrangements of the day. Just as the economic systems of foraging, horticultural, agrarian, industrial and informational societies were each organised around a different ‘technology stack’, the social systems of these societies were each organised around a different dominant narrative. Whether animistic, tribal, religious, nationalistic or of the emerging global cosmopolitan variety, these stories help bring meaning and legibility to the groups and individuals living through each era. They help anchor metaphysical curiosity to a template for negotiating social life.

The broad arc of this trajectory has seen astonishing progress and improvement in human wellbeing over the course of history. In fact, the benefits of science, technology and open democratic societies have become so familiar that it is easy to neglect them. And yet progress can be somewhat dialectical. Some technological advances disrupt and decentralise power; others help consolidate and centralise it. Many of our paradigmatic advances, although conferring important gains, also end up generating new, unforeseen problems that we must then work to resolve.

The dialectics of progress

‘In 50 years, every street in London will be buried under nine feet of manure’ read the London Times headline in 1894. The newspaper editors had overlooked the automobile prototypes of the day, and within two decades it was the Model T Ford, rather than the horse and buggy, congesting city centres. Consequently, a century later, it is climate-destabilising automobile emissions, rather than horse manure, that we are seeking to displace. It is not that horses or internal combustion engines were bad; they solved real problems of their era. It is that their large-scale adoption created a new problem set, challenges to be resolved by a new generation of innovators.

When the emerging techno-economic paradigm falls out of alignment with the major socio-political institutions, and with the dominant cultural narrative that fosters their legitimacy, we confront a legitimation crisis. Legitimacy refers to the justification for and acceptance of asymmetric power relations; it is the means by which we render power palatable and enables the acceptance of forms of authority. A legitimation crisis thus indicates a rupture of public confidence in the ability of institutions and social orders to deliver on their putative goals.

The recent growth of the internet is one such rupture in the link between an emerging techno-economic paradigm and a lagging set of social institutions. Although frequently used as a single noun, ‘the internet’ is really an assemblage of network technologies, linked together by a shared set of protocols, that make communication possible via a distributed, global information system. Furthermore, the evolution of the internet can be separated into important phases characterised by new technical advances and applications: the early (pre-web) internet; Web 1.0; Web 2.0; and an emerging collection of technologies many are now calling Web 3.0. Each of these phases has been characterised by a different network architecture and a distinct array of incentive structures. We believe that the developments of Web 3.0 will bring new forms of managing identity, money and community into the world of digital networks, and in doing so will offer crucial democratising directions that offset the pathologies of the Web 2.0 society. However, alongside the roadmap of Web 3.0 technical infrastructure, we envision the birth of new organisational and institutional orders that will embed the value of the emerging technologies into everyday life. For this to happen, we need to construct a story that helps people make sense of these changes and supports constructive participation in this new era.


The birth of the internet

Resilient networks

In the 1960s, amid cold war rivalry and the prospect of nuclear attack, pioneering researchers such as Leonard Kleinrock at MIT, Donald Davies and Roger Scantlebury at the UK’s National Physical Laboratory (NPL) and Paul Baran at the RAND Corporation began working on telecommunications network models that could continue to function even if a large part of them were destroyed. Baran’s 1962 paper on this topic is the original source of the diagrams illustrating the anatomy of centralised, decentralised and distributed networks. The invention of ‘packet switching’ involved dividing information into packets that could be stored and forwarded at each node, with the best route to the destination determined along the way. Organising communications through this structure increased fault tolerance — if part of the network failed, communication signals could still reach their destination via an alternative route.
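
The rerouting idea is easy to see in miniature. The following Python sketch is an illustration only, not Baran’s actual routing scheme: the mesh topology and node names are invented. It searches for a route through a small distributed network and, when a node fails, simply finds another path:

```python
from collections import deque

def find_route(links, source, dest, failed=frozenset()):
    """Breadth-first search for a route, skipping failed nodes."""
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dest:
            return path
        for neighbour in links[node]:
            if neighbour not in visited and neighbour not in failed:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None  # destination unreachable

# A small mesh in which every node has redundant links.
links = {
    "A": ["B", "C"], "B": ["A", "D", "E"], "C": ["A", "D"],
    "D": ["B", "C", "F"], "E": ["B", "F"], "F": ["D", "E"],
}

print(find_route(links, "A", "F"))                # ['A', 'B', 'D', 'F']
print(find_route(links, "A", "F", failed={"D"}))  # reroutes: ['A', 'B', 'E', 'F']
```

In a centralised network, removing the hub severs every route; in the distributed mesh above, losing a node merely lengthens the journey.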

Baran’s frequently shared original distributed network diagram


This pioneering research community had a consequential realisation: for a communications system to be resilient, it had to be distributed. It was this distributed network model that became the foundational architecture for the internet. These advances in network technology were first applied to connect the computers of researchers and scientists in the defence community, but soon morphed into spontaneous forms of collaboration at a distance and experiments in virtual communities with the introduction of electronic mail in 1972. The introduction of the simple name@computer format by Ray Tomlinson saw email become the first ‘killer app’ of the internet era.

Open protocols

The next crucial developments happened in the early 1980s, with the development of the first internet protocol suite. The transmission control protocol (TCP) and internet protocol (IP), in concert with others like the simple mail transfer protocol (SMTP), laid the foundations for the internet becoming the global information infrastructure we have today. Communication protocols are sets of rules that allow data to be transmitted and read. Just like grammatical rules in human language, they provide a common set of standards which, when followed, enable parties to communicate. The general purpose design of these protocols means that any party can connect to the network, provided they follow the simple rules laid out in the protocol. Other advances were incorporated along the way: the development of ethernet out of Xerox PARC, the adoption of personal computers and modems, and scalable data structures such as the domain name system (DNS). But underpinning the various technical developments was a key idea — open architecture networking. The commitment to this principle among early internet communities enabled permissionless participation in the network with no global control at the operations level.
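
As a small illustration of what such shared rules look like in practice, here is a minimal Python sketch. It resolves a name through DNS, opens a TCP connection, and exchanges a few bytes using HTTP (a protocol covered in the next section) as the application-level grammar. The choice of example.com is arbitrary; any public host that follows the same open rules would do:

```python
import socket

host = "example.com"  # arbitrary public host, chosen only for illustration

# DNS: a shared naming protocol turns a human-readable name into an address.
print(socket.gethostbyname(host))

# TCP: a reliable byte stream to that address. Any machine that follows
# these open rules can join the network, with no permission required.
with socket.create_connection((host, 80), timeout=5) as sock:
    # An application protocol (here HTTP) supplies the 'grammar'
    # of the bytes exchanged over the stream.
    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    reply = sock.recv(1024)

print(reply.split(b"\r\n", 1)[0].decode())  # e.g. HTTP/1.1 200 OK
```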

Web 1.0

The network architecture of this early phase was constructed on open protocols, but only a small community could actually access the internet, primarily through military or academic institutions. The pioneering personal computer enthusiasts and internet users of the 1970s and 1980s evolved in largely separate social worlds. Throughout the 1980s, Tim Berners-Lee was working as a contractor at CERN, pondering how to map and keep track of the thousands of relationships between projects, researchers and the data on their computers. Early on, as an innovative hack, he designed a software program that used hypertext to create direct links indexing these relationships. The success of this project inspired Berners-Lee to consider a much bolder ambition — what if we could link all the information stored on computers, not just within a bounded corporate intranet, but in a global web of connected links? This would only be possible if everyone used a shared set of standards with which to exchange information. It was this vision of an open, permissionless web that directed the design of Hypertext Markup Language (HTML), Hypertext Transfer Protocol (HTTP) and Uniform Resource Locators (URLs), the key foundations of this next phase of the information revolution. History could have looked quite different from this point — when CERN attempted to patent the web protocols, Berners-Lee refused, demanding that they be open and remain in the public domain. Part altruism, part pragmatism, this decision helped rapidly scale adoption of the web standards. This openness would enable anyone, anywhere in the world, to link and unlink web pages without seeking permission. Just like Paul Baran’s earlier vision of telecommunication networks, there would be no global command hub, no central node. Individuals would be free to craft new strands as they saw fit; links could spread out and contract in a state of dynamic flux; but collectively this would gradually weave the world of human information together into a global web.
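
To see how the three standards compose, here is a minimal Python sketch using only the standard library (the target URL is arbitrary): a URL names a page, HTTP retrieves it, and the HTML it returns contains the hyperlinks that weave pages into a web:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    """Record the href of every <a> tag, i.e. the strands of the web."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

url = "https://example.com/"  # any public page will do; no permission needed
page = urlopen(url).read().decode("utf-8", errors="replace")

collector = LinkCollector()
collector.feed(page)

for href in collector.links:
    print(urljoin(url, href))  # resolve relative links against the page's URL
```

Run this on any page and feed the discovered URLs back in, and you have the skeleton of a web crawler: the permissionless linking described above is what made such traversal, and later search, possible.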

In 1993 the National Information Infrastructure Act in the USA enabled public access and commercial activity on the internet for the first time. As strange as it sounds today, up until this point it was actually illegal to connect the early ‘online’ applications like The WELL and America Online to the global internet. Now the gates were open, and netizens could venture out of the walled gardens, but the view wasn’t so pretty, at least for people who were looking for more than blinking cursors and text. The web protocols had given us the power to create links between files, but it was the development of web browsers, like Mosaic by Marc Andreessen and Eric Bina, that made the experience of the internet attractive for the masses. Now not only were the gates open, but the views were attractive. One measure claims internet use increased by 300,000% in 1993. Search engines provided a map and set the foundation for internet use to reach half the planet’s population today. This layering of the world wide web over the open architecture of the internet has become one of the most revolutionary technological innovations in human history.

Web 2.0

The web was the killer app of the internet in the 1990s. But for most early users the experience of ‘surfing the web’ principally involved consuming information. There were, to be sure, pioneering content creators from the earliest days (for example, here’s what one of the first blogs looked like in 1994). Publishing a web page simply required access to a server and some rudimentary knowledge of HTML, but for many early users this bar was still sufficiently high to render the experience ‘read only’. Unlike the creation of the Web, there was no single bright technical line to be crossed on the path to what we now think of as Web 2.0, or the ‘read-write’ web. Tim Berners-Lee’s invention was originally imagined as a way to collaboratively share documents between researchers, not as an ecommerce platform, much less a sedimentary layer on which to run an entire economy. Consequently it lacked many basic components necessary for commercial activity. In 1994 Netscape created an encrypted transfer protocol (HTTPS) and HTTP cookies that would enable features like shopping carts. Engineers got to work on various browser plugins that offered richer media experiences like audio and video. Other software developments like ‘wikis’, JavaScript and AJAX enabled websites to become more interactive and load data more efficiently. Hardware developments like broadband uncluttered telephone lines and supported heavier data consumption; laptops and wifi further privatised and personalised internet use in the home. Applications like Blogger reduced the barriers to creating a personal site, making publishing content on the internet as easy as creating a Word document.
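
Cookies deserve a moment of illustration, since they are what gave the otherwise stateless HTTP protocol a memory. The sketch below uses Python’s standard http.cookies module to show the round trip; the cart identifier is invented for the example:

```python
from http.cookies import SimpleCookie

# Server side: a Set-Cookie header gives stateless HTTP a memory,
# which is what makes features like shopping carts possible.
cookie = SimpleCookie()
cookie["cart_id"] = "a1b2c3"         # hypothetical cart identifier
cookie["cart_id"]["path"] = "/"
cookie["cart_id"]["max-age"] = 3600  # forget the cart after an hour
print(cookie.output())               # Set-Cookie: cart_id=a1b2c3; Max-Age=3600; Path=/

# Client side: the browser returns the value on every subsequent request,
# letting the server recognise the same visitor across page loads.
returned = SimpleCookie()
returned.load("cart_id=a1b2c3")
print(returned["cart_id"].value)     # a1b2c3
```

The same mechanism that remembers a shopping cart also recognises a visitor across sites, a duality that becomes important later in this story.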

If the technical history is one of gradual accretion, the social history is marked by a clear divide. In March 2000 the NASDAQ lost nearly a trillion dollars in valuation as the dot-com boom crashed. Companies and valuations evaporated like puffs of vaporware. Yet the iconic internet companies that emerged from the crash looked different – they built products designed to evolve through interactions with human users. The dominance of interactive web browsers, and the growth of the cloud computing infrastructure that supported them, converged in a new philosophy of product design and release. ‘Constant beta’, first championed by Google, saw companies begin to ship their digital products in endless iterations, each one incorporating feedback from the data generated by users. Amazon encouraged its users to write reviews of products, and its recommendation engine learned more about what to suggest from every customer purchase. Google’s PageRank algorithm continually improved by watching and learning from the search results selected by its human users. In one of the most remarkable examples, Wikipedia built the world’s foremost encyclopedia from a standing start by inviting users to collaboratively edit its content. These new models deepened the human-machine symbiosis by massively expanding the scope for human interaction and content creation, and by incorporating this activity into product development through learning algorithms.
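
The core intuition behind PageRank, that a link is a vote of confidence and votes from important pages count for more, fits in a few lines. The sketch below is a textbook power-iteration version over an invented three-page graph, not Google’s production algorithm; the click-based learning described above was layered on top of this kind of link analysis:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Plain power-iteration PageRank over a dict of page -> outbound links."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1 / n for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1 - damping) / n for page in pages}
        for page, outlinks in links.items():
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share  # each link is a weighted 'vote'
        rank = new_rank
    return rank

# An invented three-page web: 'home' attracts the most votes.
links = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about"],
}

for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```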

From open protocols to centralised platforms

In the Web 2.0 era, the platform became the dominant model for organising an enterprise. The logic was to grow as quickly as possible, to take advantage of the reinforcing dynamics of network effects (made concrete in the sketch below). These competitive pressures pushed a single enterprise into a dominant position within each category: no longer a ‘big four’ as in the professional services industry, but a single player in search, social networking or professional profiles. The fastest way to grow was to offer services for ‘free’.
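
One common way to make those reinforcing dynamics concrete is Metcalfe’s observation that a network’s potential value scales with the number of possible connections between its users. The arithmetic below is our illustration, not a claim from the original text:

```python
def possible_connections(n):
    """Number of distinct pairwise links between n users: n(n-1)/2."""
    return n * (n - 1) // 2

for users in (10, 100, 1_000, 10_000):
    print(f"{users:>6} users -> {possible_connections(users):>10} possible connections")
# 10 -> 45; 100 -> 4,950; 1,000 -> 499,500; 10,000 -> 49,995,000
```

Because value grows roughly with the square of the user base, the largest network in a category is disproportionately more attractive than the second largest, which is why growth, not profit, was the early objective.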

There were high hopes for this phase of the social web: excitement at the prospect of virtual communities, voluntary citizen journalism and a general expansion of the open, meritocratic spirit of early internet culture. Scholars and commentators attempted to make sense of it, discussing blended categories of ‘prosumers’ and ‘produsers’, venerating the wisdom of crowds, or describing models of commons-based peer production.

Social media entrepreneurs, however, observed the emerging trend of distributed, ‘free’ content creation and social interaction in the burgeoning blogosphere and created a new set of attractive gardens in which to channel these desires. But these new gardens had walls. The introduction of smartphones in 2007 accelerated this dynamic by an order of magnitude. Gradually the notion of ‘being online’ faded, an obsolete distinction as we began to carry ubiquitously connected devices in our pockets. Accessing the internet crossed over from the wired computer to the wireless device, accompanied by embedded geolocators and frequently reduced privacy settings. The web’s transition to the smart device also pushed the experience in a more intimate direction. Phones can be absent-mindedly flipped through on public transport, cradled horizontally on the couch or in bed. They helped the digital reach every corner of social life, from learning to shopping, working to travelling, dating to parenting.

But free access to web services for individuals came at a collective cost. Surveillance and data extraction in exchange for advertising dollars became the primary business model that funded the Web 2.0 paradigm. Expectations of profits and growth turned digital platforms towards ever more creative methods of extracting, analysing, using and selling data. The boundaries between spying and marketing became disturbingly blurred. What first appeared as a powerful democratic wave led by a new generation of creative entrepreneurs has become hobbled by some core structural dysfunctions, a systemic victim of its early success. The gardens were enticing, but their walls ushered in a dark age in the internet’s development.

The early design protocols of the Web built a distributed peer-to-peer system for sharing information, but neglected at least three features that would turn out to be critical in constructing a healthy digital society: identity, community (or at least a portable social graph) and money. As a consequence, proprietary systems of surveilled identity and social networking, together with a pre-digital financial system, have filled the gap. The core problems of Web 1.0 were solved with open protocols, but the new challenges of Web 2.0 were addressed through private web services. The consequence is a distorted network society, exhibiting our current suite of societal pathologies.

March 2018’s WIRED magazine front cover.


The social pathologies of Web 2.0

There are four problematic facets to our current Web 2.0 model of a digital society.

Security

The cloud-based web services of Web 2.0 required central repositories of data. This was partly to offer convenience: our ability to interact with the applications required them to ‘know’ things about us — our credit card details for payments, our interests and personal networks for social media. But the methods for proving our identities were built on incredibly brittle systems, often simply an email address and password. Moreover, the central databases in which the growing aggregate pools of data are stored offer tremendously alluring ‘honeypots’ for hackers. There are huge incentives for cybercrime in the current arrangements, and an entire industry of identity theft has blossomed as life has become ever more digitised.
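
For contrast, the sketch below shows what less brittle credential storage looks like: a unique random salt per user and a deliberately slow key-derivation function. The parameters here are illustrative assumptions, not any specific platform’s scheme, and even done well this mitigates rather than removes the honeypot problem:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a slow, salted hash; store (salt, digest), never the password."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Constant-time comparison to avoid leaking information via timing."""
    return hmac.compare_digest(hash_password(password, salt)[1], digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("password123", salt, digest))                   # False
```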

Economic

Identity, proving who you are and what rights you have, is what is called a ‘base layer’ of infrastructure in a digital world. Like language and railway gauges, there are tremendous advantages conferred by adopting a universal standard and making it freely available in the public domain. The open standards of the early phases of the internet are what enabled it to become such a fertile ecosystem for innovation in the first place. But identity on the web was not a problem addressed through the careful design of open standards; it was de facto ‘solved’ by the proprietary ‘log in’ systems of Google, Facebook and Twitter. It is no accident that the Web 2.0 economy has been marked by an extraordinary winner-take-all dynamic. In times of unsettling economic inequality, the companies with the highest market capitalisations in the world are all platform enterprises. While the early phase of a digital platform is often characterised by genuinely novel innovations that attract users, network effects can lead to a consolidation of near monopolistic power. As individuals increasingly rely on digital services, it becomes harder to leave the platforms without incurring significant costs. The power wielded by platform owners can convert to extractive, rent-seeking behaviour: raising taxes on producers, arbitrarily excluding participants and otherwise changing the rules of engagement. These arrangements leave little incentive to invest in improving the underlying web protocols, especially if improvements in identity or social network management would empower user sovereignty and mobility at the expense of platform monopolies.

Social

The advertising-based revenue models that underpin the major Web 2.0 applications have created a now all too familiar distortion in our global information system. The social and cognitive biases that attract us towards clicking on the outrageous are not new. Just like our inherited taste for sweet flavours, our information preferences likely conferred some adaptive advantage in the evolutionary context of our ancestors. The 20th century system of information production via newspapers, radio and television certainly wasn’t perfect, but the assemblage made up of the public identities of journalists, codes of ethics, norms of employment and the regulatory power of the state generally provided disincentives to produce blatantly fake news. Furthermore, the narrower media space of print and broadcasting often forced competing views into the same spaces of interaction, usually under the aegis of relatively civilised norms of debate. Web 2.0 shattered this system. The story of a fragmented media space of self-affirming echo chambers and abusive anonymous comments is familiar enough not to be recounted here. But what is frequently overlooked is that much of the content fuelling these fires is driven by an advertising model that purely rewards clicks. And what turned out to maximise clicks? The moral-emotive over the factual-reasoned, and the extreme over the measured. This discovery has pointed fake news farms and social media algorithms towards feeding us increasingly polarised opinions, hardening ideological positions and deepening social division in the political sphere.

Psychological

The cost of this arrangement is also deeply personal. Advertising-dependent applications attract revenue by maximising the time users spend on their sites and interacting with their features. This is, after all, the way to maximally extract data. The easiest way to do this is to employ the most addictive principles of interaction design in what amounts to a zero-sum race for our finite attention. Research has long pointed out that humans are susceptible to making choices that are not in our best interest, that do not maximise our long-term wellbeing. Choosing not to stock the fridge at home with sugary foods is one way we arrange our decision environment to avoid snap choices we may regret, like late-night sugar binges. The advertising-dependent incentive models of the digital economy have misaligned the architecture of choices frequently presented to us with the decisions that promote our deeper wellbeing. The social media information diet provided on tap is actually designed to hijack our neurology, to keep us scrolling and clicking while our better angels might implore us to stop.

These issues ultimately result from underlying design decisions at critical junctures in the evolution of the web. Directly, they result from an alignment problem between the short term commercial interests of Web 2.0 application enterprises and the longer term interests of their human users. In a deeper sense, they stem from a shift in the direction of innovation away from the open protocols and permissionless access of the earlier phases towards the proprietary algorithms and centralised control of our current era.

The celebrated mantra of the Web 2.0 phase, move fast and break things, turned out to be perversely successful. The leading platforms did move fast, and they also broke things. In doing so they have begun to imperil the careful balance of institutional power that maximised liberty, equality, solidarity and dignity at the heart of healthy societies. But the emerging phase of Web 3.0 has the potential to correct this trajectory and renew the founding spirit of the early internet and open web. In our next article we will discuss how the Web 3.0 paradigm can incentivise a return to the decentralised architecture, open protocols and community governance of the early phases of the internet and web. This is our core purpose at Typehuman: to accelerate the adoption of Web 3.0. We would be delighted to have you join us on this journey.

References and further reading

This article was developed by Dr. Julian Waters-Lynch in partnership with James Eddington and Nick Byrne. Originally published on Medium on 27 February 2018.

Here is a selection of key sources that have also contributed to the ideas presented here: