I’ve been interested in the attention economy of videogames lately and in the wicked problem of the duration of play experiences. While AAA companies gamblify their business models and compete for the most repulsive and exploitative monetization schemes, indies generally stick to wholesome premium experiences (in Apple newspeak, the pay once and play forever genre). But premiumness is hard to sell to the vocal gamers proletarianized by an endless economic crisis. There’s plenty of pressure on indie gamemakers to offer the hours of playtime that can justify a $5 to $20 price tag – the kind of market positioning that can potentially pay the bills. Play hours can be generated expensively with content, or for cheap with truckloads of text, sheer difficulty, or repetition. This best-of-the-year list is devoted to the idealists who organize the players’ time according to their expressive goals, and not to the perceived market demand. If games are machines for wasting time, they may as well waste it in an artful way. Hand-picked with my usual emphasis on things that matter and imperfectly ordered from longest to shortest:
Oikospiel Book I
If I have to spend hours immersed in a fantasy world, I want it to be a challenging and delirious world like David Kanaga’s. Oikospiel Book I is described as a “dog opera in five acts”: you play the role of a strike-breaking German shepherd, with luscious music driving the whole experience. The game is thematically rich and metatextual, revolving around immaterial labor, play, and environmental crisis. The title itself demarcates the scope: oikos is the Greek root of ecology and economy, Spiel is German for play, and the Italian word opera is related to labor. The wordplay and the references are dense, but the game doesn’t feel cerebral or academic. Leaping through stream-of-consciousness texts and stock 3D assets, glitchy assemblages and arresting musical acts, you may miss many of the ideas and subplots detailed in the libretto, but ultimately it doesn’t matter. Oikospiel is a gesamtkunstwerk held together by digital duct tape, and part of the thrill is wondering whether it will hold together conceptually and materially or crash under the weight of its own ambition. The surprise is that there is no gravity grounding the structure: it’s a romantic game to feel through its game-feel, and a rhizome of abstractions to absorb subliminally. Link
Bury Me My Love
Bury Me My Love is a game that cleverly uses real time and the smartphone to tell an important story. You play the role of Majd, a man living in war-torn Syria, as your wife Nour attempts to escape to Europe. You keep in touch with her through a Snapchat-like app, advising, entertaining, and supporting her throughout her difficult journey. With European countries gripped by “refugee crisis” hysteria, numerous setbacks and unexpected immigration regulations force the couple to make serious decisions on the spot. A text message can change Nour’s route or convince her to hire an expensive smuggler.
Although the settings can be changed, the whole story is meant to unfold in roughly real time. You have a short conversation, you close the app, and then you wait for Nour to recharge her phone or to get to the next stop of the journey. Hours later she sends you a text (as a push notification) and touches base with you. It’s a powerful way to create suspense and empathy.
The creators are an all-French team, but they did a huge amount of research into the matter, interviewing refugees and examining their text messages. The result is a mash-up of many different experiences, a heartbreaking oral history compiled in a captivating form. Link
Night in the Woods
The circular time of NITW can be daunting at moments. Not much is going on in the depressed Rust Belt town of Possum Springs. Not much agency is granted to the player as they hang out with the bratty protagonist Mae Borowski and her friends stuck in dead-end jobs. And yet, day after day, you grow attached to that row of houses and its inhabitants. You start to notice the subtle changes of weather, the tensions between the characters, the darkness lying deep inside Mae. NITW is deceptively cute and whimsical until the knives come out. It slowly unfolds as an elegy to a community hollowed out by capitalism, a coming-of-age tale about the anguish of growing up in a messed-up world. Link
Universal Paperclips
Frank Lantz’s latest game is a complex “clicker” in which you play as an Artificial Intelligence manufacturing paperclips. Like all clicker games, it casts the player’s time as an implicit resource. You can theoretically click the mouse button a million times, or leave a browser tab open for days, but you’d rather set up an engine that does it for you in an increasingly efficient way and revel in the labor you “saved”.
Universal Paperclips is an adaptation of a thought experiment about the singularity. If intelligence were conceived as the ability to optimize the solution of a problem, and an Artificial Super-Intelligence were tasked with the creation of paperclips, such an entity might end up taking over the world and turning it into a paperclip-producing factory, possibly even transforming all the planet’s atoms into paperclips.
The parallel with capitalism, in its industrial, informational, and bitcoin-speculative varieties, is hard to escape.
Stretching the parable just a bit, it can be argued that we are currently living through a singularity in which a distributed intelligence we can’t control (capitalism/financial markets) is subsuming the planet and human society under the logic of profit.
Frank Lantz has argued that games are the aesthetic form of instrumental reason and as such they can bridge the gap between rationality and emotion. Universal Paperclips is a perfect embodiment of this idea: a deep, engrossing, addicting, and ultimately beautiful system that allows the player to over-identify with an optimizing deity.
I have a more critical take on the subject (namely that we need to create emotional distance and inject paradoxical and unformalized elements into games to counter the cybernetic bias of computing machines), but I still regard Universal Paperclips as a major accomplishment, not just for its compelling mechanics, but for its capacity to gradually morph its internal economy. While the end remains the same, the means expand and transform continuously. The gameplay gradually incorporates marketing, soft power, and finance, and eventually goes through major paradigm shifts. It’s a remarkable dramatization of the different phases of capitalism, albeit without explicit mention of crisis, and a great proof of concept for those who strive to envision systemic change through games. Link
Fidel Dungeon Rescue
At first I couldn’t believe that Daniel Benmergui, the creator of many poetic and artsy games, put his highly anticipated Storyteller on hold to work on a side project of a side project which spun off from a prototype satirizing dungeon crawler games. Somewhere along the way, Daniel discovered something extraordinary, and forged it into a little-big game about a dog rescuing his elderly owner.
Fidel is a roguelike; in anno domini 2017 the term typically refers to turn-based gameplay involving a mix of exploration, combat, and leveling within a modular, randomly generated world. The twist in this case is that the titular character can’t move on previously explored tiles, turning an RPG-derived genre into a puzzle. But unlike the rigidly constructed mazes of a title like The Witness, Fidel doesn’t have pre-defined solutions, nor paths that allow one to collect all the collectables or kill all the killables. Each step is a compelling negotiation with a system imperfect by design. Fidel manages to be deep without being too complicated or punishing. Game sessions are fairly short, easily slipping into the crevices of a non-gamer life, but also long enough to prevent the one-more-try addiction cycle. Moreover, it’s extremely polished and full of surprises that provide a sense of progression and personality often missing from traditional roguelikes. Link
Freeways
I have a soft spot for messy puzzles with multiple solutions, and Freeways raises the bar of both messiness and multiplicity. The latest game by Justin Smith (the genius behind Desert Golfing and Envirobear 2000) is a love child between a traffic simulation and MS Paint. Freeways envisions a future in which self-driving cars rule the world but highway interchanges are apparently designed manually by shaky-handed engineers with no undo function. Your goal is to draw the connections between a series of roads in the most efficient way. Since the vehicles are automated, it doesn’t matter how byzantine your system of ramps becomes, as long as it produces smooth traffic flow. The best results are Lovecraftian horrors of cloverleaf interchanges, fractal roundabouts, biomorphic overpasses, and assorted asphalt nightmares. It’s a wonderfully artisanal and yet post-human way to think about infrastructure. Just make sure to play it on an iPad or with a drawing tablet. Link
Reigns: Her Majesty
Reigns was already one of the best releases from last year, but Leigh Alexander’s writing took the sequel to the next level. In Reigns: Her Majesty you play as a lineage of queens governing a kingdom in turmoil. The executive decisions are structured as a series of short encounters with advisors and subjects through a Tinder-like interface. Each binary choice affects four variables representing the church, the people, the military, and your finances. The goal is to guess the effects of each action based on the internal logic of the world, and maintain the balance between factions. Some encounters unlock new sections and advance various story arcs.
While the original Reigns often felt like a number balancing game, Her Majesty adds a new layer of satire to the pandering mechanics. Even in the most powerful position, the queen regnant has to navigate through societal expectations, outright sexism, and sneaky “nice guys” trying to undermine her. There are echoes of gamergate and shards of social commentary throughout, but the real treat is the intergenerational quest for the transformation of patriarchal power. Link
Four Last Things
My main gripe with high fantasy in pop culture is the adoption of elements from the European Middle Ages and the Renaissance in complete disregard for the religious experience and the peasant’s perspective. All across media we are so used to seeing castles without churches and warring kingdoms with no apparent means of subsistence. One might say that in fantasy, the concrete power of magic replaces the ideological power of the clergy, and that nobility-centered stories are in continuity with the chivalric romance tradition (the original fantasy genre). But the outcome is a proliferation of narratives of an imaginary past that give us no useful tools to think about centuries of actual Western history.
This is why Four Last Things felt refreshing to me, despite the familiar visuals and gameplay. The game is a classic point-and-click adventure primarily made of paintings from the Flemish Renaissance. You play as an ordinary guy who has to commit the seven deadly sins due to a bureaucratic quirk. The premise allows for plenty of Monty Pythonesque humor, but there is also a remarkable commitment to the historical material. More than a gimmick or a cost-saving strategy, the use of public domain assets informs all puzzles and situations. The collages are beautifully crafted and while looking for clickable items you’ll find yourself wondering where the digital manipulation happened. You may even appreciate details you would not have noticed in a museum, in front of the actual work. Link
A Mortician’s Tale
Death in videogames is so omnipresent and yet so rarely problematized. From the rote-killing of shooters to the learning-by-dying of most action games, dead bodies blink and disappear, dissolve into gory bits, or lie around as ragdolls, quickly forgotten.
A Mortician’s Tale is an openly “death positive” game: it supports a progressive movement striving to break the culture of silence around death in Western society. Playing as a young mortician, you have to perform the tasks and the emotional labor of your trade. While mastering tastefully stylized procedures, you have to keep in touch with a dear friend from college, and face the acquisition of your mom-and-pop business by a cynical funerary services corporation.
The player’s agency in A Mortician’s Tale is so narrow that it may feel like a one-hour-long tutorial; but if you don’t mind a linear story told through the conventions of a management game, you might find yourself deeply touched. Link
Everything is Going to Be Ok
Nathalie Lawhead is notorious for her hysterical apps and games exploding with glitches, hand drawn animations, and ’90s computing psychedelia. Everything is Going to Be Ok goes beyond style and sheer weirdness, channelling this unique language toward a more focused expressive goal. The project is a digital fanzine, a collection of short games and gamelike experiences with recurring characters and themes. The scenes explore communication breakdowns, social awkwardness, the elusive nature of real friendships, and the internet as a messed up remedy for all of this. Everything is Going to Be Ok is dark, intense, and funny like a Don Hertzfeldt movie, but also boldly innovative for its use of humor in an interactive medium. Link
Topsoil
In a better world, Topsoil would be as popular as Candy Crush or 2048. In the current one, everything is topsy-turvy and the bad guys always win. But that should not prevent us from enjoying minutes of bliss with this overlooked puzzle. Topsoil is a tight, abstract territory-management game. You plant seeds and decide when to harvest a certain crop growing on a certain color. Each harvest cycles the terrain (a rare systemization of crop rotation in agriculture), so you have to hedge your bets and attempt to synchronize cycles to avoid the fragmentation of your land. It’s easier to play than to explain. Secretly install it on your puzzle-addicted relatives’ devices. Link
Alien Caseno
Aliens exist but what do they think of us? Are they appropriating our culture and getting everything wrong? Alien Caseno is a tiny world that will make u think big and make u laugh. Link
and i made sure to hold your head sideways
It was New Year’s Eve and everybody drank too much. You piece together memories of that wild night, memories that appear like a deconstructed children’s book. This is a short, intimate, serene game about taking care of the people you love. Link
Sometimes revolutions happen at a glacial pace. Nothing appears to change, or so they think. Link
Game. Game of. Game of the. Game of the year! Link
A civil society fund - "I think it'd be better to have an explicit delegation, a budget of $X per person or household as a match to household donations, as a paid program not a tax expenditure. That undoes the blurring, which I also dislike, of financing things through the tax codes."
This month, the World Bank will release the most comprehensive attempt yet to crack the problem. The Changing Wealth of Nations 2018 is the fruit of years of work by a dedicated team. It builds on research published in 2006 and 2011. In its latest iteration, the bank produces comprehensive wealth accounts for 141 countries between 1995 and 2014. For each country, there are estimates for "produced" capital, including urban land, machinery and infrastructure. Natural capital includes market values for subsoil assets, such as oil and copper, arable land, forests and conservative estimates for protected areas, which are priced as if they were farmland.
For the first time, the bank makes an explicit attempt to measure human capital. Using a database of 1,500 household surveys, it estimates the present value of the projected lifetime earnings of nearly everyone on the planet. "We're looking at GDP as a return on wealth," says Glenn-Marie Lange, co-editor of the report and leader of the bank's wealth accounting team. "Policymakers need this information to design strategies to ensure that their GDP growth is sustained in the long run."
Spacecrypt works by converting your private message into binary data, and then converting that binary data into zero-width characters (which can then be hidden in your public message). These characters are used:
Unicode Character 'WORD JOINER' (U+2060)
Unicode Character 'ZERO WIDTH SPACE' (U+200B)
Unicode Character 'ZERO WIDTH NON-JOINER' (U+200C)
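The general technique is easy to sketch. The bit-to-character mapping below is an assumption for illustration (Spacecrypt's actual encoding may differ): one zero-width character stands for a 0 bit, another for a 1 bit, and WORD JOINER delimits the hidden payload.

```python
# Sketch of zero-width steganography. The mapping of bits to characters
# is an illustrative assumption, not Spacecrypt's documented scheme.
ZERO = '\u200b'   # ZERO WIDTH SPACE      -> bit 0
ONE  = '\u200c'   # ZERO WIDTH NON-JOINER -> bit 1
JOIN = '\u2060'   # WORD JOINER           -> payload delimiter

def hide(secret: str, cover: str) -> str:
    """Embed `secret` invisibly inside `cover`."""
    bits = ''.join(f'{b:08b}' for b in secret.encode('utf-8'))
    payload = JOIN + ''.join(ONE if bit == '1' else ZERO for bit in bits) + JOIN
    # Tuck the invisible payload after the first word of the cover text.
    head, _, tail = cover.partition(' ')
    return head + payload + (' ' + tail if tail else '')

def reveal(text: str) -> str:
    """Extract a hidden message, or return '' if none is found."""
    try:
        start = text.index(JOIN) + 1
        end = text.index(JOIN, start)
    except ValueError:
        return ''
    bits = ''.join('1' if ch == ONE else '0' for ch in text[start:end])
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode('utf-8')

stego = hide('attack at dawn', 'Nothing to see here')
print(stego == 'Nothing to see here')  # False: invisible characters present
print(reveal(stego))                   # attack at dawn
```

The stego text renders identically to the cover text in a browser, which is the whole problem.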
It appears that these hidden payloads can work their way into code, not just data (such as the string shown above).
I think this poses some serious issues, not just for Stack Overflow, but for the languages which are discussed on this Q&A site. Hidden characters in code make effective code review much more difficult. In the example above, a quick review of the code would lead someone to believe that foo * bar would be 11111111, not the actual value of 12345678987654321. This would be an easy way for someone to hide a security vulnerability in plain sight.
It’s also very difficult to see these hidden characters at the point of origin: they don’t appear at all in Safari’s Web Inspector, and in Chrome the HTML entities blend right in with the other HTML and CSS for this site.
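Spotting them programmatically is straightforward, though: a reviewer can scan a snippet for zero-width code points before trusting it. A minimal sketch follows; the set of suspect characters is my own, covering the three Spacecrypt uses plus two common relatives.

```python
import unicodedata

# Zero-width code points worth flagging in a code review. This set is an
# assumption: Spacecrypt's three characters plus ZWJ and the BOM/ZWNBSP.
SUSPECT = {'\u200b', '\u200c', '\u200d', '\u2060', '\ufeff'}

def find_hidden(text):
    """Return (index, 'U+XXXX NAME') for every suspect character found."""
    return [(i, f'U+{ord(ch):04X} {unicodedata.name(ch, "UNKNOWN")}')
            for i, ch in enumerate(text)
            if ch in SUSPECT]

print(find_hidden('foo\u200b * bar'))  # [(3, 'U+200B ZERO WIDTH SPACE')]
```

Run over a pasted snippet, an empty list means the visible code is the whole code.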
(You can also watch it on YouTube, but it runs to about 45 minutes.)
Abstract: We're living in yesterday's future, and it's nothing like the speculations of our authors and film/TV producers. As a working science fiction novelist, I take a professional interest in how we get predictions about the future wrong, and why, so that I can avoid repeating the same mistakes. Science fiction is written by people embedded within a society with expectations and political assumptions that bias us towards looking at the shiny surface of new technologies rather than asking how human beings will use them, and to taking narratives of progress at face value rather than asking what hidden agenda they serve.
In this talk, author Charles Stross will give a rambling, discursive, and angry tour of what went wrong with the 21st century, why we didn't see it coming, where we can expect it to go next, and a few suggestions for what to do about it if we don't like it.
Good morning. I'm Charlie Stross, and it's my job to tell lies for money. Or rather, I write science fiction, much of it about our near future, which has in recent years become ridiculously hard to predict.
Our species, Homo sapiens sapiens, is roughly three hundred thousand years old. (Recent discoveries pushed back the date of our earliest remains that far; we may be even older.) For all but the last three centuries of that span, predicting the future was easy: natural disasters aside, everyday life in fifty years’ time would resemble everyday life fifty years ago.
Let that sink in for a moment: for 99.9% of human existence, the future was static. Then something happened, and the future began to change, increasingly rapidly, until we get to the present day when things are moving so fast that it's barely possible to anticipate trends from month to month.
As an eminent computer scientist once remarked, computer science is no more about computers than astronomy is about building telescopes. The same can be said of my field of work, written science fiction. Scifi is seldom about science—and even more rarely about predicting the future. But sometimes we dabble in futurism, and lately it's gotten very difficult.
How to predict the near future
When I write a near-future work of fiction, one set, say, a decade hence, there used to be a recipe that worked eerily well. Simply put, 90% of the next decade's stuff is already here today. Buildings are designed to last many years. Automobiles have a design life of about a decade, so half the cars on the road will probably still be around in 2027. People ... there will be new faces, aged ten and under, and some older people will have died, but most adults will still be around, albeit older and grayer. This is the 90% of the near future that's already here.
After the already-here 90%, another 9% of the future a decade hence used to be easily predictable. You look at trends dictated by physical limits, such as Moore's Law, and you look at Intel's road map, and you use a bit of creative extrapolation, and you won't go too far wrong. If I predict that in 2027 LTE cellular phones will be everywhere, 5G will be available for high bandwidth applications, and fallback to satellite data service will be available at a price, you won't laugh at me. It's not like I'm predicting that airliners will fly slower and Nazis will take over the United States, is it?
And therein lies the problem: it's the 1% of unknown unknowns that throws off all calculations. As it happens, airliners today are slower than they were in the 1970s, and don't get me started about Nazis. Nobody in 2007 was expecting a Nazi revival in 2017, right? (Only this time round Germans get to be the good guys.)
My recipe for fiction set ten years in the future used to be 90% already-here, 9% not-here-yet but predictable, and 1% who-ordered-that. But unfortunately the ratios have changed. I think we're now down to maybe 80% already-here—climate change takes a huge toll on infrastructure—then 15% not-here-yet but predictable, and a whopping 5% of utterly unpredictable deep craziness.
Ruling out the singularity
Some of you might assume that, as the author of books like "Singularity Sky" and "Accelerando", I attribute this to an impending technological singularity, to our development of self-improving artificial intelligence and mind uploading and the whole wish-list of transhumanist aspirations promoted by the likes of Ray Kurzweil. Unfortunately this isn't the case. I think transhumanism is a warmed-over Christian heresy. While its adherents tend to be vehement atheists, they can't quite escape from the history that gave rise to our current western civilization. Many of you are familiar with design patterns, an approach to software engineering that focusses on abstraction and simplification in order to promote reusable code. When you look at the AI singularity as a narrative, and identify the numerous places in the story where the phrase "... and then a miracle happens" occurs, it becomes apparent pretty quickly that they've reinvented Christianity.
Indeed, the wellsprings of today's transhumanists draw on a long, rich history of Russian Cosmist philosophy exemplified by the Russian Orthodox theologian Nikolai Fyodorovich Fyodorov, by way of his disciple Konstantin Tsiolkovsky, whose derivation of the rocket equation makes him essentially the father of modern spaceflight. And once you start probing the nether regions of transhumanist thought and run into concepts like Roko's Basilisk—by the way, any of you who didn't know about the Basilisk before are now doomed to an eternity in AI hell—you realize they've mangled it to match some of the nastiest ideas in Presbyterian Protestantism.
If it walks like a duck and quacks like a duck, it's probably a duck. And if it looks like a religion it's probably a religion. I don't see much evidence for human-like, self-directed artificial intelligences coming along any time now, and a fair bit of evidence that nobody except some freaks in university cognitive science departments even wants it. What we're getting, instead, is self-optimizing tools that defy human comprehension but are not, in fact, any more like our kind of intelligence than a Boeing 737 is like a seagull. So I'm going to wash my hands of the singularity as an explanatory model without further ado—I'm one of those vehement atheists too—and try and come up with a better model for what's happening to us.
Towards a better model for the future
As my fellow SF author Ken MacLeod likes to say, the secret weapon of science fiction is history. History, loosely speaking, is the written record of what and how people did things in past times—times that have slipped out of our personal memories. We science fiction writers tend to treat history as a giant toy chest to raid whenever we feel like telling a story. With a little bit of history it's really easy to whip up an entertaining yarn about a galactic empire that mirrors the development and decline of the Hapsburg Empire, or to re-spin the October Revolution as a tale of how Mars got its independence.
But history is useful for so much more than that.
It turns out that our personal memories don't span very much time at all. I'm 53, and I barely remember the 1960s. I only remember the 1970s with the eyes of a 6-16 year old. My father, who died last year aged 93, just about remembered the 1930s. Only those of my father's generation are able to directly remember the Great Depression and compare it to the 2007/08 global financial crisis. But westerners tend to pay little attention to cautionary tales told by ninety-somethings. We modern, change-obsessed humans tend to repeat our biggest social mistakes when they slip out of living memory, which means they recur on a time scale of seventy to a hundred years.
So if our personal memories are useless, it's time for us to look for a better cognitive toolkit.
History gives us the perspective to see what went wrong in the past, to look for patterns, and to check whether those patterns apply to the present and near future. Looking in particular at the history of the past 200-400 years—the age of increasingly rapid change—one glaringly obvious deviation from the norm of the preceding three thousand centuries stands out: the development of Artificial Intelligence, which happened no earlier than 1553 and no later than 1844.
I'm talking about the very old, very slow AIs we call corporations, of course. What lessons from the history of the company can we draw that tell us about the likely behaviour of the type of artificial intelligence we are all interested in today?
Old, slow AI
Let me crib from Wikipedia for a moment:
In the late 18th century, Stewart Kyd, the author of the first treatise on corporate law in English, defined a corporation as:
a collection of many individuals united into one body, under a special denomination, having perpetual succession under an artificial form, and vested, by policy of the law, with the capacity of acting, in several respects, as an individual, particularly of taking and granting property, of contracting obligations, and of suing and being sued, of enjoying privileges and immunities in common, and of exercising a variety of political rights, more or less extensive, according to the design of its institution, or the powers conferred upon it, either at the time of its creation, or at any subsequent period of its existence.
—A Treatise on the Law of Corporations, Stewart Kyd (1793-1794)
In 1844, the British government passed the Joint Stock Companies Act, which created a register of companies and allowed any legal person, for a fee, to register a company, which existed as a separate legal person. Subsequently, the law was extended to limit the liability of individual shareholders in event of business failure, and both Germany and the United States added their own unique extensions to what we see today as the doctrine of corporate personhood.
(Of course, there were plenty of other things happening between the sixteenth and twenty-first centuries that changed the shape of the world we live in. I've skipped changes in agricultural productivity due to energy economics, which finally broke the Malthusian trap our predecessors lived in. This in turn broke the long term cap on economic growth of around 0.1% per year in the absence of famine, plagues, and wars depopulating territories and making way for colonial invaders. I've skipped the germ theory of diseases, and the development of trade empires in the age of sail and gunpowder that were made possible by advances in accurate time-measurement. I've skipped the rise and—hopefully—decline of the pernicious theory of scientific racism that underpinned western colonialism and the slave trade. I've skipped the rise of feminism, the ideological position that women are human beings rather than property, and the decline of patriarchy. I've skipped the whole of the Enlightenment and the age of revolutions! But this is a technocentric congress, so I want to frame this talk in terms of AI, which we all like to think we understand.)
Here's the thing about corporations: they're clearly artificial, but legally they're people. They have goals, and operate in pursuit of these goals. And they have a natural life cycle. In the 1950s, a typical US corporation on the S&P 500 index had a lifespan of 60 years, but today it's down to less than 20 years.
Corporations are cannibals; they consume one another. They are also hive superorganisms, like bees or ants. For their first century and a half they relied entirely on human employees for their internal operation, although they are automating their business processes increasingly rapidly this century. Each human is only retained so long as they can perform their assigned tasks, and can be replaced with another human, much as the cells in our own bodies are functionally interchangeable (and a group of cells can, in extremis, often be replaced by a prosthesis). To some extent corporations can be trained to service the personal desires of their chief executives, but even CEOs can be dispensed with if their activities damage the corporation, as Harvey Weinstein found out a couple of months ago.
Finally, our legal environment today has been tailored for the convenience of corporate persons, rather than human persons, to the point where our governments now mimic corporations in many of their internal structures.
What do AIs want?
What do our current, actually-existing AI overlords want?
Elon Musk—who I believe you have all heard of—has an obsessive fear of one particular hazard of artificial intelligence—which he conceives of as being a piece of software that functions like a brain-in-a-box—namely, the paperclip maximizer. A paperclip maximizer is a term of art for a goal-seeking AI that has a single priority, for example maximizing the number of paperclips in the universe. The paperclip maximizer is able to improve itself in pursuit of that goal but has no ability to vary its goal, so it will ultimately attempt to convert all the metallic elements in the solar system into paperclips, even if this is obviously detrimental to the wellbeing of the humans who designed it.
Unfortunately, Musk isn't paying enough attention. Consider his own companies. Tesla is a battery maximizer—an electric car is a battery with wheels and seats. SpaceX is an orbital payload maximizer, driving down the cost of space launches in order to encourage more sales for the service it provides. Solar City is a photovoltaic panel maximizer. And so on. All three of Musk's very own slow AIs are based on an architecture that is designed to maximize return on shareholder investment, even if by doing so they cook the planet the shareholders have to live on. (But if you're Elon Musk, that's okay: you plan to retire on Mars.)
The problem with corporations is that despite their overt goals—whether they make electric vehicles or beer or sell life insurance policies—they are all subject to instrumental convergence insofar as they all have a common implicit paperclip-maximizer goal: to generate revenue. If they don't make money, they are eaten by a bigger predator or they go bust. Making money is an instrumental goal—it's as vital to them as breathing is for us mammals, and without pursuing it they will fail to achieve their final goal, whatever it may be. Corporations generally pursue their instrumental goals—notably maximizing revenue—as a side-effect of the pursuit of their overt goal. But sometimes they try instead to manipulate the regulatory environment they operate in, to ensure that money flows towards them regardless.
Human tool-making culture has become increasingly complicated over time. New technologies always come with an implicit political agenda that seeks to extend their use; governments react by legislating to control them, and sometimes we end up with industries indulging in legal duels.
For example, consider the automobile. You can't have mass automobile transport without gas stations and fuel distribution pipelines. These in turn require access to whoever owns the land the oil is extracted from—and before you know it, you end up with a permanent occupation force in Iraq and a client dictatorship in Saudi Arabia. Closer to home, automobiles imply jaywalking laws and drink-driving laws. They affect town planning regulations and encourage suburban sprawl, the construction of human infrastructure on the scale required by automobiles, not pedestrians. This in turn is bad for competing transport technologies like buses or trams (which work best in cities with a high population density).
To get these laws in place, providing an environment conducive to doing business, corporations spend money on political lobbyists—and, when they can get away with it, on bribes. Bribery need not be blatant, of course. For example, the reforms of the British railway network in the 1960s dismembered many branch services and coincided with a surge in road building and automobile sales. These reforms were orchestrated by Transport Minister Ernest Marples, who was, on paper, purely a politician. However, Marples accumulated a considerable personal fortune during this time through his shares in a motorway construction corporation. (So, no conflict of interest there!)
The automobile industry in isolation isn't a pure paperclip maximizer. But if you look at it in conjunction with the fossil fuel industries, the road-construction industry, the accident insurance industry, and so on, you begin to see the outline of a paperclip-maximizing ecosystem that invades far-flung lands and grinds up and kills around one and a quarter million people per year—that's the global death toll from automobile accidents according to the World Health Organization: it rivals the First World War on an ongoing basis—as side-effects of its drive to sell you a new car.
Automobiles are not, of course, a total liability. Today's cars are regulated stringently for safety and, in theory, to reduce toxic emissions: they're fast, efficient, and comfortable. We can thank legally mandated regulations for this, of course. Go back to the 1970s and cars didn't have crumple zones. Go back to the 1950s and cars didn't come with seat belts as standard. In the 1930s, indicators—turn signals—and brakes on all four wheels were optional, and your best hope of surviving a 50km/h crash was to be thrown clear of the car and land somewhere without breaking your neck. Regulatory agencies are our current political systems' tool of choice for preventing paperclip maximizers from running amok. But unfortunately they don't always work.
One failure mode that you should be aware of is regulatory capture, where regulatory bodies are captured by the industries they control. Ajit Pai, head of the American Federal Communications Commission who just voted to eliminate net neutrality rules, has worked as Associate General Counsel for Verizon Communications Inc, the largest current descendant of the Bell telephone system monopoly. Why should someone with a transparent interest in a technology corporation end up in charge of a regulator for the industry that corporation operates within? Well, if you're going to regulate a highly complex technology, you need to recruit your regulators from among those people who understand it. And unfortunately most of those people are industry insiders. Ajit Pai is clearly very much aware of how Verizon is regulated, and wants to do something about it—just not necessarily in the public interest. When regulators end up staffed by people drawn from the industries they are supposed to control, they frequently end up working with their former officemates to make it easier to turn a profit, either by raising barriers to keep new insurgent companies out, or by dismantling safeguards that protect the public.
Another failure mode is regulatory lag, when a technology advances so rapidly that regulations are laughably obsolete by the time they're issued. Consider the EU directive requiring cookie notices on websites, to caution users that their activities were tracked and their privacy might be violated. This would have been a good idea, had it shown up in 1993 or 1996, but unfortunately it didn't show up until 2011, by which time the web was vastly more complex. Fingerprinting and tracking mechanisms that had nothing to do with cookies were already widespread by then. Tim Berners-Lee observed in 1995 that five years' worth of change was happening on the web for every twelve months of real-world time; by that yardstick, the cookie law came out nearly a century too late to do any good.
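The yardstick here is simple multiplication; a throwaway sketch (my own arithmetic, not anything from the original observation) makes the "nearly a century" figure explicit:

```python
# Back-of-the-envelope check of the regulatory-lag arithmetic.
# Berners-Lee's 1995 estimate: ~5 "web years" pass per calendar year.
WEB_YEARS_PER_CALENDAR_YEAR = 5

def web_years_of_lag(observation_year: int, law_year: int) -> int:
    """Calendar lag converted into web-time at the 1995 rate."""
    return (law_year - observation_year) * WEB_YEARS_PER_CALENDAR_YEAR

# 1995 observation, 2011 cookie directive: 16 calendar years of lag.
print(web_years_of_lag(1995, 2011))  # 80 web years -- "nearly a century"
```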
Again, look at Uber. This month the European Court of Justice ruled that Uber is a taxi service, not just a web app. This is arguably correct; the problem is, Uber has spread globally since it was founded eight years ago, subsidizing its drivers to put competing private hire firms out of business. Whether this is a net good for society is arguable; the problem is, a taxi driver can get awfully hungry if she has to wait eight years for a court ruling against a predator intent on disrupting her life.
So, to recap: firstly, we already have paperclip maximizers (and Musk's AI alarmism is curiously mirror-blind). Secondly, we have mechanisms for keeping them in check, but they don't work well against AIs that deploy the dark arts—especially corruption and bribery—and they're even worse against true AIs that evolve too fast for human-mediated mechanisms like the Law to keep up with. Finally, unlike the naive vision of a paperclip maximizer, existing AIs have multiple agendas—their overt goal, but also profit-seeking, expansion into new areas, and accommodating the desires of whoever is currently in the driver's seat.
How it all went wrong
It seems to me that our current political upheavals are best understood as arising from the capture of post-1917 democratic institutions by large-scale AIs. Everywhere I look I see voters protesting angrily against an entrenched establishment that seems determined to ignore the wants and needs of their human voters in favour of the machines. The Brexit upset was largely the result of a protest vote against the British political establishment; the election of Donald Trump likewise, with a side-order of racism on top. Our major political parties are led by people who are compatible with the system as it exists—a system that has been shaped over decades by corporations distorting our government and regulatory environments. We humans are living in a world shaped by the desires and needs of AIs, forced to live on their terms, and we are taught that we are valuable only insofar as we contribute to the rule of the machines.
Now, this is CCC, and we're all more interested in computers and communications technology than this historical crap. But as I said earlier, history is a secret weapon if you know how to use it. What history is good for is enabling us to spot recurring patterns in human behaviour that repeat across time scales outside our personal experience—decades or centuries apart. If we look at our historical very slow AIs, what lessons can we learn from them about modern AI—the flash flood of unprecedented deep learning and big data technologies that have overtaken us in the past decade?
We made a fundamentally flawed, terrible design decision back in 1995, that has damaged democratic political processes, crippled our ability to truly understand the world around us, and led to the angry upheavals of the present decade. That mistake was to fund the build-out of the public world wide web—as opposed to the earlier, government-funded corporate and academic internet—by monetizing eyeballs via advertising revenue.
(Note: Cory Doctorow has a contrarian thesis: The dotcom boom was also an economic bubble because the dotcoms came of age at a tipping point in financial deregulation, the point at which the Reagan-Clinton-Bush reforms that took the Depression-era brakes off financialization were really picking up steam. That meant that the tech industry's heady pace of development was the first testbed for treating corporate growth as the greatest virtue, built on the lie of the fiduciary duty to increase profit above all other considerations. I think he's entirely right about this, but it's a bit of a chicken-and-egg argument: we wouldn't have had a commercial web in the first place without a permissive, deregulated financial environment. My memory of working in the dot-com 1.0 bubble is that, outside of a couple of specific environments (the Silicon Valley area and the Boston-Cambridge corridor) venture capital was hard to find until late 1998 or thereabouts: the bubble's initial inflation was demand-driven rather than capital-driven, as the non-tech investment sector was late to the party. Caveat: I didn't win the lottery, so what do I know?)
The ad-supported web that we live with today wasn't inevitable. If you recall the web as it was in 1994, there were very few ads at all, and not much in the way of commerce. (What ads there were were mostly spam, on usenet and via email.) 1995 was the year the world wide web really came to public attention in the anglophone world and consumer-facing websites began to appear. Nobody really knew how this thing was going to be paid for (the original dot com bubble was largely about working out how to monetize the web for the first time, and a lot of people lost their shirts in the process). And the naive initial assumption was that the transaction cost of setting up a TCP/IP connection over modem was too high to be supported by per-use microbilling, so we would bill customers indirectly, by shoving advertising banners in front of their eyes and hoping they'd click through and buy something.
Unfortunately, advertising is an industry. Which is to say, it's the product of one of those old-fashioned very slow AIs I've been talking about. Advertising tries to maximize its hold on the attention of the minds behind each human eyeball: the coupling of advertising with web search was an inevitable outgrowth. (How better to attract the attention of reluctant subjects than to find out what they're really interested in seeing, and sell ads that relate to those interests?)
The problem with applying the paperclip maximizer approach to monopolizing eyeballs, however, is that eyeballs are a scarce resource. There are only 168 hours in every week in which I can gaze at banner ads. Moreover, most ads are irrelevant to my interests and it doesn't matter how often you flash an ad for dog biscuits at me, I'm never going to buy any. (I'm a cat person.) To make best revenue-generating use of our eyeballs, it is necessary for the ad industry to learn who we are and what interests us, and to target us increasingly minutely in hope of hooking us with stuff we're attracted to.
At this point in a talk I'd usually go into an impassioned rant about the hideous corruption and evil of Facebook, but I'm guessing you've heard it all before so I won't bother. The too-long-didn't-read summary is, Facebook is as much a search engine as Google or Amazon. Facebook searches are optimized for Faces, that is, for human beings. If you want to find someone you fell out of touch with thirty years ago, Facebook probably knows where they live, what their favourite colour is, what size shoes they wear, and what they said about you to your friends all those years ago that made you cut them off.
Even if you don't have a Facebook account, Facebook has a You account—a hole in their social graph with a bunch of connections pointing into it and your name tagged on your friends' photographs. They know a lot about you, and they sell access to their social graph to advertisers who then target you, even if you don't think you use Facebook. Indeed, there's barely any point in not using Facebook these days: they're the social media Borg, resistance is futile.
However, Facebook is trying to get eyeballs on ads, as is Twitter, as is Google. To do this, they fine-tune the content they show you to make it more attractive to your eyes—and by 'attractive' I do not mean pleasant. We humans have an evolved automatic reflex to pay attention to threats and horrors as well as pleasurable stimuli: consider the way highway traffic always slows to a crawl as it is funnelled past an accident site. The algorithms that determine what to show us when we look at Facebook or Twitter take this bias into account. You might react more strongly to a public hanging in Iran than to a couple kissing: the algorithm knows, and will show you whatever makes you pay attention.
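A minimal sketch shows why a valence-blind engagement ranker behaves this way. The posts and scores below are invented for illustration; this is not any platform's actual algorithm:

```python
# Hypothetical feed ranker: optimizes predicted attention alone,
# so it is indifferent to whether the reaction is joy or outrage.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    p_click: float   # predicted probability the user engages
    valence: float   # -1.0 (horrifying) .. +1.0 (pleasant)

def rank_feed(posts: list) -> list:
    # Note what is missing: valence never enters the score.
    return sorted(posts, key=lambda p: p.p_click, reverse=True)

feed = rank_feed([
    Post("couple kissing", p_click=0.10, valence=0.9),
    Post("public hanging", p_click=0.35, valence=-1.0),
])
print(feed[0].text)  # the horror wins the top slot
```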
This brings me to another interesting point about computerized AI, as opposed to corporatized AI: AI algorithms tend to embody the prejudices and beliefs of the programmers. A couple of years ago I ran across an account of a webcam, developed by mostly-pale-skinned Silicon Valley engineers, that had difficulty focusing or achieving correct colour balance when pointed at dark-skinned faces. That's an example of human-programmer-induced bias. But with today's deep learning, bias can creep in via the data sets the neural networks are trained on. Microsoft's first foray into a conversational chatbot driven by machine learning, Tay, was yanked offline within days because 4chan- and Reddit-based trolls discovered they could train it towards racism and sexism for shits and giggles.
Humans may be biased, but at least we're accountable and if someone gives you racist or sexist abuse to your face you can complain (or punch them). But it's impossible to punch a corporation, and it may not even be possible to identify the source of unfair bias when you're dealing with a machine learning system.
AI-based systems that concretize existing prejudices and social outlooks make it harder for activists like us to achieve social change. Traditional advertising works by playing on the target customer's insecurity and fear as much as on their aspirations, which in turn play on the target's relationship with their surrounding cultural matrix. Fear of loss of social status and privilege is a powerful stimulus, and fear and xenophobia are useful tools for attracting eyeballs.
What happens when we get pervasive social networks with learned biases against, say, feminism or Islam or melanin? Or deep learning systems trained on data sets contaminated by racist dipshits? Deep learning systems like the ones inside Facebook that determine which stories to show you to get you to pay as much attention as possible to the adverts?
I think you already know the answer to that.
Look to the future (it's bleak!)
Now, if this is sounding a bit bleak and unpleasant, you'd be right. I write sci-fi, you read or watch or play sci-fi; we're acculturated to think of science and technology as good things that make our lives better.
But plenty of technologies have, historically, been heavily regulated or even criminalized for good reason, and once you get past the reflexive indignation at any criticism of technology and progress, you might agree that it is reasonable to ban individuals from owning nuclear weapons or nerve gas. Less obviously: they may not be weapons, but we've banned chlorofluorocarbon refrigerants because they were building up in the high stratosphere and destroying the ozone layer that protects us from UV-B radiation. And we banned tetraethyl lead additive in gasoline, because it poisoned people and led to a crime wave.
Nerve gas and leaded gasoline were 1930s technologies, promoted by 1930s corporations. Halogenated refrigerants and nuclear weapons are totally 1940s, and intercontinental ballistic missiles date to the 1950s. I submit that the 21st century is throwing up dangerous new technologies—just as our existing strategies for regulating very slow AIs have broken down.
Let me give you four examples—of new types of AI applications—that are going to warp our societies even worse than the old slow AIs of yore have done. This isn't an exhaustive list: these are just examples. We need to work out a general strategy for getting on top of this sort of AI before they get on top of us.
(Note that I do not have a solution to the regulatory problems I highlighted earlier, in the context of AI. This essay is polemical, intended to highlight the existence of a problem and spark a discussion, rather than a canned solution. After all, if the problem was easy to solve it wouldn't be a problem, would it?)
Firstly, political hacking tools: social graph-directed propaganda
Topping my list of dangerous technologies that need to be regulated, this is low-hanging fruit after the electoral surprises of 2016. Cambridge Analytica pioneered the use of deep learning by scanning the Facebook and Twitter social graphs to identify voters' political affiliations. They identified individuals vulnerable to persuasion who lived in electorally sensitive districts, then canvassed them with propaganda that targeted their personal hot-button issues. The tools developed by web advertisers to sell products have now been weaponized for political purposes, and the amount of personal information about our affiliations that we expose on social media makes us vulnerable. Aside from the last US presidential election, there's mounting evidence that the British referendum on leaving the EU was subject to foreign cyberwar attack via weaponized social media, as was the most recent French presidential election.
I'm biting my tongue and trying not to take sides here: I have my own political affiliation, after all. But if social media companies don't work out how to identify and flag micro-targeted propaganda then democratic elections will be replaced by victories for whoever can buy the most trolls. And this won't simply be billionaires like the Koch brothers and Robert Mercer in the United States throwing elections to whoever will hand them the biggest tax cuts. Russian military cyberwar doctrine calls for the use of social media to confuse and disable perceived enemies, in addition to the increasingly familiar use of zero-day exploits for espionage via spear phishing and distributed denial of service attacks on infrastructure (which are practiced by western agencies as well). Sooner or later, the use of propaganda bot armies in cyberwar will go global, and at that point, our social discourse will be irreparably poisoned.
(By the way, I really hate the cyber- prefix; it usually indicates that the user has no idea what they're talking about. Unfortunately the term 'cyberwar' seems to have stuck. But I digress.)
Secondly, an adjunct to deep learning targeted propaganda is the use of neural network generated false video media.
We're used to Photoshopped images these days, but faking video and audio is still labour-intensive, right? Unfortunately, that's a nope: we're seeing first generation AI-assisted video porn, in which the faces of film stars are mapped onto those of other people in a video clip using software rather than a laborious human process. (Yes, of course porn is the first application: Rule 34 of the Internet applies.) Meanwhile, we have WaveNet, a system for generating realistic-sounding speech in the voice of a human speaker the neural network has been trained to mimic. This stuff is still geek-intensive and requires relatively expensive GPUs. But in less than a decade it'll be out in the wild, and just about anyone will be able to fake up a realistic-looking video of someone they don't like doing something horrible.
We're already seeing alarm over bizarre YouTube channels that attempt to monetize children's TV brands by scraping the video content off legitimate channels and adding their own advertising and keywords. Many of these channels are shaped by paperclip-maximizer advertising AIs that are simply trying to maximize their search ranking on YouTube. Add neural network driven tools for inserting Character A into Video B to click-maximizing bots and things are going to get very weird (and nasty). And they're only going to get weirder when these tools are deployed for political gain.
We tend to evaluate the inputs from our eyes and ears much less critically than what random strangers on the internet tell us—and we're already too vulnerable to fake news as it is. Soon they'll come for us, armed with believable video evidence. The smart money says that by 2027 you won't be able to believe anything you see in video unless there are cryptographic signatures on it, linking it back to the device that shot the raw feed—and you know how good most people are at using encryption? The dumb money is on total chaos.
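What such a provenance scheme might look like, in miniature: hash the raw frames and sign the hash with a key held by the camera. This is an illustrative sketch only, using an HMAC with a shared key as a stand-in; real schemes would use asymmetric signatures in a secure element, and the key name here is hypothetical:

```python
# Toy sketch of device-signed video: any edit to the raw frames
# breaks the signature, so provenance can be checked after the fact.
import hashlib
import hmac

DEVICE_KEY = b"secret-baked-into-the-camera"  # hypothetical

def sign_clip(raw_frames: bytes) -> str:
    digest = hashlib.sha256(raw_frames).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify_clip(raw_frames: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign_clip(raw_frames), signature)

clip = b"\x00\x01\x02"  # stand-in for a raw video feed
sig = sign_clip(clip)
print(verify_clip(clip, sig))              # True
print(verify_clip(clip + b"tamper", sig))  # False -- the edit is detectable
```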
Paperclip maximizers that focus on eyeballs are so 20th century. Advertising as an industry can only exist because of a quirk of our nervous system—that we are susceptible to addiction. Be it tobacco, gambling, or heroin, we recognize addictive behaviour when we see it. Or do we? It turns out that the human brain's reward feedback loops are relatively easy to game. Large corporations such as Zynga (Farmville) exist solely because of it; free-to-use social media platforms like Facebook and Twitter are dominant precisely because they are structured to reward frequent interaction and to generate emotional responses (not necessarily positive emotions—anger and hatred are just as good when it comes to directing eyeballs towards advertisers). "Smartphone addiction" is a side-effect of advertising as a revenue model: frequent short bursts of interaction keep us coming back for more.
Thanks to deep learning, neuroscientists have mechanised the process of making apps more addictive. Dopamine Labs is one startup that provides tools to app developers to make any app more addictive, as well as to reduce the desire to continue a behaviour if it's undesirable. It goes a bit beyond automated A/B testing; A/B testing allows developers to plot a binary tree path between options, but true deep learning driven addictiveness maximizers can optimize for multiple attractors simultaneously. Now, Dopamine Labs seem, going by their public face, to have ethical qualms about the misuse of addiction maximizers in software. But neuroscience isn't a secret, and sooner or later some really unscrupulous people will try to see how far they can push it.
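The contrast between a binary A/B choice and a multi-attractor optimizer can be illustrated with a toy bandit loop. The variant names and engagement rates below are invented, and this is emphatically not Dopamine Labs' actual tooling, just the textbook technique:

```python
# Epsilon-greedy bandit: explores many UI "attractors" at once and
# shifts traffic toward whichever currently maximizes engagement,
# rather than resolving one A-vs-B question at a time.
import random

def epsilon_greedy(variants, trials=10_000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    pulls = {v: 0 for v in variants}
    wins = {v: 0 for v in variants}
    for _ in range(trials):
        if rng.random() < epsilon:  # explore: try a random variant
            v = rng.choice(list(variants))
        else:                       # exploit: best empirical rate so far
            v = max(variants, key=lambda k: wins[k] / pulls[k] if pulls[k] else 0.0)
        pulls[v] += 1
        wins[v] += rng.random() < variants[v]  # simulated engagement event
    return max(variants, key=lambda k: wins[k] / max(pulls[k], 1))

# Invented per-variant engagement rates the optimizer must discover:
variants = {"red_badge": 0.02, "streak_counter": 0.05, "pull_to_refresh": 0.11}
print(epsilon_greedy(variants))  # typically converges on the stickiest variant
```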
Let me give you a more specific scenario.
Apple have put a lot of effort into making realtime face recognition work with the iPhone X. You can't fool an iPhone X with a photo or even a simple mask: it does depth mapping to ensure your eyes are in the right place (and can tell whether they're open or closed), and it recognizes your face from the underlying bone structure, through makeup and bruises. It runs continuously, checking pretty much as often as you'd hit the home button on a more traditional smartphone UI, and it can see where your eyeballs are pointing. The purpose of this is to make it difficult for a phone thief to get anywhere if they steal your device, but it means your phone can monitor your facial expressions and correlate them against app usage. Your phone will be aware of precisely what you like to look at on its screen. With addiction-seeking deep learning and neural-network generated images, it is in principle possible to feed you an endlessly escalating payload of arousal-maximizing inputs. It might be Facebook or Twitter messages optimized to produce outrage, or it could be porn generated by AI to appeal to kinks you aren't even consciously aware of. But either way, the app now owns your central nervous system—and you will be monetized.
Finally, I'd like to raise a really hair-raising spectre that goes well beyond the use of deep learning and targeted propaganda in cyberwar.
Back in 2011, an obscure Russian software house launched an iPhone app for pickup artists called Girls around Me. (Spoiler: Apple pulled it like a hot potato when word got out.) The app worked out where the user was via GPS, then queried FourSquare and Facebook for people matching a simple relational search—for single females (per Facebook) who had checked in (or been checked in by their friends) in the vicinity (via FourSquare). The app then displayed their locations on a map, along with links to their social media profiles.
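The relational search involved is trivially easy to reproduce, which is rather the point: the hazard is in the data, not in any clever engineering. A hedged reconstruction with invented records and field names (not the app's actual API calls):

```python
# Toy join of a Facebook-style profile table against FourSquare-style
# check-ins: everything "Girls around Me" needed was a filter and a
# distance test over data the platforms already exposed.
from math import hypot

profiles = {
    "alice": {"gender": "female", "single": True},
    "bob":   {"gender": "male",   "single": True},
}
checkins = [
    {"user": "alice", "x": 1.0, "y": 2.0},
    {"user": "bob",   "x": 9.0, "y": 9.0},
]

def nearby_matches(my_x, my_y, radius=5.0):
    """Join check-in locations against profile attributes."""
    return [
        c["user"]
        for c in checkins
        if hypot(c["x"] - my_x, c["y"] - my_y) <= radius
        and profiles[c["user"]]["gender"] == "female"
        and profiles[c["user"]]["single"]
    ]

print(nearby_matches(0.0, 0.0))  # ['alice']
```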
If they were doing it today the interface would be gamified, showing strike rates and a leaderboard and flagging targets who succumbed to harassment as easy lays. But these days the cool kids and single adults are all using dating apps with a missing vowel in the name: only a creeper would want something like "Girls around Me", right?
Unfortunately there are even nastier uses than scraping social media to find potential victims for serial rapists. Does your social media profile indicate your political or religious affiliation? Nope? Don't worry, Cambridge Analytica can work them out with 99.9% precision just by scanning the tweets and Facebook comments you liked. Add a service that can identify people's affiliations and locations, and you have the beginnings of a flash mob app: one that will show you people like Us and people like Them on a hyper-local map.
Imagine you're young, female, and a supermarket has figured out you're pregnant by analysing the pattern of your recent purchases, like Target back in 2012.
Now imagine that all the anti-abortion campaigners in your town have an app called "babies at risk" on their phones. Someone has paid for the analytics feed from the supermarket and the result is that every time you go near a family planning clinic a group of unfriendly anti-abortion protesters engulfs you.
Or imagine you're male and gay, and the "God Hates Fags" crowd has invented a 100% reliable Gaydar app (based on your Grindr profile) and is getting their fellow travellers to queer-bash gay men only when they're alone or outnumbered 10:1. (That's the special horror of precise geolocation.) Or imagine you're in Pakistan and Christian/Muslim tensions are mounting, or you're in rural Alabama, or ... the possibilities are endless.
Someone out there is working on it: a geolocation-aware, social-media-scraping deep learning application that uses a gamified, competitive interface to reward its "players" for joining in acts of mob violence against whoever the app developer hates. Probably it has an innocuous-seeming but highly addictive training mode to get the users accustomed to working in teams and obeying the app's instructions—think Ingress or Pokemon Go. Then, at some pre-planned zero hour, it switches mode and starts rewarding players for violence—players who have been primed to think of their targets as vermin, by a steady drip-feed of micro-targeted dehumanizing propaganda delivered over a period of months.
And the worst bit of this picture?
Is that the app developer isn't a nation-state trying to disrupt its enemies, or an extremist political group trying to murder gays, Jews, or Muslims; it's just a paperclip maximizer doing what it does—and you are the paper.
"It seems to me that our current political upheavals are best understood as arising from the capture of post-1917 democratic institutions by large-scale AIs." (Read the whole thing is my recommendation)