<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" id="test" type="text/css" href="css_medievale.css">
<title>God is in the machine - TLS</title>
</head>
<body>
<article>
<header>
<figure>
<img src="images/vitali_5/image_1.jpg" alt="An employee demonstrates a “Police Pad” in Tbilisi, Georgia">
<figcaption>An employee demonstrates a “Police Pad”, Tbilisi, Georgia, 2012 © David Mdzinarishvili/Reuters</figcaption>
</figure>
<h1>TLS</h1>
<h2>God is in the machine</h2>
<h3>Carl Miller on the terrifying, hidden reality of Ridiculously Complicated Algorithms</h3>
<h4>Carl Miller</h4>
<h4>August 21, 2018</h4>
</header>
<section>
<p>“I’ll lose my job if anyone knows about this.”</p>
<p>There was a long silence which I didn’t dare to break. I had begged to make this meeting happen. And now the person I had long been trying to meet leaned towards me. “Someone is going to go through your book line by line,” he said, “to try to work out who I am.”</p>
<p>He’d been a talented researcher, an academic, until his friend started a small technology company. He had joined the company and helped it to grow. It eventually became so big that the company had been acquired by one of the tech giants. And so, then, was he.</p>
<p>He was now paid a fortune to help design the algorithms that were central to what the tech giant did. And he had signed solemn legal documents prohibiting him from speaking to me, or to anyone, about his work. But as the years passed, his concern – indeed his guilt – grew. “It’s power without responsibility.” He paused. “There’s so much power, and so little responsibility. This is not notional abstract power. This is real power about day-to-day lives. It’s both material and cultural and financial. The world has to know that this is how it works . . . There’s something rotten in the state of Denmark,” he said, quoting Hamlet a little melodramatically.</p>
<p>So he had decided to take a risk. “If they find out I’m doing this,” he said, “I’ll be marched out of my office and I’ll never work in technology again. That’s the best-case scenario.” He wasn’t just going to talk to me about his work. He was going to show me it.</p>
<p>From his satchel, the researcher pulled out his laptop. He tapped for a few minutes and, with a sense of occasion, turned the screen to face me. “It’s all there.” And there it was: a white screen with instructions neatly arranged in a series of boxes.</p>
<p>“In [3]”, the first step says. “In [8], in [9]”, says the next.</p>
<p>There were words in different colours, some green, some purple, some in red, in bold, in italics. I looked at the researcher, a proud grin spread across his face. There it was. An algorithm that really influenced people’s lives. And it was . . . totally underwhelming.</p>
<p>Twenty-three centuries ago, the Greek mathematician Eratosthenes sat in the Great Library of Alexandria and tried to find a way to identify prime numbers. He wrote every number from one to 100 in ten rows, and crossed out the one. He circled the two, crossed out all the multiples of two, circled the three and continued. He had created an algorithm, in essence something very simple. His ‘sieve’, as it was called, did what all algorithms do. It took an input, followed a series of well-described steps and produced an output. Input, process, output: that’s all an algorithm is, and has ever been.</p>
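<p>The sieve fits in a few lines of modern code. A minimal sketch in Python: the input is the upper limit, the process is the circling and crossing-out, the output is the list of primes.</p>
<pre><code># Eratosthenes' sieve: input, process, output.
def sieve_of_eratosthenes(limit):
    crossed_out = [True, True] + [False] * (limit - 1)  # 0 and 1 are not prime
    for n in range(2, int(limit ** 0.5) + 1):
        if not crossed_out[n]:  # n survived every earlier pass: it is prime
            for multiple in range(n * n, limit + 1, n):
                crossed_out[multiple] = True  # cross out every multiple of n
    return [n for n in range(2, limit + 1) if not crossed_out[n]]

print(sieve_of_eratosthenes(100))  # the 25 primes up to 100
</code></pre>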
<p>Throughout their history, algorithms have been built to solve problems. They have been used to make astronomical calculations, build clocks and turn secret information into code. “Up till the nineties,” the researcher said, “algorithms still tended to be RSAs – Really Simple Algorithms. Previously it was pretty clear how stuff happened. You take the original Google algorithm. It was basically a popularity study. You’d just surface (or rank more highly) things that people clicked on more. In general, the people who made it understood how the thing worked.” Some algorithms were more complicated than others, but the input > process > output was generally transparent and understandable, at least to the people who built and used them.</p>
<p>The algorithm he had brought up on his screen was built to solve a problem, too. It ordered and organized reality in an important way, trying to separate what was important from what was irrelevant. But it was different from the RSAs. “It’s way more complicated than it looks,” he said, hovering a pencil over some of the short words in square brackets. “But I need to show you why.” And with that, we started to journey through his creation.</p>
<p>First, it imported “libraries”: pre-written bundles of definitions, instructions and actions. Next, he showed me how it brought in data. “There’s a bit of a macho thing about feeding your algorithms as much data as possible,” he said. “The more data you feed it, the better. We work with a lot more data than most teams, actually.” He drew his cursor longingly over the script that pulled in the huge, churning quantities of data that fed the algorithm. Gigabytes, terabytes, petabytes of data were ordered, there on the page.</p>
<p>By instruction fifteen, “functions” were added. “A function is a little factory,” the researcher said, highlighting each one. “These are the building blocks of the algorithm: a subalgorithm, basically.” He showed how he built each of these building blocks, stringing together phrases like “get component”, “filter by station”, “sort_nodes_in_degree”. Then, the main stage. “This is the business end of the algorithm,” he said. “This is where I stack up the building blocks.” He was supposed to be showing me how the functions were connected. But I was lost. “I don’t get it,” I said. “How can you keep track of what’s doing what?”</p>
<p>Each of these sub-algorithmic functions, his building blocks, was really another complete house. Each was a complex tangle of instructions and processes, and some were themselves made from sub-algorithmic building blocks. The screen looked simple, but I was looking at a blueprint of building blocks within building blocks within building blocks: millions of instructions in just a few pages of code. Its builder was sitting next to me, but even he struggled to explain the stages, retracing steps and correcting himself as he tried to hold the layer upon layer of abstraction in his head. He seemed to finish, but then paused. “I don’t really remember where that last bit comes from,” he said.</p>
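<p>In miniature, the structure he was describing looks something like the sketch below. The function names are the ones from his screen; their bodies, and the toy graph they run on, are invented here purely for illustration.</p>
<pre><code># Illustrative only: invented bodies behind the names from the researcher's screen.
# A tiny graph: each node maps to its set of neighbours.
graph = {"a": {"b", "c"}, "b": {"a"}, "c": {"a", "d"}, "d": {"c"}, "e": set()}

def get_component(graph, start):
    """Building block 1: collect every node reachable from `start`."""
    seen, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        if node not in seen:
            seen.add(node)
            frontier.extend(graph[node])
    return seen

def filter_by_station(nodes, stations):
    """Building block 2: keep only the nodes tagged as 'stations' (hypothetical)."""
    return {n for n in nodes if n in stations}

def sort_nodes_in_degree(graph, nodes):
    """Building block 3: rank nodes by how many neighbours they have."""
    return sorted(nodes, key=lambda n: len(graph[n]), reverse=True)

# The "business end": the building blocks stacked into one pipeline.
component = get_component(graph, "a")
stations = filter_by_station(component, {"a", "c", "d"})
print(sort_nodes_in_degree(graph, stations))  # e.g. ['a', 'c', 'd']
</code></pre>
<p>Each block here is a handful of lines; on the researcher’s screen, each was itself built from further blocks, which is where the millions of instructions hide.</p>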
<p>The researcher knew, of course, what data he’d fed into the process. He knew why he’d designed it, the problem it was trying to solve and the outputs that it produced. However, after he’d been trying to explain it for over an hour, he sat back in his chair, exhausted. “Yes, as you can see, the gap between input and output is difficult to understand,” he said. He’d flooded the algorithm with a huge amount of information, “a trend”, he said, because in the tech giant he could, and everyone did. But the amount of data meant it was hard to tell what the salient inputs within it were. “From a human perspective you’re not sure which of the inputs is significant; it’s hard to know what is actually driving the outputs. It’s hard to trace back, as a human, to know why a decision was made.”</p>
<p>Within his tech giant, algorithms rarely stand alone. Instead, they exist within webs. “I rely”, he said, “on signals that are produced by other algorithms.” His algorithm was fed by outputs that were shaped by other algorithms. It was like a car assembly line. He, like his colleagues, worked on a small, specific part of a much larger process.</p>
<p>The algorithm was also constantly changing. The data inputs were flowing into the algorithm in real time, but the actual weights, measures and trade-offs that the algorithm made weren’t static either. Some of the functions that the researcher had woven in used machine learning – techniques where the machine constantly learned and adapted to what the most important patterns, correlations and relationships were. It meant that the algorithm was constantly changing and moving as the world moved around it, and its diet of data changed to reflect that.</p>
<p>We sat there, looking at the computer, his creation laid out in multi-coloured type. “This is all to do with complexity,” he said contemplatively. “Complexity of input. Complexity of analysis. Complexity of how outputs are combined, structured and used.” One of the reasons that he’d been employed to build a process like this was exactly because it could handle complexity by being complex itself. It grasped the blinding number of factors, signals and influences that bounced off each other at every moment in ways that we simply cannot.</p>
<p>Algorithms have changed, from Really Simple to Ridiculously Complicated. They are capable of accomplishing tasks and tackling problems that they’ve never been able to do before. They are able, really, to handle an unfathomably complex world better than a human can. But exactly because they can, the way they work has become unfathomable too. Inputs loop from one algorithm to the next; data presses through more instructions, more code. The complexity, dynamism, the sheer not-understandability of the algorithm means that there is a middle part – between input and output – where it is possible that no one knows exactly what they’re doing. The algorithm learns whatever it learns. “The reality is, professionally, I only look under the hood when it goes wrong. And it can be physically impossible to understand what has actually happened.”</p>
<p>With a triumphant flick of his wrist, the researcher tapped a key and the algorithm began. Twenty seconds later, it was finished. There, in black and white, was an output. One, of course, that I cannot specifically describe, but an output that many of us use every day. The algorithm had produced a kind of reality, really – one that we make decisions from, that can even change our lives.</p>
<p>The researcher scrolled through the bundle of instructions, and changed a single one to a two. A single value. The algorithm reran, and reality popped out again, but this time, a quarter of the results had ceased to exist.</p>
<p>“OK,” I said, “what happened there? Why did you change it? You know the two is wrong. But how do you know the one is right?”</p>
<p>“That”, he said, gesticulating at the sabotaged result, “is the point. It’s a heuristic. I tried it, and it seemed to work. Then I tested it, and the result looked right. I can’t say the one is true. I can only say that it passed minimum evaluation criteria. The whole algorithm is full of parameters that could have been something else. Truth is dead,” he sighed. “There is only output.”</p>
<p>“Who checks these?” I asked.</p>
<p>“Me.”</p>
<p>“What about your boss?”</p>
<p>“You’ve seen how difficult it is to really understand. Sometimes I struggle with it, and I created it. The reality is that if the algorithm looks like it’s doing the job that it’s supposed to do, and people aren’t complaining, then there isn’t much incentive to really comb through all those instructions and those layers of abstracted code to work out what is happening.” The preferences you see online – the news you read, the products you view, the adverts that appear – are all dependent on values that don’t necessarily have to be what they are. They are not true, they’ve just passed minimum evaluation criteria.</p>
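<p>His demonstration is easy to mimic in miniature. In the invented sketch below, nothing of the real algorithm survives; it only shows how a single heuristic cut-off, a one rather than a two, can decide how much of the output exists at all.</p>
<pre><code># Illustrative only: one heuristic parameter gating the output.
scores = [0.4, 1.2, 1.7, 2.3, 0.9, 3.1, 1.1, 2.8]

def run(min_score):
    # The cut-off passed evaluation once; no one can say it is "true".
    return [s for s in scores if s >= min_score]

print(len(run(1)))  # 6 results
print(len(run(2)))  # 3 results: half the output has ceased to exist
</code></pre>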
<p>Jure Leskovec spoke with a strong Slovenian accent, softened slightly by the rolling Rs I had become used to in California. Jure had spent time at Facebook and as chief scientist at Pinterest before moving back to academia. We were sitting in his office in Stanford, which, like the other centres of tech in California, seemed to be expanding rapidly. As we spoke, clouds of hot, white dust drifted up past his window from drilling below.</p>
<p>He grabbed a pen and sprang towards an enormous whiteboard that took up a full wall of his office. His latest work was building an algorithm to help criminal court judges make better decisions over whether to grant bail. “You have a judge,” he said, drawing a large, black rectangle on the board with a J in it, “and a defendant. The judge is trying to make a single determination: if bailed, will the defendant commit crime or no crime? So I can train a machine learning algorithm to answer the question, ‘If I release you, will you commit another crime or not?’”</p>
<p>The scribbling on the board became more profuse, as Jure excitedly sketched out the study. He’d gathered criminal records data on people who, when bailed, committed another crime, and other data on people who hadn’t. He also found a way, by comparing lenient and stricter judges, of constructing data on people who were released but who normally would have been locked up, and whether they had committed another crime too.</p>
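<p>In outline, this is a standard supervised-learning setup. A minimal sketch, with invented features and data, and with scikit-learn’s logistic regression standing in for whatever model the study actually used:</p>
<pre><code># Illustrative only: a binary classifier trained on past bail outcomes.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a released defendant: [age, prior offences, failed appearances]
X = np.array([[19, 4, 2], [45, 0, 0], [31, 1, 0], [23, 6, 3], [52, 2, 1]])
y = np.array([1, 0, 0, 1, 0])  # 1 = committed another crime after release

model = LogisticRegression().fit(X, y)

# "If I release you, will you commit another crime or not?"
defendant = np.array([[28, 3, 1]])
print(model.predict_proba(defendant)[0, 1])  # estimated probability of reoffending
</code></pre>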
<p>“The point”, he said, “is that our algorithm outperforms human judges by 30 per cent. So far, these machine learning algorithms have mostly been used in, y’know, recommendations, the online world. And I would say that these types of domains are low stakes. You might get a bad ad. You have a bad Friday night because you’ve watched the wrong movie. That’s the worst that can happen to you. But if you think about applying these algorithms to high-stakes domains –”</p>
<p>“Which is increasingly happening?” I cut in.</p>
<p>“Which I think is increasingly happening. Then you have to make sure that the methods that we develop, and the standards about the way we use these methods and the way we verify them, are incredibly rigorous.”</p>
<p>Already, according to the <em>Wall Street Journal</em>, at least fifteen states in the US use automated risk-assessment tools to aid judges in making parole decisions. Predpol is one of several companies that use algorithms to predict where crime is likely to happen in the future on the basis of crimes committed in the past. Pegged is a company that offers algorithmic technology (powered by artificial intelligence and fuelled by huge amounts of data) to help find the best candidates for any particular job. Algorithms are being used as contract negotiators, making split-second decisions over which terms to offer and accept. Algorithms are not only becoming more complex; they are also taking on more and more important jobs.</p>
<p>“I think there is a huge revolution to come,” Jure said. “In how decisions are made in society.” He wasn’t saying that algorithms should take over, only that they should be used to support human decisions. “This bail example – I can say algorithms do better. I’m advocating, let’s use these algorithms to help the human judges. They have a really hard time; they have like a minute to make a decision. They often have no feedback on whether they made the right decision, and there is no knowledge-sharing between the judges. I think it’s clear that when human and machine have access to the same data, machine will beat the human. We see this over and over. Just give it enough data.” Humans have given agency, genuine decisional power, to processes that are so complex they are hidden.</p>
<p>“Weapons of math destruction” is how the writer Cathy O’Neil describes the nasty and pernicious kinds of algorithms that are not subject to the same challenges that human decision-makers are. Parole algorithms (not Jure’s) can bias decisions on the basis of income or (indirectly) ethnicity. Recruitment algorithms can reject candidates on the basis of mistaken identity. In some circumstances, such as policing, they might create feedback loops: police are sent into areas where more crime has been recorded, which in turn causes more crime to be detected there.</p>
<p>The problem is that in many cases, we simply don’t know. The researcher had taken a personal risk to show me what he had created, because his algorithm, like most that really affect us, is proprietary and hidden; they are expensive pieces of intellectual property that we cannot understand, and we cannot challenge. A “black box society”, as the academic Frank Pasquale describes it: a society harmed by a whole new kind of secrecy that obscures the automated judgements that affect our lives.</p>
<p>If, as Jure suspects, machine judgement will become measurably better than human judgement for important decisions, the argument for using it will only grow stronger. And somewhere in that gap between inputs and outputs – the actual decision-making part of the process – is something that can shape our lives in meaningful ways yet has become less and less understandable.</p>
<p>“We need”, Jure said emphatically, “to step up and come up with the means to evaluate – vet – algorithms in unbiased ways. We need to be able to interpret and explain their decisions. We don’t want an optimal algorithm. We want one simple enough that an expert can look at it and say nothing crazy is happening here. I think we need to get serious about how do we get these things ready for societal deployment, for high-stakes decision environments? How do we debug these things to ensure some level of quality?”</p>
<p>There is something happening here that is deeper than any single algorithm. They are at the forefront of what, at times, appears to be almost a new philosophy. “God is the machine,” the researcher told me. “The black box is the truth. If it works, it works. We shouldn’t even try to work out what the machine is spitting out – they’ll pick up patterns we won’t even know about.”</p>
</section>
<footer>
<p>An abridged extract from <em>The Death of the Gods: The new global power grab</em> by Carl Miller. Published by William Heinemann, £20. © Carl Miller, 2018.</p>
</footer>
</article>
</body>
</html>