
Targeting Meritocracy



Prospect Magazine writes about the problem with meritocracy. First Things thinks meritocracy is killing America. Feminist Philosophers comes out against meritocracy. The Guardian says “down with meritocracy”. Vox calls for an attack on the false god of meritocracy. There’s even an Against Meritocracy book. Given that meritocracy seems almost tautologically good (doesn’t it just mean positions going to those who deserve them?), there sure do seem to be a lot of people against it.

Some of these people are just being pointlessly edgy. The third article seems to admit that a true meritocracy would be a good thing, but argues that we don’t have one right now. This hardly seems “against meritocracy”, any more than saying we don’t have full racial equality right now means you’re “against racial equality”, but whatever, I guess you’ve got to get clicks somehow.

The other articles actually mean it. Their argument seems to be gesturing at the idea that elites send their kids to private schools, where they get all A+s and end up as president of the Junior Strivers Club. Then they go to Harvard and dazzle their professors with their sparkling wit and dapper suits. Then they get hired right out of college to high-paying management positions at Chase-Bear-Goldman-Sallie-Manhattan-Stearns-Sachs-Mae-FEDGOV. Then they eat truffle-flavored caviar all day and tell each other “Unlike past generations of elites, we are meritocrats who truly deserve our positions, on account of our merit”, as the poor gnash their teeth outside.

Grant that this is all true, and that it’s bad. Does that mean we should be against meritocracy?


There’s a weird assumption throughout all these articles, that meritocracy is founded on the belief that smart people deserve good jobs as a reward for being smart. Freddie de Boer, in his review of yet another anti-meritocracy book, puts it best:

I reject meritocracy because I reject the idea of human deserts. I don’t believe that an individual’s material conditions should be determined by what he or she “deserves,” no matter the criteria and regardless of the accuracy of the system contrived to measure it. I believe an equal best should be done for all people at all times.

More practically, I believe that anything resembling an accurate assessment of what someone deserves is impossible, inevitably drowned in a sea of confounding variables, entrenched advantage, genetic and physiological tendencies, parental influence, peer effects, random chance, and the conditions under which a person labors. To reflect on the immateriality of human deserts is not a denial of choice; it is a denial of self-determination. Reality is indifferent to meritocracy’s perceived need to “give people what they deserve.”

I think this is both entirely true and entirely missing the point. The intuition behind meritocracy is this: if your life depends on a difficult surgery, would you prefer the hospital hire a surgeon who aced medical school, or a surgeon who had to complete remedial training to barely scrape by with a C-? If you prefer the former, you’re a meritocrat with respect to surgeons. Generalize a little, and you have the argument for being a meritocrat everywhere else.

The Federal Reserve making good versus bad decisions can be the difference between an economic boom or a recession, and ten million workers getting raises or getting laid off. When you’ve got that much riding on a decision, you want the best decision-maker possible – that is, you want to choose the head of the Federal Reserve based on merit.

This has nothing to do with fairness, deserts, or anything else. If some rich parents pay for their unborn kid to have experimental gene therapy that makes him a superhumanly-brilliant economist, and it works, and through no credit of his own he becomes a superhumanly-brilliant economist – then I want that kid in charge of the Federal Reserve. And if you care about saving ten million people’s jobs, you do too.


Does this mean we just have to suck it up and let the truffle-eating Harvard-graduating elites at Chase-Bear-Goldman-Sallie-Manhattan-Stearns-Sachs-Mae-FEDGOV lord it over the rest of us?

No. The real solution to this problem is the one none of the anti-meritocracy articles dare suggest: accept that education and merit are two different things!

I work with a lot of lower- and working-class patients, and one complaint I hear again and again is that their organization won’t promote them without a college degree. Some of them have been specifically told “You do great work, and we think you’d be a great candidate for a management position, but it’s our policy that we can’t promote someone to a manager unless they’ve gone to college”. Some of these people are too poor to afford to go to college. Others aren’t sure they could pass; maybe they have great people skills and great mechanical skills but subpar writing-term-paper skills. Though I’ve met the occasional one who goes to college and rises to great heights, usually they sit at the highest non-degree-requiring tier of their organization, doomed to perpetually clean up after the mistakes of their incompetent-but-degree-having managers. These people have loads of merit. In a meritocracy, they’d be up at the top, competing for CEO positions. In our society, they’re stuck.

The problem isn’t just getting into college. It’s that success in college only weakly correlates with success in the real world. I got into medical school because I got good grades in college; those good grades were in my major, philosophy. Someone else who was a slightly worse philosopher would never have made it to medical school; maybe they would have been a better doctor. Maybe someone who didn’t get the best grades in college has the right skills to be a nurse, or a firefighter, or a police officer. If so, we’ll never know; all three of those occupations are gradually shifting to acceptance conditional on college performance. Ulysses Grant graduated in the bottom half of his West Point class, but turned out to be the only guy capable of matching General Lee and winning the Civil War after a bunch of superficially better-credentialed generals failed. If there’s a modern Grant with poor grades but excellent real-world fighting ability, are we confident our modern educationocracy will find him? Are we confident it will even try?

Remember that IQ correlates with chess talent at a modest r = 0.24, and chess champion Garry Kasparov has only a medium-high IQ of 135. If Kasparov’s educational success matched his IQ, he might or might not have made it into Harvard; he certainly wouldn’t have been their star student. And if it was only that kind of educational success that gave spots on some kind of national chess team, Kasparov and a bunch of other grandmasters would never have a chance. Real meritocracy is what you get when you ignore the degrees and check who can actually win a chess game.

One of the few places I see this going well is in programming. Triplebyte (conflict of interest notice: SSC sponsor) asks people who want a programming job to take a test of their programming ability, “no resume needed”. Then it matches them with tech companies that want the kind of programming the applicant is good at. It doesn’t matter whether you were president of the Junior Strivers’ Club in college. It doesn’t matter whether you managed to make it past the gatekeepers trying to keep you out for not exuding the right kind of upper-class vibe. What matters is whether you can code or not. As a result, a bunch of the people I know are poor/transgender/mentally ill people who couldn’t do college for whatever reason, bought some computer science books and studied on their own, and got hired by some big tech company. Programming is almost the only well-paying field where people can still do this, and it doesn’t surprise me that the establishment keeps portraying its culture as uniquely evil and demanding it be dismantled.

I think we should be doing the opposite: reworking every field we can on the same model. Instead of Goldman Sachs hiring whoever does best at Harvard, they should hire people who can demonstrate their knowledge of investing principles or (even better) who can demonstrate an ability to predict the market better than chance. Some of these people will be the academic stars who learned how to do it at Harvard Business School. But a lot of others will be ordinary working-class people who self-studied or who happen to have a gift, the investing equivalents of General Grant and Garry Kasparov.

I don’t think the writers of the anti-meritocracy articles above really disagree with this. I think they’re probably using a different definition of meritocracy where it does mean “rule by well-educated people with prestigious credentials”. But I think it’s important to defend the word “meritocracy” as meaning what it says – decision by merit, rather than by wealth, class, race, or education – and as a good thing. If we let the word be tarnished as some sort of vague signifier of a corrupt system, then it’s too easy for the people who really are in that corrupt system to exploit the decline and fall of the only word we have to signal an alternative. “Oh, you don’t like that all the important jobs go to upper-class people instead of the people who are best at them? You’d prefer they be given out based on merit? But haven’t you read The New Inquiry, First Things, and Vox? Believing in so-called ‘meritocracy’ is totally uncool!” And then we lose one of the only rallying points, one of the few pieces of vocabulary we have to express what’s wrong with the current system and what would be a preferable alternative. We ought to reject the redefinition of “meritocracy” to mean “positions go to people based on their class and ability to go to Harvard”, and reclaim it as meaning exactly what we want instead – positions going to those who are best at them and can best use them to help others. Which is what we want.

(None of this solves one of the biggest problems that the anti-meritocracy folk are complaining about: the fact that there’s a distinction between millionaire Goldman Sachs analysts and starving poor people in the first place. I’m just saying that in a world where somebody has to be an investment banker, a surgeon, or a Federal Reserve chair, I’d rather choose them by true meritocracy than by anything else.)

[see here for more discussion]


Rclone (via Felix Schwarz):

Rclone is a command line program to sync files and directories to and from:

  • Google Drive
  • Amazon S3
  • Openstack Swift / Rackspace cloud files / Memset Memstore
  • Dropbox
  • Google Cloud Storage
  • Amazon Drive
  • Microsoft OneDrive
  • Hubic
  • Backblaze B2
  • Yandex Disk
  • SFTP
  • The local filesystem

There’s also an interesting overview of the features of the different cloud storage systems.
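For a sense of what typical usage looks like, here is a minimal sketch; the remote name `gdrive` and the paths are placeholders you would choose yourself when configuring a remote:

```shell
# One-time interactive setup: creates a named remote (the name "gdrive"
# is whatever you choose; the wizard walks you through authentication).
rclone config

# List the files stored under a path on the remote.
rclone ls gdrive:backup

# Preview what a sync would change before doing it for real.
rclone sync --dry-run ~/Documents gdrive:backup

# Sync a local directory to the remote, making the destination match the
# source (use "rclone copy" instead if you don't want deletions propagated).
rclone sync ~/Documents gdrive:backup
```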


Awaken, a documentary full of arresting imagery


This might be the most beautiful three minutes of your day. Director Tom Lowe is making a feature-length documentary “exploring humanity’s relationship with technology and the natural world” called Awaken. This trailer is stuffed with some of the most arresting imagery I’ve seen in a long time. Perhaps most striking is the moving time lapse footage, which was shot from a helicopter using equipment of Lowe’s own design…I don’t think I’ve seen anything quite like it before.

Awaken will be out next year and, unsurprisingly, is being executive produced by Terrence Malick (Voyage of Time) and Godfrey Reggio (Koyaanisqatsi, etc.).

Tags: Awaken   movies   time lapse   Tom Lowe   trailers   video

Beating cancer is a team sport


Senator John McCain has been diagnosed with a particularly aggressive form of brain cancer. The tumor has been removed and McCain is recovering at home with his family. I wish Senator McCain well and hope for a speedy recovery.

In the wake of his diagnosis, many of those expressing support for McCain reference his considerable personal strength in his fight against cancer. President Obama said:

John McCain is an American hero & one of the bravest fighters I’ve ever known. Cancer doesn’t know what it’s up against. Give it hell, John.

McCain’s daughter Meghan references his toughness and fearlessness in a statement released yesterday. Vice-President Joe Biden expressed similar sentiments on Twitter:

John and I have been friends for 40 years. He’s gotten through so much difficulty with so much grace. He is strong — and he will beat this.

This is the right thing to say to those going through something like this, and hearing this encouragement and having the will & energy to meet this challenge will undoubtedly increase McCain’s chances of survival. But what Biden said next is perhaps more relevant:

Incredible progress in cancer research and treatment in just the last year offers new promise and new hope. You can win this fight, John.

As with polio, smallpox, measles, and countless other diseases before it, beating cancer is not something an individual can do. Being afflicted with cancer is the individual’s burden to bear but society’s responsibility to cure. In his excellent biography of cancer from 2011, The Emperor of All Maladies, Siddhartha Mukherjee talks about the progress we’ve made on cancer:

Incremental advances can add up to transformative changes. In 2005, an avalanche of papers cascading through the scientific literature converged on a remarkably consistent message — the national physiognomy of cancer had subtly but fundamentally changed. The mortality for nearly every major form of cancer — lung, breast, colon, and prostate — had continuously dropped for fifteen straight years. There had been no single, drastic turn but rather a steady and powerful attrition: mortality had declined by about 1 percent every year. The rate might sound modest, but its cumulative effect was remarkable: between 1990 and 2005, the cancer-specific death rate had dropped nearly 15 percent, a decline unprecedented in the history of the disease. The empire of cancer was still indubitably vast — more than half a million American men and women died of cancer in 2005 — but it was losing power, fraying at its borders.

What precipitated this steady decline? There was no single answer but rather a multitude. For lung cancer, the driver of decline was primarily prevention — a slow attrition in smoking sparked off by the Doll-Hill and Wynder-Graham studies, fueled by the surgeon general’s report, and brought to its full boil by a combination of political activism (the FTC action on warning labels), inventive litigation (the Banzhaf and Cipollone cases), medical advocacy, and countermarketing (the antitobacco advertisements). For colon and cervical cancer, the declines were almost certainly due to the successes of secondary prevention — cancer screening. Colon cancers were detected at earlier and earlier stages in their evolution, often in the premalignant state, and treated with relatively minor surgeries. Cervical cancer screening using Papanicolaou’s smearing technique was being offered at primary-care centers throughout the nation, and as with colon cancer, premalignant lesions were excised using relatively minor surgeries. For leukemia, lymphoma, and testicular cancer, in contrast, the declining numbers reflected the successes of chemotherapeutic treatment. In childhood ALL, cure rates of 80 percent were routinely being achieved. Hodgkin’s disease was similarly curable, and so, too, were some large-cell aggressive lymphomas. Indeed, for Hodgkin’s disease, testicular cancer, and childhood leukemias, the burning question was not how much chemotherapy was curative, but how little: trials were addressing whether milder and less toxic doses of drugs, scaled back from the original protocols, could achieve equivalent cure rates.

Perhaps most symbolically, the decline in breast cancer mortality epitomized the cumulative and collaborative nature of these victories — and the importance of attacking cancer using multiple independent prongs. Between 1990 and 2005, breast cancer mortality had dwindled an unprecedented 24 percent. Three interventions had potentially driven down the breast cancer death rate-mammography (screening to catch early breast cancer and thereby prevent invasive breast cancer), surgery, and adjuvant chemotherapy (chemotherapy after surgery to remove remnant cancer cells).
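Incidentally, Mukherjee’s numbers check out: a steady decline of about 1 percent per year compounds over the fifteen years from 1990 to 2005 to roughly the “nearly 15 percent” he cites. A quick check:

```python
annual_decline = 0.01  # "mortality had declined by about 1 percent every year"
years = 15             # 1990 to 2005

# Compounding: each year's rate is 99% of the previous year's.
cumulative = 1 - (1 - annual_decline) ** years
print(f"{cumulative:.1%}")  # 14.0%, consistent with the quoted "nearly 15 percent"
```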

Understanding how to defeat cancer is an instance where America’s fierce insistence on individualism does us a disservice. Individuals with freedom to pursue their own goals are capable of a great deal, but some problems require massive collective coordination and effort. Beating cancer is a team sport; it can only be defeated by a diverse collection of people and institutions working hard toward the same goal. It will take government-funded research, privately funded research, a strong educational system, philanthropy, and government agencies from around the world working together. This effort also requires a system of healthcare that’s available to everybody, not just to those who can afford it. Although cancer is not a contagious disease like measles or smallpox, the diagnosis and treatment of each and every case brings us closer to understanding how to defeat it. We make this effort together, we spend this time, energy, and money, so that 10, 20, or 30 years from now, our children and grandchildren won’t have to suffer like our friends and family do now.

Tags: cancer   Emperor of All Maladies   healthcare   John McCain   medicine   politics   science   Siddhartha Mukherjee

Explaining the Gap: Visualizing One’s Predictions Improves Recall and Comprehension of Data


What if Visualizations Asked Users to Predict the Data First?

Imagine a data journalist writing a story about home prices in Denver. The journalist is planning to add a visualization to help readers understand the trend in home prices in 2014, 2015, and 2016.

The visualization could just show you the data, like we encounter all the time. But what if instead the interface prompts you to draw what you think the median home price in Denver looks like first?

Even if you don’t have specific knowledge about median home prices in Denver in 2014, 2015, and 2016, you might have some general prior knowledge you could use to predict the data. For instance, in any healthy market, we might expect home prices to go up. Maybe you also suspect that Denver has many quality jobs.

Let’s assume that you drew the below trend, after considering a few factors like these.

The interface now shows you the actual data. What do you think about when you see the gap between the actual data and your prediction? Is your prediction close to the actual trend? If not, then how far, and what might have been the reason for your error?

Why Prior Knowledge Matters in Visualization Interaction

As you interacted with the visualization above, you had to think about what you know about the data domain, and how to externalize that knowledge. We wondered:

“Are there benefits to reflecting on one’s expectations while using a data visualization?”

A few studies in cognitive psychology have shown that making predictions about something as you learn can help you better understand the information. A well-known technique in education called self-explanation might explain the process you went through as you thought about the gap between your prediction and the data. This technique was originally used by prompting students to explain to themselves a phenomenon they were learning about, such as in a textbook. Multiple studies showed that students who generated a larger number of thoughtful self-explanations tended to learn the material more accurately, as measured by tests they took later. We guessed that predicting the data in a visualization, and examining the gap between the prediction and the actual data, would encourage a visualization user to notice and self-explain why they were off, and that this process would, like self-explanation from a text, lead to better understanding of the information.

We designed a controlled online experiment to test how predicting data and other active reflection techniques like self-explanation affect a user’s ability to recall the data in a visualization. Being able to remember what you saw is one proxy for understanding, which is often used in cognitive psychology.

Three Ways to Interact with Prior Knowledge

So, how might a visualization elicit users’ prior knowledge? We designed three interactive techniques we expected might help users reflect on their prior knowledge, as they interacted with a data visualization (Fig. 1). We tested how these interactive techniques affect a user’s ability to recall the data by running a study on Amazon Mechanical Turk.

Fig 1. Three interactive techniques to prompt reflection on prior knowledge

Prompting users to explain the data to themselves can be a good way to make them reflect on their prior knowledge. For instance, a user might think “Prices might have increased less between 2015 and 2016 because new jobs were down”. In our study, we gave participants in the explanation conditions a text box along with the visualized data, and asked them to write their explanation of the data.

Next, asking users to predict the data might prompt them to think about what they already know. If the visualization makes the gap between their prediction and the data clear, the user might then think about the gap (e.g., “I was mostly right about the trend, but I was off by 100K in 2016”). In our study, we prompt participants to draw their prediction directly on the visualization before seeing the data.

Lastly, providing feedback by annotating the gap between the user’s prior knowledge and the data can help users focus on adjusting their prior knowledge. By emphasizing the gap, users may be more likely to perceive how much they should update their knowledge to accurately perceive reality. In our study, we provided personalized feedback to feedback conditions that described the accuracy of the overall trend (e.g., “you underestimated the median home price of Denver in 2014”) and the accuracy of prediction of individual data points (e.g., “you are 30K off”).
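To make the feedback condition concrete, here is a minimal sketch of how such personalized feedback strings could be generated. The function name, example data, and wording are illustrative, not the study’s actual implementation:

```python
def feedback(years, predicted, actual, label="the median home price of Denver"):
    """Generate trend-level and point-level feedback comparing a user's
    prediction against the actual data (all names are illustrative)."""
    messages = []
    # Trend-level feedback: did the user over- or underestimate overall?
    mean_gap = sum(p - a for p, a in zip(predicted, actual)) / len(actual)
    direction = "overestimated" if mean_gap > 0 else "underestimated"
    messages.append(f"Overall, you {direction} {label}.")
    # Point-level feedback: how far off was each individual prediction?
    for year, p, a in zip(years, predicted, actual):
        messages.append(f"In {year}, you were {abs(p - a):.0f}K off.")
    return messages

# Hypothetical example: a user who consistently guessed too low.
for line in feedback([2014, 2015, 2016],
                     predicted=[280, 300, 320],
                     actual=[310, 340, 380]):
    print(line)
```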

Fig 2. Experimental conditions

We combined the three techniques to create 4 experimental conditions and one baseline condition (Fig. 2).

We expected that all three interactive techniques would improve the user’s ability to recall the data later. Generating self-explanations should help a user recall the data more accurately, because of the proven efficacy of self-explanation in learning environments. But no one had ever tested self-explanation in a visualization setting.

We also expected participants who were asked to predict the data would recall the data more accurately compared to participants who had not been prompted to do so: prediction prompts a user to construct and externalize the prior knowledge, and viewing the gap provides visual feedback on how much to update one’s knowledge.

We also wondered: what about when the data is presented as text? In that case, the gap between the participant’s prediction and the actual data has to be described verbally. To compare the effect of our techniques across visualization and text, we added text conditions in which participants saw the data summarized in a paragraph, with the information otherwise identical to the visualization.

How to Measure Data Memorability?

Fig 3. Testing dataset

To evaluate our expectations, we needed a data set to test on. Line graphs showing several data values for a few categorical variables (Fig. 3) are widely used in the social and physical sciences. As you’ll notice, this data format allows us to observe how well participants can recall high-level patterns as well as individual data points. As a specific data set with this format, we chose data on the percentage of different ethnicities that voted Republican in the 2008 election in various states. Crowdworkers rated this data as moderately familiar in an initial study we ran to gauge how familiar various data sets were. ‘Moderately familiar’ was our goal because we thought people would need some prior knowledge (enough to make a prediction) but not so much that they were experts, in order for our techniques to potentially help. That preliminary study found that, of the data sets we showed them, participants were most familiar with the calories in various fast food items and least familiar with the results of a scientific study on rat activity levels.

Fig 4. Two measures of recall errors

To measure data memorability between the conditions, we devised two measures to observe participants’ ability to recall high level patterns and the individual data points (Fig. 4). In our study, participants were asked to recall the data after spending three minutes on a distractor task.
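The paper defines the two measures precisely; as a rough sketch of the idea (our own simplification here, not the exact scoring), point-level recall error can be computed as mean absolute error over individual data points, and trend-level error as error over the consecutive differences that define the line’s shape:

```python
def point_recall_error(recalled, actual):
    """Mean absolute error over individual data points."""
    return sum(abs(r - a) for r, a in zip(recalled, actual)) / len(actual)

def trend_recall_error(recalled, actual):
    """Mean absolute error over consecutive year-over-year changes,
    i.e. how well the recalled line preserves the shape of the trend."""
    recalled_deltas = [b - a for a, b in zip(recalled, recalled[1:])]
    actual_deltas = [b - a for a, b in zip(actual, actual[1:])]
    return (sum(abs(r - a) for r, a in zip(recalled_deltas, actual_deltas))
            / len(actual_deltas))

actual = [40, 45, 55, 50]    # true values shown in the chart
recalled = [42, 44, 58, 48]  # values a participant later recalls

print(point_recall_error(recalled, actual))  # 2.0
print(trend_recall_error(recalled, actual))  # 4.0
```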

Prediction and Explanation Improve Data Recall

Our expectation was that if participants used our three interactive techniques while interacting with the visualization, they would remember the data better than those who simply examined the visualization as we usually do. It turned out that these techniques really helped! Participants who were prompted to predict and/or explain the data recalled the individual data points 24% more accurately than participants who just examined the data without explicitly predicting or explaining. More interestingly, when the techniques were prompted in sequence (e.g., predicting first, then explaining; or predicting first, then examining feedback), participants recalled the trend of the data 21% more accurately than participants who weren’t prompted. These effects were reliable as determined by statistical tests.

“As we expected, prediction and explanation helped participants to remember the data accurately.”

What about when the data is presented as text? We found that only the explanation technique improved recall. This result implies that self-explanation, which is usually applied in learning environments, also works in a data presentation setting. We also confirmed that prompting users to predict the data didn’t necessarily help them recall it when they interacted with the data as text. This is in contrast to the result from the visual condition, where prediction helped people recall the data accurately. Imagine seeing your prediction as a line alongside the data in a visualization like the Denver example: it makes sense that the visual feedback would make it easier to focus on the difference. With text, a user has to spend greater effort to “see” the gap, and many users may not be motivated to do this.

Deeper Interaction Between a User and a Visualization

Our finding that predicting data helps users remember the data better paves the way for new, prediction-oriented visualization designs. How might users of visualizations beyond line charts make predictions? We have begun to identify important design choices and types of differences — in other words, to describe the design space — for prediction-oriented visualizations.

Fig 5. Possible variable types for prediction task

Designers can ask users to predict quantitative variables in different chart types like a bar chart (Fig. 5). As the above choropleth shows, prediction for categorical variables can be implemented as well, for instance, by choosing the category to apply (by clicking) and then brushing over the region to indicate the prediction. Like in the example of the dendrogram (far right), the designer can also prompt users to predict the data structure.

Fig 6. Contextualization cues

Designers can also implement contextualization cues to guide the user as they form a guess. Based on how familiar they expect their audience to be with the data, designers can adjust how much data to reveal (Fig. 5, Partial Prediction). They can also bound the range of prediction to help users make reasonable guesses (Fig. 6, Bounded Prediction). Designers should consider the scale ranges of the quantitative x- and y-axes, since these ranges can have a big impact on users’ predictions. Providing a few reference data points can likewise guide the user’s prediction.

Fig 7. Types of feedback

In our study, we found that feedback, layered on top of the prediction technique, can further enhance participants’ ability to recall the data. Feedback on how the user’s prediction compares to the data can prompt deeper reflection on prior knowledge. Personalized feedback can be based on the accuracy of a prediction, like “Overall, you were 80% right in guessing the amount of CO2 emissions”, or can include a social comparison: “your prediction is more accurate than 50% of the people who predicted”. When feedback was provided after a prediction, participants in our study tended to recall not only the individual data points but also the overall trend. For a similar effect, a designer could provide feedback based on the main trend, like “you overestimated the slope of the overall trend”, or on individual values, like “you are off by 3 percent”. This feedback can be delivered as text or as a visual annotation. The act of predicting and receiving feedback may also help users who are not very familiar with visualizations build their graphical literacy: a basic understanding of how to read and use graphs.

Current visualization research and practice lack techniques for explicitly incorporating a user’s prior knowledge. We think there is a lot of work to do in envisioning what interaction techniques are possible, as well as in studying their effects. For example, can thinking about others’ visualized predictions have effects similar to making your own, or influence your beliefs about the data? Can prediction and reflection on the gap help people understand difficult aspects of data, like uncertainty? Or help people understand Bayesian analysis, in which expectations play an important role? Stay tuned for future posts about our upcoming InfoVis 2017 papers on several of these topics!

Want to learn more about our work? Check out our paper.

This post was written by Yea-Seul Kim and Jessica Hullman. This work is in collaboration with Katharina Reinecke.

Explaining the Gap: Visualizing One’s Predictions Improves Recall and Comprehension of Data was originally published in HCI & Design at UW on Medium, where people are continuing the conversation by highlighting and responding to this story.


The law expects civilians to remain calm even when police don't

Thank you to my friends and people throughout the country who have reached out to show support in the wake of my potentially tragic encounter with Knoxville Officer Matthew Janish.

I wanted to show him the keys or reach into my bag for the registration and bill of sale. I fought every impulse to do anything that would make him feel threatened. I don't have de-escalation training. I'm the one being held at gunpoint. I'm the one thinking my life could end if he panics. Yet, I'm the one expected to remain calm.

It seems that the legal system is really asking civilians to de-escalate adrenaline-fueled cops. We must remain calm while facing a loaded gun while the trained officers can panic and overreact.

What about our lives? Who protects us from the people who are supposed to protect us?

Previously, previously, previously, previously, previously.
