Parallel Programming Environments: Less is More

The single most important paper for programming language designers to read came out in 2000. It wasn’t written by a computer scientist, mathematician, or physical scientist. It was written by a couple of professors studying psychology and marketing: Iyengar, S. S., & Lepper, M. R. (2000). “When Choice is Demotivating: Can One Desire Too Much of a Good Thing?” Journal of Personality and Social Psychology, 79, 995–1006.

This paper explored the phenomenon of “choice overload.” Here is what they did: they set up two displays of gourmet jams in a grocery store. One display had 24 jars; the other had 6. Each display invited people to try the jams and offered a discount coupon to buy one. The researchers alternated the displays and tracked how many people passed by, how many stopped and sampled the jams, and how many subsequently used the coupon to buy jam.

The results were surprising.

  • 24-jar display: 60% of the people passing by sampled the jam; 3% of the samplers purchased jam.
  • 6-jar display: 40% of the people passing by sampled the jam; 30% of the samplers purchased jam.

The larger display was better at getting people’s attention, but the number of choices overwhelmed them, and they walked away without buying anything. In other words, if the goal is to sell product, less is more. Too much choice is demotivating.

Admittedly, selecting a gourmet jam is a low-stakes decision. Maybe for more important choices, “choice overload” does not apply? The authors, however, went on to study weightier decisions, such as participation in 401(k) plans, and found choice overload there as well.

Choice overload is real. When people are faced with too many choices, the natural tendency is to not choose at all and simply walk away (probably in frustration).

Why is this relevant to parallel programming?

Think about it. We (that is, computer companies) want to sell hardware. To do that, we need software. We set out our display of platforms and hope software developers will spend their valuable development dollars porting to ours.

So what is the situation today with multicore processors? A software vendor walks up to “our display.” We show them our nice hardware with its many cores and tell them they will need to convert their software so that it scales. And then we show them the parallel programming environments they can work with: MPI, OpenMP, OpenCL, TBB, Erlang, OpenSHMEM, Go, Cilk, BSP, Charm++, Legion, Coarray Fortran, X10, Chapel, Pthreads, Windows threads, C++11, GA, Java, UPC, Titanium, Parlog, CnC, … and the list goes on and on. They respond as any rational person would: they run away screaming.

Think about the impression this glut of choices creates. If we “experts” cannot agree on how to write a parallel program, why should anyone believe parallel programming is ready for the masses? In our quest for the perfect language to make parallel programming easy, we undermine our own agenda and scare away the very software developers we need.

We need to spend less time creating new languages and more time making the languages we already have work. This is why, any time I hear someone talk about their great new language, I pretty much ignore them.

Tell me how to make OpenMP and OpenCL work better. Tell me how to fix MPI so it runs with equal efficiency on shared-memory and distributed-memory systems. Help me figure out how to get Pthreads and OpenMP components to work together. Help me understand solution frameworks so high-level programmers can create the software they need without becoming parallel algorithm experts. But don’t waste my time with new languages. With hundreds of languages and APIs out there, is anyone really dumb enough to think “yet another one” will fix our parallel programming problems?