Timothy G. Mattson, Ph.D.
Senior Principal Engineer
LinkedIn Page
Intel Corp,
Parallel Computing Laboratory
Mail Stop JF2-65
2111 NE 25th Avenue,
Hillsboro, OR 97124
phone: 503-264-8742
  • Biographies: My professional — and not so professional — background.
  • Current research: Yes, even corporate types like me get to do real research sometimes.
  • Key Publications: This list is always out of date, but I try to keep the major papers on it.

Brief, Stuffy Bio

Tim Mattson earned a Ph.D. in chemistry for his work on quantum molecular scattering. This was followed by a post-doc at Caltech, where he ported his molecular scattering software to the Caltech/JPL hypercubes. Since then, he has held a number of commercial and academic positions, with computational science on high-performance computers as the common thread.

Dr. Mattson joined Intel in 1993 to work on a variety of parallel computing problems, including benchmarking, system performance modeling, and applications research. He was a senior scientist on Intel’s ASCI teraFLOPS project, which produced the first computer to run MPLINPACK in excess of one teraFLOPS.

Currently, he is working in Intel’s Parallel Computing Laboratory. His goal is to develop technologies that will make parallel computing more accessible to the general programmer. This includes OpenMP, OpenCL, and runtime systems for exascale computers.

Long, Informal Bio

In graduate school, I couldn’t decide whether to be a chemist, a physicist, a mathematician, or a computer scientist. Enjoying chaos, I chose all four by getting a Ph.D. in chemistry (U.C. Santa Cruz, 1985) for solving a physics problem (quantum scattering) with new numerical methods (approximate potential methods for coupled systems of radial Schroedinger equations) on the primitive computers available in those days (a VAX 750).
My confusion deepened during a Caltech post-doc in Geoffrey Fox’s Concurrent Computation project where I took my differential equation solvers and ported them to the Caltech/JPL hypercubes. These machines were painful to use, but being a true masochist, I fell in love with parallel computers and have been using them ever since.

The details of my career are boring, but basically involve industrial experience in radar signal processing, seismic signal processing, numerical analysis, computational chemistry and of course, the use of parallel computers. I emphasize “the use of” parallel computers. I have always measured the value of a computer by how useful it is. Eventually, I ended up at Yale (1991) where my research took me into the depths of many different parallel programming environments (p4, PVM, TCGMSG, Linda, Parlog, CPS, and many others) on many different parallel computers including clusters of workstations.

In 1993, I left Yale and joined Intel’s Scalable Systems Division (SSD). That was an exciting time at Intel SSD. The Paragon supercomputer was new, and a huge amount of work was needed to understand how to use this big machine (with the best front-panel lights in the industry). My job was to get inside users’ heads and make sure our products really worked for their problems. This resulted in a collection of performance models that helped guide the design of future parallel computers.

The pinnacle of my work at Intel SSD was the ASCI Option Red supercomputer, the world’s first computer to run the MPLINPACK benchmark in excess of one teraFLOPS (1.34 TFLOPS, to be exact). I was a senior scientist on this project and was in the middle of everything: I helped write the proposals, verify the design, and debug the system. I was responsible for communicating technical issues to the customers and had to make sure the initial applications scaled effectively on the machine. When we delivered the system to the customer, I left Intel SSD and moved to Intel’s long-range research laboratory, the Microcomputer Research Lab (MRL).

At MRL, my job was to solve, once and for all, the software crisis in parallel computing. Even with almost 20 years of research, the parallel computing community hasn’t attracted more than a minuscule fraction of programmers. Clearly, we are doing something wrong. My hypothesis is that we can solve this problem, but only if we work from the algorithm down to the hardware, not with the traditional hardware-first mentality. This work led in part to OpenMP (OK, a very small part; the good people at SGI and KAI played a much larger role in getting OpenMP started).

Once OpenMP got off the ground, I moved to Intel’s software products group (the Microcomputer Software Laboratory, or MSL) to support the transition of OpenMP from research to product. With that underway, I moved on to other applications of concurrency at Intel (clusters, distributed computing, and peer-to-peer computing).

At some point around 2002, I changed gears to become the head of life sciences strategic marketing.  This may seem like a strange transition for me.  We were under the impression that the principal problem in the life sciences was scientific computing: if we could improve molecular modeling to find better drug targets, everything else would fall into place.  We found, however, that this was NOT the case.  The problem in the life sciences sector back in the early days of the new millennium was data management.  You didn’t need a molecular modeling person like me to help out with life sciences.  You needed an expert in databases and IT infrastructure.

Therefore, in 2005 I returned to the computer science research world.  My goal was to make parallel computing so easy that the general programmer would use it routinely. People have been trying to pull this off for decades, but no one has come close to succeeding. My approach was to solve the parallel programming problem at the algorithm design phase by creating a language of design patterns for parallel application programmers. Once that was in place, we would generate an object-oriented framework from the pattern language. This combination of a pattern language, an object-oriented framework, and quality parallel programming environments would solve the problem once and for all. For more information, take a look at my project’s web page.

There is so much more.  Just to summarize quickly: I worked on an 80-core microprocessor, the first microprocessor to run a program at a teraFLOPS.  This was part of a long-term effort to explore the use of message passing methods for many-core chips. Our second chip in that research effort was the 48-core Single-chip Cloud Computer.

Around 2008, I started working on OpenCL.  I was actively engaged with OpenCL through OpenCL 2.0, where (among other things) I led the effort to write down a formal memory model for the language.

In 2012, I shifted gears to work on big data problems. They made me the Intel PI for our big data research center based at MIT.  The world of data management was all new to me.  Fortunately, I worked with really smart people, and they pulled me along quickly.  I’m proud of what we came up with: the concept of polystore data systems, systems for streaming analytics, and a new storage manager for array data.  I have also had a great time working on graph algorithms, which led to the GraphBLAS API for constructing “graphs in the language of linear algebra”.
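To give a flavor of what “graphs in the language of linear algebra” means: a breadth-first search step is just a matrix-vector product over a Boolean semiring, where OR plays the role of addition and AND the role of multiplication. The sketch below is my own plain-Python illustration of that idea, not GraphBLAS API code.

```python
# One level of breadth-first search as a Boolean matrix-vector product.
# A[i][j] = True means there is an edge from vertex j to vertex i
# (columns are sources, rows are destinations).

def bfs_levels(A, source):
    """Return the BFS level of every vertex (-1 if unreachable) by
    repeatedly multiplying the adjacency matrix by the frontier
    vector over the (OR, AND) semiring."""
    n = len(A)
    levels = [-1] * n
    frontier = [False] * n
    frontier[source] = True
    level = 0
    while any(frontier):
        for v in range(n):
            if frontier[v]:
                levels[v] = level
        # next frontier = A * frontier over (OR, AND),
        # masked so we only visit vertices not yet reached
        nxt = [False] * n
        for i in range(n):
            nxt[i] = levels[i] == -1 and any(
                A[i][j] and frontier[j] for j in range(n)
            )
        frontier = nxt
        level += 1
    return levels

# Directed edges 0 -> 1, 1 -> 2, and 0 -> 2.
A = [
    [False, False, False],
    [True,  False, False],
    [True,  True,  False],
]
print(bfs_levels(A, 0))  # [0, 1, 1]
```

A real GraphBLAS implementation stores A as a sparse matrix and lets you swap in other semirings (e.g., (min, +) for shortest paths), but the loop structure is the same.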

A short informal description of my research

This will be very short for now.  As I write this in the spring of 2018, I am working on the fundamental building blocks of graph algorithms expressed in terms of linear algebra.  This is hard-core mathematics work that keeps my brain healthy.  I also work on problems in data management.  I help out when I can to publicize the amazing TileDB project, looking for new use cases the system can support.  Likewise, I help out with the emerging field of polystore systems with my good friends at MIT Lincoln Laboratory.  Much of my time these days, however, is dedicated to managing a team of researchers working on programming systems.  That pulls me away from doing actual research, but it’s good for me to expand my horizons and learn a bit about managing people.


Key Papers and Technical Reports

I have a pretty long list of publications. Here’s an excerpt of the full list.

  • Design of the GraphBLAS API for C, Aydin Buluc, Tim Mattson, Scott McMillan, Jose Moreira, and Carl Yang, Graph Algorithms Building Blocks workshop at IPDPS, 2017.
  • The Open Community Runtime: A Runtime System for Extreme Scale Computing, Timothy G. Mattson, Romain Cledat, Vincent Cave, Vivek Sarkar, Zoran Budimlic, Sanjay Chatterjee, Josh Fryman, Ivan Ganev, Robin Knauerhase, Min Lee, Benoit Meister, Brian Nickerson, Nick Pepperling, Bala Seshasayee, Sagnak Tasirlar, Justin Teller, and Nick Vrvilo, IEEE High Performance Extreme Computing, 2016.
  • The TileDB Array Data Storage Manager, Stavros Papadopoulos, Kushal Datta, Samuel Madden, Timothy Mattson, VLDB Volume 10, No. 4, December 2016
  • The BigDAWG Polystore System, J. Duggan, A. J. Elmore, M. Stonebraker, M. Balazinska, B. Howe, J. Kepner, S. Madden, D. Maier, T. Mattson, and S. Zdonik, ACM SIGMOD Record, 44(3), 2015.
  • The Parallel Research Kernels: A tool for architecture and programming system investigation, Rob F. Van der Wijngaart and Timothy G. Mattson, Proceedings of the IEEE High Performance Extreme Computing Conference, 2014.
  • Light-weight Communications on Intel’s Single-Chip-Cloud Computer Processor, Rob F. van der Wijngaart, Timothy G. Mattson, Werner Haas, Operating Systems Review, ACM, vol 45, number 1, pp. 73-83, January 2011.
  • OpenCL Programming Guide, Aaftab Munshi, Ben Gaster, Timothy G. Mattson, James Fung, and Dan Ginsburg, Addison Wesley, ISBN: 0321749642, 2011
  • A design pattern language for engineering (parallel) software, K. Keutzer and T. G. Mattson,   Intel Technology Journal, Vol. 13, Issue 4, pp. 6-19, 2009.
  • Introduction to concurrency in programming languages, Matthew J. Sottile, Timothy G. Mattson, and Craig E Rasmussen, CRC Press, ISBN: 1420072137, 2009
  • Patterns for Parallel Programming, Timothy G. Mattson, Beverly A. Sanders and Berna L. Massingill, Addison Wesley, Design Patterns series, ISBN: 0321228111, 2004.
  • An Introduction to OpenMP 2.0, Proceedings of the WOMPEI workshop, Springer Verlag Lecture Notes in Computer Science (2000).
  • The Intel TFLOP Supercomputer, Intel Technology Journal, Q1 1998 issue (1998), with G. Henry.
  • The Performance of the Intel TFLOPS Supercomputer, Intel Technology Journal, Q1 1998 issue (1998), with G. Henry, B. Cole, and P. Fay.
  • A TeraFLOP in 1996: The ASCI TeraFLOP Supercomputer, Proceedings of the International Parallel Processing Symposium (1996) with D. Scott and S. Wheat.
  • Comparing Programming Environments for Portable Parallel Computing, International Journal of Supercomputing Applications, vol. 12, p. 233 (1995).
  • Portable Molecular Dynamics Software for Parallel Computing, in Parallel Computing in Computational Chemistry (ed. by Tim Mattson), ACS Symposium Series, No. 592, (1994) with G. Ravishanker.
  • The efficiency of Linda for general purpose scientific computing, Scientific Programming, vol. 3, p. 61 (1994).
  • Interval Global Optimization in Computational Chemistry, Scientific Computing Associates, Inc. NSF SBIR final report (1992)
  • The Strand Language: Scientific Computing meets Concurrent Logic Programming, Proceedings of the University of Oregon Workshop on Parallel Implementation of Languages for symbolic computation (1990).
  • An Out of Core FFT for Parallel Computers, Quantitative Technology Corp, Technical Report, (1989).
  • Design and Implementation of a general purpose Mathematics Library in Ada, Proceedings of the TriAda Conference (1989)
  • Chemical Reaction Dynamics: integration of coupled sets of ordinary differential equations on the Caltech hypercube, The Third Conference on Hypercube Concurrent Computers and Applications, Vol. 2, edited by G.C. Fox, ACM Press, p. 1051 (1988), with P. Hipes and A. Kuppermann.
  • Solution of Coupled-Channel Schroedinger Equation Using Constant, Linear, and Quadratic Reference Potentials: The Series, Bessel, Magnus and Perturbatively Corrected Magnus Propagators, Molecular Physics, vol. 52, p. 319 (1984) with R. Anderson.

… plus many others (100+ total).