Timothy G. Mattson, Ph.D.
Senior Research Scientist
Intel Corp, Computational Software Laboratory
Mail Stop DP2-226
2800 Center Drive
DuPont, WA 97124
email: firstname.lastname@example.org
Brief, Stuffy Bio
Tim Mattson earned a Ph.D. in chemistry for his work on quantum molecular scattering. This was followed by a post-doc at Caltech, where he ported his molecular scattering software to the Caltech/JPL hypercubes. Since then, he has held a number of commercial and academic positions, with computational science on high-performance computers as the common thread.
Dr. Mattson joined Intel in 1993 to work on a variety of parallel computing problems, including benchmarking, system performance modeling, and applications research. He was a senior scientist on Intel’s ASCI teraFLOPS project, which produced the first computer to run MPLINPACK in excess of one teraFLOPS.
Currently, he is working in Intel’s Parallel Computing Laboratory. His goal is to develop technologies that will make parallel computing more accessible to the general programmer. This includes OpenMP, OpenCL, and runtime systems for Exascale computers.
Long, Informal Bio
In graduate school, I couldn’t decide whether to be a chemist, physicist, mathematician, or computer scientist. Enjoying chaos, I chose all four by getting a Ph.D. in chemistry (U.C. Santa Cruz, 1985) for solving a physics problem (quantum scattering) with new numerical methods (approximate potential methods for coupled systems of radial Schroedinger equations) on the primitive computers available in those days (a VAX 750).
My confusion deepened during a Caltech post-doc in Geoffrey Fox’s Concurrent Computation project where I took my differential equation solvers and ported them to the Caltech/JPL hypercubes. These machines were painful to use, but being a true masochist, I fell in love with parallel computers and have been using them ever since.
The details of my career are boring, but basically involve industrial experience in radar signal processing, seismic signal processing, numerical analysis, computational chemistry and of course, the use of parallel computers. I emphasize “the use of” parallel computers. I have always measured the value of a computer by how useful it is. Eventually, I ended up at Yale (1991) where my research took me into the depths of many different parallel programming environments (p4, PVM, TCGMSG, Linda, Parlog, CPS, and many others) on many different parallel computers including clusters of workstations.
In 1993, I left Yale and joined Intel’s Scalable Systems Division (SSD). That was an exciting time at Intel SSD. The Paragon supercomputer was new, and a huge amount of work was needed to understand how to use this big machine (with the best front-panel lights in the industry). My job was to get inside users’ heads and make sure our products really worked for their problems. This resulted in a collection of performance models to help guide the design of future parallel computers.
The pinnacle of my work at Intel SSD was the ASCI Option Red supercomputer, the world’s first computer to run the MPLINPACK benchmark in excess of one teraFLOPS (1.34 TFLOPS, to be exact). As a senior scientist on this project, I was in the middle of everything: I helped write the proposals, verify the design, and debug the system. I was responsible for communicating technical issues to the customers and had to make sure the initial applications scaled effectively on the machine. When we delivered the system to the customer, I left Intel SSD and moved to Intel’s long-range research laboratory, the Microcomputer Research Lab (MRL).
At MRL, my job was to solve, once and for all, the software crisis in parallel computing. Even with almost 20 years of research, the parallel computing community hasn’t attracted more than a minuscule fraction of programmers. Clearly, we are doing something wrong. My hypothesis is that we can solve this problem, but only if we work from the algorithm down to the hardware, not with the traditional hardware-first mentality. This work in part led to OpenMP (OK, a very small part; the good people at SGI and KAI played a much larger role in getting OpenMP started).
Once OpenMP got off the ground, I moved to Intel’s software products group (the Microcomputer Software Laboratory, or MSL) to support the transition of OpenMP from research to product. With that underway, I’ve moved on to other applications of concurrency at Intel (clusters, distributed computing, and peer-to-peer computing).
A short informal description of my research
My research will make parallel computing so easy, that the general programmer will use it routinely. People have been trying to pull this off for a few decades, but no one has come close to succeeding. I intend to buck that trend.
My approach is to solve the parallel programming problem at the algorithm design phase by creating a language of design patterns for parallel application programmers. Once that is in place, we will generate an object-oriented framework from the pattern language. This combination of a pattern language, an object-oriented framework, and quality parallel programming environments will solve the problem once and for all. For more information, take a look at my project’s web page: www.cise.ufl.edu/research/ParallelPatterns/.
I’m actively involved with the OpenMP shared-memory programming API. I’ve worked on all the OpenMP specifications “out there” today, and I am CEO of the corporation that “owns” OpenMP (the OpenMP Architecture Review Board). I am also very active in cluster computing. I am one of the leaders of the Open Cluster Group. We are working to create self-contained packages that include everything you need to easily build and use a cluster for high performance computing.
Finally, since distributed computing is just one extreme of cluster computing, and peer-to-peer computing is distributed computing carried to extremes, I am involved in fostering initiatives to make different peer-to-peer systems interoperable. I conduct this work as a member of the steering committee for the Peer-to-Peer Working Group.
Key Papers and Technical Reports
I have a pretty long list of publications. Here’s an excerpt of the full list.
- Design of the GraphBLAS API for C, Aydin Buluc, Tim Mattson, Scott McMillan, Jose Moreira, and Carl Yang, Graph Algorithms Building Blocks workshop at IPDPS, 2017.
- The Open Community Runtime: A Runtime System for Extreme Scale Computing, Timothy G. Mattson, Romain Cledat, Vincent Cave, Vivek Sarkar, Zoran Budimlic, Sanjay Chatterjee, Josh Fryman, Ivan Ganev, Robin Knauerhase, Min Lee, Benoit Meister, Brian Nickerson, Nick Pepperling, Bala Seshasayee, Sagnak Tasirlar, Justin Teller, and Nick Vrvilo, IEEE High Performance Extreme Computing, 2016.
- The BigDAWG Polystore System, J. Duggan, A. J. Elmore, M. Stonebraker, M. Balazinska, B. Howe, J. Kepner, S. Madden, D. Maier, T. Mattson, S. Zdonik, ACM SIGMOD Record, 44(3), 2015.
- Light-weight Communications on Intel’s Single-Chip-Cloud Computer Processor, Rob F. van der Wijngaart, Timothy G. Mattson, Werner Haas, Operating Systems Review, ACM, vol 45, number 1, pp. 73-83, January 2011.
- A design pattern language for engineering (parallel) software, K. Keutzer and T. G. Mattson, Intel Technology Journal, Vol. 13, Issue 4, pp. 6-19, 2009.
- An Introduction to OpenMP 2.0, Proceedings of the WOMPEI Workshop, Springer Verlag Lecture Notes in Computer Science (2000).
- The Intel TFLOPS Supercomputer, Intel Technology Journal, Q1 1998 issue (1998), with G. Henry.
- The Performance of the Intel TFLOPS Supercomputer, Intel Technology Journal, Q1 1998 issue (1998), with G. Henry, B. Cole, and P. Fay.
- A TeraFLOP in 1996: The ASCI TeraFLOP Supercomputer, Proceedings of the International Parallel Processing Symposium (1996) with D. Scott and S. Wheat.
- Comparing Programming Environments for Portable Parallel Computing, International Journal of Supercomputing Applications, vol. 12, p. 233 (1995).
- Portable Molecular Dynamics Software for Parallel Computing, in Parallel Computing in Computational Chemistry (ed. by Tim Mattson), ACS Symposium Series, No. 592, (1994) with G. Ravishanker.
- The Efficiency of Linda for General Purpose Scientific Computing, Scientific Programming, vol. 3, p. 61 (1994).
- Interval Global Optimization in Computational Chemistry, Scientific Computing Associates, Inc. NSF SBIR final report (1992)
- The Strand Language: Scientific Computing meets Concurrent Logic Programming, Proceedings of the University of Oregon Workshop on Parallel Implementation of Languages for symbolic computation (1990).
- An Out of Core FFT for Parallel Computers, Quantitative Technology Corp, Technical Report, (1989).
- Design and Implementation of a general purpose Mathematics Library in Ada, Proceedings of the TriAda Conference (1989)
- Chemical Reaction Dynamics: Integration of Coupled Sets of Ordinary Differential Equations on the Caltech Hypercube, The Third Conference on Hypercube Concurrent Computers and Applications, Vol. 2, edited by G. C. Fox, ACM Press, p. 1051 (1988) with P. Hipes and A. Kuppermann.
- Solution of Coupled-Channel Schroedinger Equation Using Constant, Linear, and Quadratic Reference Potentials: The Series, Bessel, Magnus and Perturbatively Corrected Magnus Propagators, Molecular Physics, vol. 52, p. 319 (1984) with R. Anderson.
… plus many others (100+ total).