Google Senior Staff Research Scientist Kathryn S. McKinley launched her career as a Computer Science professor and then switched to industry after twenty years in academia. Like her career, her research areas have taken several twists and turns.
“In the late 1980s and early 1990s, the most powerful supercomputers were distributed. That hardware, like Intel’s Paragon, organized Central Processing Units (CPUs) and their memory into a two-dimensional grid (torus) and communication was neighbor to neighbor,” said the Rice University alumna (B.A. ’85, M.S. ’90, Ph.D. ’92).
McKinley’s novel programming tools and compiler algorithms helped introduce parallelism and data locality into applications. Her research focused on shared memory parallel computing hardware: CPUs on separate chips accessed the same shared memory, communicating on a shared bus.
“Optimizing for any parallel hardware boils down to improving data locality; that is, decreasing the amount of data movement and communication between CPUs, while performing as many operations in parallel on those CPUs as possible,” she said.
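To make that concrete, here is a minimal sketch in Java (an illustration for this article, not code from McKinley’s research; the class name and sizes are invented): an array is summed across several threads, and each thread is handed one contiguous chunk so its CPU streams through nearby memory instead of shuttling data back and forth.

```java
import java.util.*;
import java.util.concurrent.*;

// Toy illustration of parallelism plus data locality (not McKinley's code):
// each thread sums one contiguous chunk of the array, so every CPU streams
// through its own region of memory rather than interleaving with the others.
public class ParallelSum {
    public static long sum(long[] data, int nThreads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        int chunk = (data.length + nThreads - 1) / nThreads;
        List<Future<Long>> parts = new ArrayList<>();
        for (int t = 0; t < nThreads; t++) {
            final int lo = Math.min(data.length, t * chunk);
            final int hi = Math.min(data.length, lo + chunk);
            parts.add(pool.submit(() -> {      // one contiguous region per task
                long s = 0;
                for (int i = lo; i < hi; i++) s += data[i];
                return s;
            }));
        }
        long total = 0;
        for (Future<Long> p : parts) total += p.get();  // combine partial sums
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        long[] data = new long[1 << 20];
        Arrays.fill(data, 1L);
        System.out.println(sum(data, 4));  // prints 1048576
    }
}
```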
Because application developers on shared memory machines did not need to specify which CPU held each piece of data, those machines were simpler to program. Distributed memory applications ran faster, but they were harder to write.
“The core of most applications today,” she said, “runs on a shared memory computer – in phones, desktops, individual machines in data centers – although multiple CPUs reside on the same chip these days, so communication is much faster than it was. Today’s distributed computing environments network these shared memory computers together at a data center on a regional or global scale. Both models are in use, but not in the ways people were predicting in the 1990s.
“In the early 1990s, single CPU hardware performance – the very simplest to program – was improving exponentially every two years or so. This rapid evolution led funding agencies and the tech industry to abandon investment in parallelism on individual machines, a huge mistake in retrospect.”
It was clear to McKinley that the area of parallel computing she was most interested in would no longer be supported, just as she graduated with her Ph.D. and was starting as an Assistant Professor at the University of Massachusetts, Amherst (UMass). There, she built a research group, shifting her research focus to data locality on single CPU (sequential) machines. McKinley uses the metaphor of kitchen storage to explain.
“Computer memories are a lot like household organization. In your kitchen, you have a cabinet by the stove, a larger pantry across the room, and more space in the garage. Next to the stove, you place a few daily-use items — your frying pan, dishes and utensils. Rarely accessed items, like the waffle iron, go in the pantry. Holiday pans and dishes used only once a year are relegated to the garage. Single CPU memory hierarchies work the same way, performing their work faster when their data is nearby, but hampered by the size constraints of memory nearest to the CPU.”
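The metaphor maps directly onto code. In this minimal sketch (illustrative only; the matrix size is arbitrary), walking a Java matrix row by row visits memory in the order it is laid out, while walking it column by column makes every access a trip across the room:

```java
// Minimal sketch of memory-hierarchy effects (illustrative, not from the
// article). In Java, a long[][] is an array of row arrays, so walking a row
// touches adjacent memory, like cooking with the pans already by the stove.
// Walking a column jumps between rows: every access is a trip to the garage.
public class Locality {
    static final int N = 2048;

    public static void main(String[] args) {
        long[][] m = new long[N][N];
        long sum = 0;
        long t0 = System.nanoTime();
        for (int i = 0; i < N; i++)        // row-major: cache friendly
            for (int j = 0; j < N; j++)
                sum += m[i][j];
        long t1 = System.nanoTime();
        for (int j = 0; j < N; j++)        // column-major: cache hostile
            for (int i = 0; i < N; i++)
                sum += m[i][j];
        long t2 = System.nanoTime();
        System.out.printf("row-major %d ms, column-major %d ms (sum=%d)%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, sum);
    }
}
```

The two loops do identical arithmetic; on typical hardware the column-major version runs noticeably slower purely because of where the data lives.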
Focusing on improved data locality for single CPU machines allowed McKinley to launch her academic career at UMass with what felt like a safe research bet, but she also pursued some riskier projects. She bet too early on heterogeneous hardware with her NSF CAREER award funding, but managed to catch the rise of object-oriented languages, Java in particular.
McKinley had taken a memory management class at Rice with Hans Boehm, renowned for his work on explicit memory management. Eliot Moss, her colleague at UMass, was already recognized as a world leader in automatic memory management (known as ‘garbage collection’) for object-oriented languages such as ML and Modula-2.
“Having studied with Hans and now working with Eliot, it was inevitable. I had to work on memory management and garbage collection,” said McKinley.
“I chose Java as the vehicle for my programming language, compiler, and memory management performance optimization work. Java was on the rise, combining portability with more productive programming – but it needed better performance. At the same time, I knew parallel machines were not dead. They were the only way to gain performance in the long run.”
Her wider research group, including colleagues from IBM and the Australian National University (ANU), built several iterations of the Memory Management Toolkit (MMTk) for garbage collection. The group designed for concurrency and parallelism even before hardware developments could fully reward those efforts. In the middle of this project, after spending her sabbatical at the University of Texas at Austin to be close to family, she made the move permanent, joining UT Austin as an Associate Professor.
Some of the most impactful results were the DaCapo Java Benchmarks, new performance evaluation methodologies, and a new class of garbage collectors, called mark-region. ANU’s Steve Blackburn and McKinley introduced mark-region garbage collection in 2008. After 20 years, Blackburn and McKinley are still collaborating on garbage collection. Their Immix mark-region collector remains the fastest documented in the literature, and is seeing wide adoption.
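The core idea fits in a few dozen lines. The following toy model is mine, not MMTk or Immix code (real Immix also tracks lines within blocks and defragments opportunistically): objects are bump-allocated into fixed-size regions, tracing marks the regions that hold live objects, and unmarked regions are reclaimed wholesale.

```java
import java.util.*;

// Toy sketch of a mark-region collector (illustrative only, not MMTk/Immix
// code). Objects are bump-allocated into fixed-size regions; collection
// traces from the roots, marks each region that holds a live object, and
// then reclaims entire unmarked regions in one step.
class Region {
    static final int SIZE = 32 * 1024;
    int cursor;              // bump-pointer allocation offset
    boolean live;            // does any reachable object occupy this region?
}

class Obj {
    final Region home;                           // region this object lives in
    final List<Obj> fields = new ArrayList<>();  // outgoing references
    Obj(Region home) { this.home = home; }
}

class MarkRegionHeap {
    private final List<Region> regions = new ArrayList<>();

    Obj allocate(int bytes) {
        // Bump-allocate into the newest region; open a fresh one when full.
        if (regions.isEmpty()
                || regions.get(regions.size() - 1).cursor + bytes > Region.SIZE) {
            regions.add(new Region());
        }
        Region r = regions.get(regions.size() - 1);
        r.cursor += bytes;
        return new Obj(r);
    }

    void collect(Collection<Obj> roots) {
        regions.forEach(r -> r.live = false);
        Set<Obj> visited = new HashSet<>();
        Deque<Obj> work = new ArrayDeque<>(roots);
        while (!work.isEmpty()) {          // transitive mark from the roots
            Obj o = work.pop();
            if (!visited.add(o)) continue;
            o.home.live = true;            // region-granularity mark
            work.addAll(o.fields);
        }
        regions.removeIf(r -> !r.live);    // sweep whole regions at once
    }
}
```

Sweeping whole regions rather than individual objects keeps allocation as cheap as bumping a pointer and keeps surviving objects clustered together, the same data locality theme that runs through McKinley’s work.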
In 2011, she took a career twist and joined industry to more directly impact production systems. Today, McKinley’s primary responsibility at Google is improving cloud performance and efficiency. She jokes that she’s back where she started.
“My career is bracketed by optimizing shared memory parallel hardware. My first project at Rice was an interactive parallel programming tool that combined automatic parallelism and analysis with user editing. Now I help build tools and analyze the overall efficiency of Google’s parallel hardware and operating systems.”
“Our team built an operating system tracing tool that samples cloud performance at scale. Our trace collections (hundreds of thousands of traces from Google data centers) and visualization tools help our developers find and resolve performance problems such as tail latencies, so our systems perform better. One slow response can translate into a bad user experience.”
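A small sketch shows why those tails matter (my example, not Google’s tooling): with latency samples like these, the mean looks healthy while the 99th percentile, what the unluckiest users actually experience, is a hundred times worse.

```java
import java.util.Arrays;

// Hedged sketch (illustrative, not Google's tracing tools): why one slow
// response matters. The mean latency of these samples looks fine, while
// the 99th percentile is two orders of magnitude worse.
public class TailLatency {
    // Percentile by sorting; real trace collections at this scale would
    // use a streaming summary instead, but the idea is the same.
    static double percentile(double[] samples, double p) {
        double[] s = samples.clone();
        Arrays.sort(s);
        int idx = (int) Math.ceil(p / 100.0 * s.length) - 1;
        return s[Math.max(0, idx)];
    }

    public static void main(String[] args) {
        double[] ms = new double[10_000];
        for (int i = 0; i < ms.length; i++)
            ms[i] = (i % 50 == 0) ? 500.0 : 5.0;   // 2% of requests stall
        System.out.printf("mean %.1f ms, p50 %.1f ms, p99 %.1f ms%n",
                Arrays.stream(ms).average().orElse(0),
                percentile(ms, 50), percentile(ms, 99));
        // prints: mean 14.9 ms, p50 5.0 ms, p99 500.0 ms
    }
}
```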
While she surmounted the challenges of surging and waning parallel computing priorities, McKinley also dealt with a different kind of career obstacle: gender harassment. Based on her experience, she agreed to serve as the face for six anonymous stories of women who faced gender and sexual harassment in academic computer science departments and at conferences. Her February 2018 SIGARCH blog post, “What Happens to Us Does Not Happen to Most of You,” helped start deeper conversations and subsequent action in the CS academic community.
“But we still have a long way to go,” she said. “My blog post has been downloaded more than 30,000 times, and many computing academics expressed to me and others that they thought none of this was happening ‘in our backyard.’ In particular, they assumed nothing could have happened, recently or in the past, to a person of my stature and seniority in the community. One longtime colleague wept expressing how upset he was by what happened to us.”
Before her personal experience with harassment in the workplace, McKinley had already built several support networks, which she said are critical to helping individuals overcome negative experiences.
She became close friends with several other Rice women in the CS and the computational and applied mathematics Ph.D. programs. “Rice was pretty amazing in terms of the number of women in our field. Some of the fabulous women in our group included Mary Hall, Gina Goff, Marina Kalem, Rebecca Parsons, Linda Torczon and Virginia Torczon. We were not tokens at Rice.”
In 1991, the Computing Research Association (CRA) launched a Committee on the Status of Women in Computing Research (CRA-W). As McKinley and her friends began graduating and taking jobs in different cities in 1992, CRA-W provided additional allies for those women.
“When I was starting at UMass, I knew I needed more role models who were doing the same things. In 1993, Mary Hall and I attended the first CRA-W workshop for women considering or starting faculty positions in computing.
“At that workshop, a set of senior women mentored and sponsored me, including Susan Eggers, Mary Lou Soffa, Barbara Ryder, and Mary Jane Irwin. Mary Hall was a Stanford postdoc (and is now a professor at the University of Utah), and we formed peer networks with women at our same stage at that workshop.”
Having received and benefited from this type of career support, McKinley tries to help other women create their own support networks, both informally and as part of her volunteer activities on the CRA-W Board.
Like her volunteer and mentoring activities with CRA-W, a broad spectrum of technology challenges energize McKinley. “I really like collaborating on problems that span an entire complex system, like issues that need to be addressed in the programming language abstraction, the operating system and the hardware.
“It is not as fun for me to work on something by myself. A good day is when I collaborate with colleagues to solve a technical problem with great data and analysis. For example, understanding our infrastructure better, or figuring out how to do something we could not do before, so our customers have a better experience.”
Kathryn McKinley completed her B.A. in Computer Science at Rice in 1985, followed by her M.S. in 1990 and her Ph.D. in 1992. Her adviser was Ken Kennedy.