
Jiajia Li


Assistant Professor

Engineering Building II (EB2) 3320


Bio

Jiajia Li is an Assistant Professor in the Department of Computer Science at North Carolina State University (NCSU), Raleigh, NC. Her research emphasizes high-performance computing, with a focus on the interaction among applications, numerical methods, data structures, algorithms, automatic performance tuning, and computer architectures. She pursues high-performance sparse (multi-)linear algebra, solvers, and tensor decompositions for large-scale data analytics and domain applications on diverse computer architectures.

Jiajia Li was an Assistant Professor in the Department of Computer Science at the College of William & Mary (W&M), Williamsburg, VA, and a Research Scientist in the High Performance Computing group at Pacific Northwest National Laboratory (PNNL), Richland, WA, from 2018 to 2022. She received her Ph.D. (Aug. 2018) in Computational Science & Engineering from the Georgia Institute of Technology, advised by Professor Richard Vuduc. She has been recognized as a Rising Star in Computational and Data Sciences and has received a Best Student Paper Award and an IBM PhD Fellowship. Earlier, she was a research intern at the IBM Thomas J. Watson Research Center and at the Intel Parallel Computing Lab, in the summers of 2016 and 2015, respectively. She also received a Ph.D. (Jul. 2013) from the Institute of Computing Technology, Chinese Academy of Sciences, and a B.S. (Jul. 2008) in Computational Mathematics from Dalian University of Technology in the Accelerated Student Program (ranked 2/180).

Please feel free to send me an email at jiajia.li@ncsu.edu if you have questions about the CS Ph.D. program, research collaboration, or advice on research, careers, or international life.

Education

Ph.D. Georgia Institute of Technology 2018

Ph.D. University of Chinese Academy of Sciences 2013

B.S. Dalian University of Technology 2008

Area(s) of Expertise

Architecture and Operating Systems
Parallel and Distributed Systems
Scientific and High Performance Computing

Publications

View all publications

Grants

Date: 10/01/21 - 9/30/25
Amount: $249,473.00
Funding Agencies: National Science Foundation (NSF)

GPUs have become common in today's computing systems. However, it is challenging to efficiently map software applications to GPU architectures. Performance inefficiencies can hide deep in heterogeneous code bases, impeding applications from obtaining bare-metal performance. In this project, we will develop DrGPU to systematically study the performance inefficiencies in heterogeneous CPU-GPU systems with novel measurement, analysis, and optimization techniques.

Date: 10/01/22 - 9/30/24
Amount: $62,490.00
Funding Agencies: National Science Foundation (NSF)

This proposal targets performance optimization for sparse tensor networks on heterogeneous architectures. Our optimization spans the areas of high-performance computing, algorithms, runtime systems, compilers, and computer architecture: we propose compressed representations and data organization, load-balanced and memory heterogeneity-aware algorithms with memoization and intelligent data allocation, and the design of specialized accelerators. The whole infrastructure will be applied to scale diverse application scenarios.

Date: 08/19/20 - 9/30/24
Amount: $249,840.00
Funding Agencies: National Science Foundation (NSF)

Cloud environments employ various microservices and serverless functions to handle web or database requests. Although the cloud provides a uniform infrastructure for resource management, the entire cloud software stack can easily suffer from performance inefficiency. To address this issue, we will develop CloudProf. This project has the following goals. First, it will break the abstraction introduced by the runtime systems of managed languages for intra-application optimization. Second, it will identify problematic interactions across microservices for inter-service optimization. Third, it will break the abstraction introduced by virtual machines and containers for the optimization of the entire cloud software stack.

Date: 11/14/22 - 12/31/23
Amount: $73,633.00
Funding Agencies: US Dept. of Energy (DOE)

This project will develop efficient sparse tensor algorithms for hypergraphs on diverse computer platforms. Dr. Li's group will accelerate sparse tensor algorithms from two perspectives: sparse tensor memory representations and effective performance tuning methods. The group will develop novel data structures that take symmetry, hyper-sparsity, and high dimensionality into consideration, extending Li's prior work.
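For background only (this is not Dr. Li's actual data structure), a minimal sketch of the baseline coordinate (COO) format that compressed sparse tensor representations improve upon: only the nonzero entries of an N-way tensor are stored, as (index tuple, value) pairs. The class and method names here are illustrative.

```python
class CooTensor:
    """Baseline coordinate (COO) storage for an N-way sparse tensor.

    Only nonzero entries are kept, each as an index tuple plus a value.
    Real compressed formats go further (e.g., compressing shared index
    prefixes), but COO shows why sparse storage matters for
    hyper-sparse, high-dimensional data.
    """

    def __init__(self, shape):
        self.shape = shape    # e.g. (100, 100, 100) for a 3-way tensor
        self.indices = []     # list of index tuples, one per nonzero
        self.values = []      # corresponding nonzero values

    def insert(self, idx, val):
        assert len(idx) == len(self.shape), "index arity must match tensor order"
        self.indices.append(tuple(idx))
        self.values.append(val)

    @property
    def nnz(self):
        """Number of stored nonzeros."""
        return len(self.values)

    def density(self):
        """Fraction of cells that are nonzero; tiny for hyper-sparse tensors."""
        total = 1
        for d in self.shape:
            total *= d
        return self.nnz / total


# A hyper-sparse 3-way tensor: 2 nonzeros out of 1,000,000 cells,
# stored in O(nnz) memory instead of O(100^3).
t = CooTensor((100, 100, 100))
t.insert((0, 1, 2), 3.5)
t.insert((99, 0, 7), -1.0)
```

Dense storage for this example would need a million slots; COO needs only two entries, which is the basic economy that more advanced compressed, symmetry-aware formats build on.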


View all grants
Honors and Awards

  • The 39th IEEE International Conference on Computer Design (ICCD'21) Best Paper Award - 2021
  • Rising Stars for Women in Computational and Data Sciences - 2019
  • Principles and Practice of Parallel Programming (PPoPP'19) Best Paper Award Finalist - 2019
  • ACM/IEEE International Conference for High-Performance Computing, Networking, Storage, and Analysis (SC'18) Best Student Paper Award - 2018
  • IBM PhD Fellowship - 2017-2018
  • ZhuLiYueHua Award for the Excellent PhD Students of Chinese Academy of Sciences (Top 0.2%) - 2013
  • Xia Peisu Scholarship of Institute of Computing Technology (Top 1%) - 2011