Sian Jin is an assistant professor who joined the Department of Computer & Information Sciences in January. He earned his PhD in computer engineering from Indiana University in 2023, after receiving his bachelor’s in physics from Beijing Normal University. He has also held research assistant positions at Argonne National Laboratory (ANL) and, before that, at Los Alamos National Laboratory (LANL) from 2019 to 2020.
His research interests include high-performance computing (HPC) data reduction and lossy compression for improving the performance of scientific data analytics and management. Within the past five years, he has published more than 20 papers in top conferences and journals, including the Symposium on Principles and Practice of Parallel Programming, the International Conference on Very Large Data Bases and EuroSys, the conference of the European chapter of ACM SIGOPS (Special Interest Group on Operating Systems).
At the heart of your research, what is the big question you are trying to answer?
I am working on optimizing the use of supercomputers for large-scale problems.
How do you do that?
My approach involves collaborating with domain experts to identify challenges and develop software solutions that enhance performance across various scientific applications.
What unique approaches do you bring to your research?
My background in physics, combined with an interdisciplinary approach, enables me to deeply understand the unique challenges scientists face. That understanding allows me to integrate data reduction solutions seamlessly into scientific applications, significantly boosting their performance.
What recent breakthrough in your field has influenced your own research?
The development of large language models, such as ChatGPT, represents a significant breakthrough. These models pose unique challenges in terms of training and deployment, which my research addresses by developing frameworks that minimize data flow and enhance performance.
Are there specific faculty collaborations that you're excited to explore?
Yes, my research spans a variety of applications, opening up many collaboration possibilities within CST. For example, I am currently working on NIH proposals with Mindy Shi on sequence data compression and planning NSF and DOE proposals with Xubin He and external partners.
How do you plan to engage with research and industry partners to apply your research findings?
My collaborations with national laboratories like ANL, LANL and Lawrence Berkeley National Laboratory aim to enhance data reduction techniques for more efficient HPC application runs. I am also exploring opportunities with local industries to reduce AI model training costs and times.
In what ways do you hope your research will contribute to society at large?
First, my research aims to empower domain scientists with better supercomputing tools, resulting in more scientific discoveries, such as how our universe evolves and how to accurately forecast the weather. Second, my research is aimed at improving AI model training efficiency, thereby making large models more accessible and affordable.
What brought you to Temple University?
One of the biggest reasons is the supportive and friendly departmental environment at Temple, which I think is very important for new faculty like me. My personal preference for East Coast living also played a part.
What inspired or motivated you to pursue a career in science?
It has been a long-term goal of mine to pursue a career in science. Both of my parents are university faculty members and I was inspired by their dedication to advancing science. This instilled in me a deep appreciation for scientific inquiry and the desire to contribute to the field myself.