VAST lab at UCLA

The VAST lab at UCLA investigates cutting-edge research topics at the intersection of VLSI technologies, design automation, architecture, and compiler optimization at multiple scales, from micro-architecture building blocks to heterogeneous compute nodes and scalable data centers. Current focuses include architecture and design automation for emerging technologies, and customizable domain-specific computing with applications to multiple domains, such as image processing, bioinformatics, data mining, and machine learning.

Latest News

March 4, 2021

Computer Science Professor Jason Cong and his students Licheng Guo, Yuze Chi, Jie Wang, Jason Lau, and Weikang Qiao, in collaboration with Professor Zhiru Zhang and his student Ecenur Ustun at Cornell University, have received the Best Paper...

February 11, 2021

Linghao was selected as one of the four winners of the EDAA Outstanding Dissertations Award 2020. In his dissertation, he focused on architectures for deep learning and graph processing.

Linghao is a postdoctoral researcher under the...

February 11, 2021

Atefeh Sohrabizadeh is one of 15 winners of the Cadence Women in Technology Scholarship. Atefeh joined the Ph.D. program in UCLA Computer Science in Fall 2018. Her research interests lie in parallel architecture and programming. She is...

Latest Publications

[PDF]: Extending High-Level Synthesis for Task-Parallel Programs
Conference publication
Yuze Chi, Licheng Guo, Jason Lau, Young-kyu Choi, Jie Wang, and Jason Cong
[PDF]: FANS: FPGA-Accelerated Near-Storage Sorting
Conference publication
Weikang Qiao, Jihun Oh, Licheng Guo, Mau-Chung Frank Chang, Jason Cong
[PDF]: MOCHA: Multinode Cost Optimization in Heterogeneous Clouds with Accelerators
Conference publication
Peipei Zhou, Jiayi Sheng, Cody Hao Yu, Peng Wei, Jie Wang, Di Wu, Jason Cong
[PDF]: AutoSA: A Polyhedral Compiler for High-Performance Systolic Arrays on FPGA
Conference publication
Jie Wang, Licheng Guo, and Jason Cong
[PDF]: HBM Connect: High-Performance HLS Interconnect for FPGA HBM
Conference publication
Young-kyu Choi, Yuze Chi, Weikang Qiao, Nikola Samardzic, and Jason Cong
[PDF]: AutoBridge: Coupling Coarse-Grained Floorplanning and Pipelining for High-Frequency HLS Design on Multi-Die FPGAs
Conference publication
Licheng Guo, Yuze Chi, Jie Wang, Jason Lau, Weikang Qiao, Ecenur Ustun, Zhiru Zhang, Jason Cong
Optimal Layout Synthesis for Quantum Computing
Conference publication
Bochen Tan and Jason Cong
[PDF]: BLINK: Bit-Sparse LSTM Inference Kernel Enabling Efficient Calcium Trace Extraction for Neurofeedback Devices
Conference publication
Zhe Chen, Garrett J. Blair, Hugh T. Blair, Jason Cong
SACNN: Self-Attention Convolutional Neural Network for Low-Dose CT Denoising with Self-supervised Perceptual Loss Network
Journal publication
Meng Li, William Hsu, Xiaodong Xie, Jason Cong, and Wen Gao
2019 DAC Roundtable
Journal publication
Giovanni De Micheli, Antun Domic, Massimiliano Di Ventra, Martin Roettler, and Jason Cong

Our Projects

Quantum computing (QC) has been shown, in theory, to hold huge advantages over classical computing. However, many engineering challenges remain in implementing real-world QC applications. To divide and conquer, we split the task as below.


Heterogeneous computing with extensive use of accelerators, such as FPGAs and GPUs, has shown great promise to bring orders-of-magnitude improvements in computing efficiency for a wide range of applications. The latest advances in industry have led to highly integrated heterogeneous hardware...

Direction 1: Real-Time Neural Signal Processing for Closed-Loop Neurofeedback Applications. 

Recent work in this project received the Best Paper Award at ISLPED'18, and...

In the Big Data era, the volume of data is exploding, putting forward new challenges to existing computer systems. Traditionally, computer systems are designed to be computing-centric, in which data from I/O devices are transferred to and then processed by the CPU. However, the data...

In this project, we explore efficient algorithms and architectures for state-of-the-art deep-learning-based applications. In the first work, we are exploring learning algorithms and acceleration techniques for graph learning. The second work, Caffeine, offers a unified...

In the era of big data, many applications present significant computational challenges. For example, in the field of bioinformatics, the computation demand for personalized cancer treatment is prohibitively high for general-purpose computing technologies, as tumor heterogeneity...

To meet ever-increasing computing needs and overcome power density limitations, the computing industry has entered the era of parallelization, with tens to hundreds of computing cores integrated into a single...