VAST lab at UCLA

The VAST lab at UCLA investigates cutting-edge research topics at the intersection of VLSI technologies, design automation, architecture, and compiler optimization at multiple scales, from micro-architecture building blocks to heterogeneous compute nodes and scalable data centers. Current focuses include architecture and design automation for emerging technologies and customizable domain-specific computing, with applications to multiple domains such as image processing, bioinformatics, data mining, and machine learning.


Latest News

July 22, 2022

From July 10th to 14th, Prof. Jason Cong and members of the UCLA VAST lab attended the DAC 2022 conference and participated in a wide range of activities, including presenting three research papers and a tutorial and moderating a panel on the future...

July 7, 2022

The team led by Professors Jason Cong and Yizhou Sun from the CS Department was recently awarded $1.2M from the National Science Foundation (NSF) for the project entitled “High Level Synthesis via Graph-Centric Deep Learning”.


June 7, 2022

Licheng Guo was selected as one of the four winners of the 2022 Outstanding Graduate Student Research Awards by the UCLA CS department. Licheng is currently a fifth-year Ph.D. student under the supervision of Prof. Jason Cong. He received his...

Latest Publications

FPGA Acceleration of Probabilistic Sentential Decision Diagrams with High-Level Synthesis
Journal publication
Young-kyu Choi, Carlos Santillana, Yujia Shen, Adnan Darwiche, Jason Cong
OverGen: Improving FPGA Usability through Domain-specific Overlay Generation
Conference publication
Sihao Liu, Jian Weng, Dylan Kupsh, Atefeh Sohrabizadeh, Zhengrong Wang, Licheng Guo, Jiuyang Liu, Maxim Zhulin, Rishabh Mani, Lucheng Zhang, Jason Cong, Tony Nowatzki
Domain-Specific Quantum Architecture Optimization
Journal publication
Wan-Hsuan Lin, Bochen Tan, Murphy Yuezhen Niu, Jason Kimko, and Jason Cong
FPGA HLS Today: Successes, Challenges, and Opportunities
Journal publication
Jason Cong, Jason Lau, Gai Liu, Stephen Neuendorffer, Peichen Pan, Kees Vissers, Zhiru Zhang
Efficient Kernels for Real-Time Position Decoding from In Vivo Calcium Images
Conference publication
Zhe Chen, Jim Zhou, Garrett J. Blair, Hugh T. Blair, and Jason Cong
Energy Efficient LSTM Inference Accelerator for Real-Time Causal Prediction
Journal publication
Zhe Chen, Hugh T. Blair, Jason Cong
Improving GNN-Based Accelerator Design Automation with Meta Learning
Conference publication
Yunsheng Bai, Atefeh Sohrabizadeh, Yizhou Sun, and Jason Cong
Automated Accelerator Optimization Aided by Graph Neural Networks
Conference publication
Atefeh Sohrabizadeh, Yunsheng Bai, Yizhou Sun, and Jason Cong
Serpens: A High Bandwidth Memory Based Accelerator for General-Purpose Sparse Matrix-Vector Multiplication
Conference publication
Linghao Song, Yuze Chi, Licheng Guo, and Jason Cong
StreamGCN: Accelerating Graph Convolutional Networks with Streaming Processing
Conference publication
Atefeh Sohrabizadeh, Yuze Chi, Jason Cong

Our Projects

Quantum computing (QC) has been shown, in theory, to hold huge advantages over classical computing. However, many engineering challenges remain in the implementation of real-world QC applications. To divide and conquer, we split the task into the subtasks below.


Heterogeneous computing with extensive use of accelerators, such as FPGAs and GPUs, has shown great promise to bring in orders of magnitude improvement in computing efficiency for a wide range of applications. The latest advances in industry have led to highly integrated heterogeneous hardware...

Direction 1: Real-Time Neural Signal Processing for Closed-Loop Neurofeedback Applications.

The miniaturized fluorescence microscope (Miniscope) and tetrode assemblies are emerging techniques for observing the activity of large populations of neurons in vivo. They open up new research...

In the Big Data era, the volume of data is exploding, posing a new challenge to existing computer systems. Traditionally, computer systems are designed to be computing-centric: data from I/O devices is transferred to and then processed by the CPU. However, this data movement...

In this project, we explore efficient algorithms and architectures for state-of-the-art deep-learning-based applications. In the first work, we explore learning algorithms and acceleration techniques for graph learning. The second work, Caffeine, offers a unified...

In the era of big data, many applications present significant computational challenges. For example, in the field of bioinformatics, the computational demand for personalized cancer treatment is prohibitively high for general-purpose computing technologies, as tumor heterogeneity...

To meet ever-increasing computing needs and overcome power-density limitations, the computing industry has entered the era of parallelization, with tens to hundreds of computing cores integrated into a single...

Software Releases

Serpens is a high-bandwidth-memory (HBM) based accelerator for general-purpose sparse matrix-vector multiplication (SpMV). We build the Serpens accelerator on a Xilinx Alveo U280 card. Serpens achieves up to 60.55 GFLOP/s (30,204 MTEPS).
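For readers unfamiliar with SpMV: it computes y = A·x for a sparse matrix A, typically stored in a compressed format such as CSR. The following is a minimal software reference of that computation; it is only an illustrative sketch, not the Serpens implementation.

```python
def spmv_csr(values, col_idx, row_ptr, x):
    """Reference SpMV (y = A @ x) with A in CSR format.

    values  -- nonzero entries of A, row by row
    col_idx -- column index of each nonzero
    row_ptr -- row i's nonzeros live in values[row_ptr[i]:row_ptr[i+1]]
    """
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# 3x3 matrix [[2, 0, 1], [0, 3, 0], [4, 0, 5]] in CSR form:
values = [2.0, 1.0, 3.0, 4.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
print(spmv_csr(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```

The irregular, data-dependent accesses to x via col_idx are what make SpMV memory-bound and a natural fit for HBM-based acceleration.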

Pyxis collects open-source accelerator designs and the performance data.

Sextans is an FPGA accelerator for general-purpose Sparse-Matrix Dense-Matrix Multiplication (SpMM).
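For reference, SpMM generalizes SpMV to a dense right-hand-side matrix: C = A·B with A sparse and B dense. A minimal software sketch of the computation (illustrative only, not the Sextans design):

```python
def spmm_csr(values, col_idx, row_ptr, B):
    """Reference SpMM (C = A @ B) with A in CSR format and B dense.

    B is a list of rows; row col_idx[k] of B is scaled by each
    nonzero values[k] and accumulated into the output row.
    """
    n_rows = len(row_ptr) - 1
    n_cols = len(B[0])
    C = [[0.0] * n_cols for _ in range(n_rows)]
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            a = values[k]
            b_row = B[col_idx[k]]
            for j in range(n_cols):
                C[i][j] += a * b_row[j]
    return C

# Same 3x3 sparse A as in the SpMV example, times a dense 3x2 B:
values = [2.0, 1.0, 3.0, 4.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
B = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(spmm_csr(values, col_idx, row_ptr, B))
# [[3.0, 1.0], [0.0, 3.0], [9.0, 5.0]]
```

Unlike SpMV, each nonzero of A is reused across all columns of B, which gives SpMM more data reuse to exploit in hardware.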