VAST lab at UCLA

The VAST lab at UCLA investigates cutting-edge research topics at the intersection of VLSI technologies, design automation, architecture, and compiler optimization at multiple scales, from micro-architecture building blocks to heterogeneous compute nodes and scalable data centers. Current focuses include architecture and design automation for emerging technologies, and customizable domain-specific computing with applications to multiple domains, such as image processing, bioinformatics, data mining, and machine learning.

Latest News

August 1, 2018

The paper "CLINK: Compact LSTM Inference Kernel for Energy Efficient Neurofeedback Devices," co-authored by Zhe Chen, Andrew Howe, Hugh T. Blair, and Jason Cong, received the Best Paper Award at the International Symposium on...

May 2, 2018

From April 29 to May 1, Zhenyuan Ruan, Weikang Qiao, Jie Wang, Tianhe Yu, and Prof. Cong from the VAST lab, together with Dr. Zhenman Fang (a postdoctoral alumnus of the VAST lab), attended the 2018 International Symposium on Field-Programmable Custom Computing Machines (FCCM...

April 26, 2018

We are pleased to announce that Professors Jason Cong (CS) and Song-Chun Zhu (CS and Statistics) are part of the University of Virginia’s new $27.5M Center on Research in Intelligent Storage and Processing in Memory (CRISP)—one of six Joint...

Latest Publications

[PDF]: PolySA: Polyhedral-Based Systolic Array Auto-Compilation
Conference publication
Jason Cong, Jie Wang
[PDF]: SODA: Stencil with Optimized Dataflow Architecture
Conference publication
Yuze Chi, Jason Cong, Peng Wei, and Peipei Zhou
Computed tomography image enhancement using 3D convolutional neural network
Conference publication
M. Li, S. Shen, W. Gao, W. Hsu, and J. Cong
[PDF]: SMEM++: A Pipelined and Time-Multiplexed SMEM Seeding Accelerator for Genome Sequencing
Conference publication
Jason Cong, Licheng Guo, Po-Tsang Huang, Peng Wei and Tianhe Yu
[PDF]: CLINK: Compact LSTM Inference Kernel for Energy Efficient Neurofeedback Devices
Conference publication
Zhe Chen, Hugh T. Blair, Andrew Howe, Jason Cong
[PDF]: From JVM to FPGA: Bridging Abstraction Hierarchy via Optimized Deep Pipelining
Conference publication
Jason Cong, Peng Wei and Cody Hao Yu
[PDF]: Scaling for edge inference of deep neural networks
Journal publication
Xiaowei Xu , Yukun Ding, Sharon Xiaobo Hu, Michael Niemier, Jason Cong, Yu Hu, and Yiyu Shi
[PDF]: Functional Isolation of Tumor-Initiating Cells using Microfluidic-Based Migration Identifies Phosphatidylserine Decarboxylase as a Key Regulator
Journal publication
Yu-Chih Chen, Brock Humphries, Riley Brien, Anne E. Gibbons, Yu-Ting Chen, Tonela Qyli, Henry R. Haley, Matthew E. Pirone, Benjamin Chiang, Annie Xiao, Yu-Heng Cheng, Yi Luan, Zhixiong Zhang, Jason Cong, Kathryn E. Luker, Gary D. Luker & Euisik Yoon
S2FA: An Accelerator Automation Framework for Heterogeneous Computing in Datacenters
Conference publication
Cody Hao Yu, Peng Wei, Max Grossman, Peng Zhang, Vivek Sarkar, Jason Cong

Our Projects

Heterogeneous computing with extensive use of accelerators, such as FPGAs and GPUs, has shown great promise for bringing orders-of-magnitude improvements in computing efficiency to a wide range of applications. The latest advances in industry have led to highly integrated heterogeneous hardware...

Recent work from this project received the Best Paper Award at ISLPED'18.

Moore’s law has driven the exponential growth of information technology for more than 50 years, during which the ever-...

In the Big Data era, the volume of data is exploding, posing a new challenge to existing computer systems. Traditionally, computer systems are designed to be computing-centric: data from I/O devices are transferred to and then processed by the CPU. However, the data...

In this project, we explore efficient algorithms and architectures for state-of-the-art deep-learning-based applications. The first work, Caffeine, offers a unified framework to accelerate the full stack of convolutional neural networks (CNNs), including both convolutional layers and...
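As a minimal illustration of the computation such accelerators target (a generic sketch, not Caffeine's actual implementation; all sizes below are arbitrary examples), a convolutional layer is a deep loop nest over output channels and positions, which hardware accelerators tile, pipeline, and parallelize:

```c
/* Naive 2D convolution over multiple input/output channels: the loop
 * nest that CNN accelerators tile and pipeline. Illustrative sketch
 * only (valid padding, stride 1); sizes are arbitrary examples. */
#define IN_CH  2
#define OUT_CH 2
#define IN_H   4
#define IN_W   4
#define K      3
#define OUT_H  (IN_H - K + 1)
#define OUT_W  (IN_W - K + 1)

void conv2d(const float in[IN_CH][IN_H][IN_W],
            const float w[OUT_CH][IN_CH][K][K],
            float out[OUT_CH][OUT_H][OUT_W])
{
    for (int oc = 0; oc < OUT_CH; oc++)          /* each output channel  */
        for (int oh = 0; oh < OUT_H; oh++)       /* each output row      */
            for (int ow = 0; ow < OUT_W; ow++) { /* each output column   */
                float acc = 0.0f;
                /* reduce over input channels and the KxK window */
                for (int ic = 0; ic < IN_CH; ic++)
                    for (int kh = 0; kh < K; kh++)
                        for (int kw = 0; kw < K; kw++)
                            acc += in[ic][oh + kh][ow + kw]
                                 * w[oc][ic][kh][kw];
                out[oc][oh][ow] = acc;
            }
}
```

The inner reduction loops are the usual targets for unrolling and on-chip buffering, while the outer loops are tiled to fit FPGA memory.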

Many applications in precision medicine present significant computational challenges. For example, the computational demand of personalized cancer treatment is prohibitively high for general-purpose computing technologies, as tumor heterogeneity requires great sequencing depths,...

With increasing system complexity, the need for system-level design automation becomes ever more urgent. The maturity of high-level synthesis (HLS) pushes the design abstraction from the register-transfer level (RTL) to software programming languages like C/C++. However, the state-of-the-art...
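For context, an HLS tool accepts a plain C function such as the hypothetical kernel below and compiles it to RTL; the pragma shown follows Vivado HLS syntax and is purely illustrative (a standard C compiler simply ignores it):

```c
/* A hypothetical HLS-style kernel: ordinary C that an HLS tool would
 * synthesize to RTL. The pragma (Vivado HLS convention, shown for
 * illustration) asks for a pipelined loop with an initiation
 * interval of 1, i.e., one new iteration started per clock cycle. */
#define N 64

void vadd(const int a[N], const int b[N], int c[N])
{
    for (int i = 0; i < N; i++) {
#pragma HLS PIPELINE II=1
        c[i] = a[i] + b[i];
    }
}
```

The designer works at this C level; scheduling, binding, and interface generation are left to the HLS tool.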

http://www.cdsc.ucla.edu

To meet ever-increasing computing needs and overcome power density limitations, the computing industry has entered the era of parallelization, with tens to hundreds of computing cores integrated into a single...

Software Releases

Cloud-scale BWAMEM (CS-BWAMEM) is an ultrafast and highly scalable aligner built on top of cloud infrastructures, including Spark and the Hadoop Distributed File System (HDFS). It leverages the abundant computing resources in a public or private cloud to fully exploit the parallelism obtained from...

With the rapid evolution of CPU-FPGA heterogeneous acceleration platforms, it is critical for both platform developers and users to quantify the fundamental microarchitectural features of the platforms. We developed a set of microbenchmarks to evaluate mainstream CPU-FPGA platforms.

The...

PARADE is a cycle-accurate full-system simulation platform that enables the design and exploration of emerging accelerator-rich architectures (ARAs). It extends the widely used gem5 simulator with high-level synthesis (HLS) support.

...