Congratulations to Atefeh for receiving the 2024 Computer Science Graduate Student Award
In the past few decades, HLS tools were introduced to raise the abstraction level and free designers from delving into circuit-level architecture details. While HLS can significantly reduce the effort involved in hardware architecture design, not every HLS program yields optimal performance; designers must still articulate the microarchitecture best suited to the target application. This can lengthen design turn-around times, since there are more choices to explore at a higher level. Moreover, this limitation has confined the domain-specific accelerator (DSA) community primarily to hardware designers, impeding widespread adoption. Atefeh's research addresses this issue by synergizing customized computing and machine learning. Specifically, her work consists of two core parts: 1) customized computing for machine learning, exemplified by FlexCNN (a CNN accelerator) and StreamGCN (a GCN accelerator); and 2) machine learning for optimizing customized computing, through AutoDSE (a bottleneck-based optimizer), GNN-DSE, and HARP (HLS tool surrogates).