About me

I am a final-year undergraduate in the Turing Class at Peking University, majoring in computer science with a minor in mathematics. I am fortunate to be advised by Prof. Di He and Prof. Liwei Wang. In 2021, I worked as a remote research intern in Prof. Cho-Jui Hsieh's group at UCLA.

My research area is machine learning, with a particular interest in models and algorithms inspired by theoretical insights. Recently, my work has focused on neural-network-based PDE solvers and the Transformer architecture.

I will join CMU as a PhD student this fall!

Publications & Preprints

[1] Stable, Fast and Accurate: Kernelized Attention with Relative Positional Encoding (NeurIPS 2021) [PDF]
Shengjie Luo*, Shanda Li*, Tianle Cai, Di He, Dinglan Peng, Shuxin Zheng, Guolin Ke, Liwei Wang, Tie-Yan Liu

[2] Can Vision Transformers Perform Convolution? (Preprint) [PDF]
Shanda Li, Xiangning Chen, Di He, Cho-Jui Hsieh

[3] Learning Physics-Informed Neural Networks without Stacked Back-propagation (Preprint) [PDF]
Di He*, Wenlei Shi*, Shanda Li*, Xiaotian Gao, Jia Zhang, Jiang Bian, Liwei Wang, Tie-Yan Liu