Jiaxuan Wang
Me at the Grand Canyon, 2014

Hobbies


basketball (20+ years); guitar (4 years); violin (20+ years)

Skills


C++ (4 years); Python (7 years); Java (1 year); PyTorch (3 years)
I'm a fourth-year computer science and engineering PhD candidate at the University of Michigan, advised by Professor Jenna Wiens.

My current research focuses on model interpretability. However, I'm interested in a wide range of topics, including time series analysis, non-convex optimization, reinforcement learning, and sports analytics. My work is usually motivated by applications in healthcare.

I spent most of my pre-college years in Beijing. For the past 7 years, I've been living in Ann Arbor. I completed my undergrad at the University of Michigan as a computer science major and math minor in 2017, and started the PhD program in Fall 2017. As an undergrad, I worked with Professor Jia Deng on augmenting CNNs with rotation-invariant filters (2015-2016). I also worked as a machine learning intern at Bloomberg L.P. (2016) on parsing and ranking natural language queries for financial charts, under the supervision of Dr. Konstantine Arkoudas and Dr. Srivas Prasad. Last summer (2020), I worked as a research intern in the Adaptive Systems and Interaction Group at Microsoft Research, mentored by Scott Lundberg, on unifying Shapley-value-based model interpretation methods with a causal graph.

Publications

* denotes equal contribution
Shapley Flow: A Graph-based Approach to Interpreting Model Predictions
TL;DR: Don't choose between being true to the model and being true to the data: do both and get a system-level view.
Jiaxuan Wang, Jenna Wiens, Scott Lundberg
Proceedings of the 24th International Conference on Artificial Intelligence and Statistics (AISTATS), 2021
[paper][code]

AdaSGD: Bridging the gap between SGD and Adam
TL;DR: The speed of Adam and the performance of SGD can be achieved by adapting a single learning rate.
Jiaxuan Wang, Jenna Wiens
arXiv preprint, 2020
[paper][code]

Relaxed Weight Sharing: Effectively Modeling Time-Varying Relationships in Clinical Time-Series
TL;DR: Reducing temporal conditional shift with multi-task learning by treating each time step as a separate task.
Jeeheh Oh*, Jiaxuan Wang*, Shengpu Tang, Michael Sjoding, Jenna Wiens
Machine Learning for Healthcare, 2019
[paper][code]

Learning Credible Models
TL;DR: Learning models that are both accurate and aligned with human intuition, so that they don't rely on proxy variables.
Jiaxuan Wang, Jeeheh Oh, Haozhu Wang, Jenna Wiens
ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2018
[paper][code]

The Advantage of Doubling: A Deep Reinforcement Learning Approach to Studying the Double Team in the NBA
TL;DR: It is a bad idea to double-team star players like LeBron who can both pass and score.
Jiaxuan Wang*, Ian Fox*, Jonathan Skaza, Nick Linck, Satinder Singh, Jenna Wiens
MIT Sloan Sports Analytics Conference, 2018
[paper][code]

Learning to Exploit Invariances in Clinical Time-Series Data using Sequence Transformer Networks
TL;DR: A learnable data preprocessing module for clinical time-series data.
Jeeheh Oh, Jiaxuan Wang, Jenna Wiens
Machine Learning for Healthcare, 2018
[paper][code]

HICO: A Benchmark for Recognizing Human-Object Interactions in Images
TL;DR: A new image dataset focusing on who did what.
Yu-Wei Chao, Zhan Wang, Yugeng He, Jiaxuan Wang, Jia Deng
International Conference on Computer Vision (ICCV), 2015
[paper][data][code]

© Jiaxuan Wang