Vijay Sadashivaiah
I am a PhD candidate in Computer Science at Rensselaer Polytechnic Institute, advised by Professor James A. Hendler and Professor Pingkun Yan.
My research focuses on improving the interpretability of computer vision and multimodal neural networks (text, images, point-based data, etc.) and foundation models in critical applications. I am broadly interested in computer vision, deep learning, generative AI, and explainable AI.
Previously, I received my B.S. in Electrical Engineering from PES Institute of Technology, an M.S. in Computer Science from Rensselaer Polytechnic Institute, and an M.S. in Biomedical Engineering from Johns Hopkins University.
Along the way, I have been fortunate to be advised by wonderful mentors: Professor Sridevi Sarma at JHU, Dr. Qiang Chen and Dr. Kristen Maynard at LIBD, Professor Carl Petersen at EPFL, and Professor Achuta Kadambi at MIT.
Email / Resume / Google Scholar / Twitter / LinkedIn / GitHub
Explaining Chest X-ray Pathology Models using Textual Concepts
Vijay Sadashivaiah,
Mannudeep K Kalra,
Ronny Luss,
Pingkun Yan,
James A. Hendler
arXiv, 2024
paper
We propose to explain chest X-ray pathology models using textual concepts. This is achieved by leveraging the joint latent space of image and text in vision-language models.
Beyond Visual Augmentation: Investigating Bias in Multi-Modal Text Generation
Fnu Mohbat,
Vijay Sadashivaiah,
Keerthiram Murugesan,
Amit Dhurandhar,
Ronny Luss,
Pin-Yu Chen
TrustNLP at NAACL, 2024
paper
We evaluated the influence of bias on multimodal text generation models. In particular, we studied the impact of visual augmentation using state-of-the-art diffusion models when generating text.
To Transfer or Not to Transfer: Suppressing Concepts from Source Representations
Vijay Sadashivaiah*,
Keerthiram Murugesan*,
Ronny Luss,
Pin-Yu Chen,
Christopher R. Sims,
James A. Hendler,
Amit Dhurandhar (* equal contribution)
Transactions on Machine Learning Research (TMLR), 2024
paper
We propose to suppress user-determined, semantically meaningful concepts (e.g., eyeglasses, smiling) from intermediate representations in computer vision tasks.
Auto-Transfer: Learning to Route Transferrable Representations
Keerthiram Murugesan*,
Vijay Sadashivaiah*,
Ronny Luss,
Karthikeyan Shanmugam,
Pin-Yu Chen,
Amit Dhurandhar (* equal contribution)
International Conference on Learning Representations (ICLR), 2022
paper /
code
We introduce a multi-armed bandit-based representation routing method to improve transfer learning in computer vision tasks.
Improving Language Model Predictions via Prompts Enriched with Knowledge Graphs
Ryan Brate,
Minh-Hoang Dang,
Fabian Hoppe,
Yuan He,
Albert Meroño-Peñuela,
Vijay Sadashivaiah
Deep Learning for Knowledge Graphs Workshop at ISWC, 2022
paper
We propose to improve language model predictions by enriching prompts with knowledge from knowledge graphs.
SUFI: An automated approach to spectral unmixing of fluorescent multiplex images captured in mouse and postmortem human brain tissues
Vijay Sadashivaiah,
Madhavi Tippani,
Stephanie C Page,
Sang Ho Kwon,
Svitlana V Bach,
Rahul A Bharadwaj,
Thomas M Hyde,
Joel E Kleinman,
Andrew E Jaffe,
Kristen R Maynard
BMC Neuroscience, 2021
paper /
code
We developed an automated approach to spectral unmixing of fluorescent multiplex images.
Single-nucleus transcriptome analysis reveals cell-type-specific molecular signatures across reward circuitry in the human brain
Matthew N Tran,
Kristen R Maynard,
Abby Spangler,
Louise A Huuki,
Kelsey D Montgomery,
Vijay Sadashivaiah,
Madhavi Tippani,
Brianna K Barry,
Dana B Hancock,
Stephanie C Hicks,
Joel E Kleinman,
Thomas M Hyde,
Leonardo Collado-Torres,
Andrew E Jaffe,
Keri Martinowich
Neuron, 2021
paper /
code
We present a single-nucleus RNA-sequencing resource of 70,615 high-quality nuclei, used to generate a molecular taxonomy of cell types across five human brain regions.
KCNH2-3.1 mediates aberrant complement activation and impaired hippocampal-medial prefrontal circuitry associated with working memory deficits
Ming Ren,
Zhonghua Hu,
Qiang Chen,
Andrew E Jaffe,
Yingbo Li,
Vijay Sadashivaiah,
Shujuan Zhu,
Nina Rajpurohit,
Joo Heon Shin,
Wei Xia,
Yankai Jia,
Jingxian Wu,
Sunny Lang Qin,
Xinjian Li,
Jian Zhu,
Qingjun Tian,
Daniel Paredes,
Fengyu Zhang,
Kuan Hong Wang,
Venkata S Mattay,
Joseph H Callicott,
Karen F Berman,
Daniel R Weinberger,
Feng Yang
Molecular Psychiatry, 2019
paper
Modeling the interactions between stimulation and physiologically induced APs in a mammalian nerve fiber: dependence on frequency and fiber diameter
Vijay Sadashivaiah,
Pierre Sacré,
Yun Guan,
William S Anderson,
Sridevi V Sarma
Journal of Computational Neuroscience, 2019; EMBC, 2018; EMBC, 2017
paper 1 /
paper 2 /
paper 3 /
paper 4 /
code
We constructed mechanistic, stochastic, and functional models of a nerve fiber to quantify interactions between electrical stimulation and physiologically induced action potentials.
Voltage-sensitive dye imaging of mouse neocortex during a whisker detection task
Alexandros Kyriakatos,
Vijay Sadashivaiah,
Yifei Zhang,
Alessandro Motta,
Matthieu Auffret,
Carl CH Petersen
Neurophotonics, 2017
paper
We studied sensorimotor interactions in the mouse brain using voltage-sensitive dye imaging.