Sathyaprakash Narayanan

Hi there! I am an AI Researcher interested in Deep Learning, Neuromorphic Engineering, and Reinforcement Learning.

I have had the privilege to work as a Machine Learning Scientist II at Lytx Inc., as a Research Associate at the NeuRonICS Lab, DESE, Indian Institute of Science, Bangalore, and as a Teaching Assistant at Great Learning, Bangalore.

Along the way, I single-handedly authored and built sconce, an AutoML model compression and inference package.

Fast-forward to today: I'm an ECE graduate student at the University of California, Santa Cruz, advised by my mentor, Professor Jason Eshraghian.


Teaching at UCSC:

  • [TIM170]: Management of Technology Seminar - Fall 2024
  • [TIM170]: Management of Technology Seminar - Spring 2024
  • [TIM170]: Management of Technology Seminar - Winter 2024
  • [ECE103]: Signals and Systems - Spring 2023
  • [CSE30]: Programming Abstractions in Python - Winter 2023

CV /  Email /  Github /  Google Scholar /  LinkedIn /  Kaggle /  Twitter

Research Interests

My research interests broadly include, but are not restricted to:

  • Computer Vision
  • Deep Learning
  • Machine Learning
  • Efficient ML/DL Model Hardware Deployment
  • Computational Imaging
  • Signal Processing
  • Compressive Sensing
  • Neuromorphic Engineering
  • Event-Based Data


Recent Updates
  • [Jun, 2024] Selected as a Reviewer for WACV 2025
  • [Feb, 2024] Second version of sconce (model compression/inference package) released: __version__ = "Spider"
  • [Jan, 2024] First version of sconce (model compression/inference package) released: __version__ = "Ant"
  • [Jun, 2023] Selected as a Reviewer for WACV 2024
  • [May, 2023] Selected as a SIP 2023 Mentor
  • [Apr, 2023] Pushed the first version of the model compression package tomoco
  • [Jan, 2023] Selected as a Reviewer for WACV 2023
  • [Spring 2023] ECE103: Teaching Assistant for Signals and Systems Coursework at UCSC
  • [Winter 2023] CSE30: Teaching Assistant for Programming Abstractions: Python at UCSC
  • [Dec, 2021] Patent published: "Method and System for Recognizing Activities in Surrounding Environment for Controlling Navigation of Autonomous Vehicle"
  • [Jun, 2021] Paper accepted at ICIP 2021

Works

sconce: AutoML Model Compression Package
Sathyaprakash Narayanan

A one-stop solution for model compression, spanning NAS, pruning, quantization, layer fusion, and a sparse engine with CUDA optimizations for efficient inference.
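This is not sconce's actual API; below is a minimal sketch, in plain PyTorch, of two of the stages such a pipeline automates (magnitude pruning followed by post-training dynamic quantization). The toy model and the 50% sparsity level are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy model standing in for whatever network is being compressed.
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

# Stage 1 -- magnitude pruning: zero the 50% smallest weights per Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the sparsity into the weights

# Stage 2 -- post-training dynamic quantization: int8 weights for Linear layers.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 784)
print(quantized(x).shape)  # torch.Size([1, 10])
```

In a real pipeline these stages would be driven by an accuracy/latency objective (e.g., NAS or sensitivity scans) rather than fixed hyperparameters.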


Real-Time Object Detection and Localization in Compressive Sensed Video
Sathyaprakash Narayanan*, Yeshwanth Ravi Theja*, Venkat Rangan, Anirban Chakraborty, Chetan Singh Thakur
IEEE International Conference on Image Processing (ICIP), 2021

We propose the novel task of detecting and localizing objects directly on compressed frames, mitigating the need to reconstruct the frames and reducing the search rate by up to 20x (the compression rate). The proposed model achieves 46.27% mAP, and we demonstrate real-time inference on an NVIDIA TX2 embedded board at 45.11% mAP.


n-HAR: A Neuromorphic Event-Based Human Activity Recognition System using Memory Surfaces
Bibrat Ranjan Pradhan, Yeshwanth Ravi Theja, Sathyaprakash Narayanan, Anirban Chakraborty, Chetan Singh Thakur
IEEE International Symposium on Circuits and Systems (ISCAS), 2019

A system for human activity recognition from event-based camera data. We show that such tasks, which generally need high-frame-rate sensors for accurate predictions, can be achieved by adapting existing computer vision techniques to the spiking domain. We use event memory surfaces to make the sparse event data compatible with deep convolutional neural networks (CNNs), as sketched below. We also provide the community with a new dataset consisting of five categories of human activities captured in the real world without any simulation. We achieve an accuracy of 94.3% using event memory surfaces on our activity recognition dataset.
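As a rough illustration of the general idea (in the spirit of time surfaces, not the paper's exact formulation), the sketch below builds an exponentially decaying memory surface from raw (x, y, t) events; the sensor resolution and decay constant are assumed values.

```python
import numpy as np

H, W = 128, 128  # assumed sensor resolution
TAU = 50e-3      # assumed decay constant in seconds

def memory_surface(events, t_ref):
    """Collapse events (x, y, t), sorted by t, into an HxW map where each
    pixel holds exp(-(t_ref - t_last) / TAU) for its most recent event."""
    last_t = np.full((H, W), -np.inf)  # pixels with no events stay at -inf
    for x, y, t in events:
        last_t[int(y), int(x)] = t     # later events overwrite earlier ones
    # exp(-inf) = 0, so pixels that never fired decay to exactly zero.
    return np.exp(-(t_ref - last_t) / TAU)

# Toy usage: three synthetic events, surface referenced at t = 0.1 s.
ev = np.array([[4, 7, 0.02], [4, 7, 0.08], [60, 30, 0.05]])
s = memory_surface(ev, t_ref=0.1)
print(s.shape, float(s.max()))  # (128, 128) and the strongest pixel response
```

Frames built this way can be stacked and fed to an off-the-shelf CNN, which is what makes the sparse event stream compatible with standard vision pipelines.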


Real-Time Implementation of Proto-Object Based Visual Saliency Model
Sathyaprakash Narayanan, Yeshwanth Ravi Theja, Jamal Lottier, Ernst Niebur, Ralph Etienne-Cummings, Chetan Singh Thakur
IEEE International Symposium on Circuits and Systems (ISCAS), 2019

We demonstrate a real-time implementation of a proto-object based neuromorphic visual saliency model on an embedded processing board. Visual saliency models are difficult to implement in hardware for real-time applications due to their computational complexity: the conventional implementation requires a large number of convolution operations for filtering across several feature channels and multiple image pyramids. Our implementation also accounts for dynamic temporal motion changes by convolving along time efficiently through parallel processing.
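To see where the convolution cost comes from, here is a hedged sketch of one building block such models repeat across many feature channels: a center-surround (difference-of-Gaussians) response computed over an image pyramid. It illustrates the cost structure only, not the proto-object model itself; the sigmas and pyramid depth are assumed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def center_surround_pyramid(img, levels=3, sigma_c=1.0, sigma_s=4.0):
    """Per-level |center - surround| maps; each level halves resolution."""
    maps = []
    for _ in range(levels):
        # Two Gaussian filterings per level, per feature channel: this is
        # the convolution load that multiplies across channels and scales.
        dog = gaussian_filter(img, sigma_c) - gaussian_filter(img, sigma_s)
        maps.append(np.abs(dog))
        img = zoom(img, 0.5)  # descend one pyramid level
    return maps

img = np.random.rand(256, 256)  # stand-in for one intensity channel
for i, m in enumerate(center_surround_pyramid(img)):
    print(f"level {i}: {m.shape}")
```

Multiplying this per-channel, per-scale work across all feature channels is what makes parallel hardware implementations attractive.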


A Compressive Sensing Video Dataset Using Pixel-wise Coded Exposure
Sathyaprakash Narayanan, Yeshwanth Ravi Theja, Chetan Singh Thakur
arXiv, 2019

Vast amounts of video data are generated every minute, for purposes ranging from surveillance to broadcasting. Two roadblocks restrain us from using this data as such: storage, which limits how much information we can retain under hardware constraints, and the computation required to process it, which is prohibitively expensive. Compressive sensing (CS) is a signal-processing technique in which, through optimization, the sparsity of a signal is exploited to recover it from far fewer samples than the Shannon-Nyquist sampling theorem requires; recovery is possible under two conditions, sparsity and incoherence. In this work, we propose a new comprehensive video dataset in which the data is compressed using pixel-wise coded exposure, which resolves various other impediments.
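A hedged sketch of the sensing model: with pixel-wise coded exposure, each pixel integrates the scene over its own subset of frames, so a T-frame block collapses into a single coded image (a T-fold reduction in stored samples). The Bernoulli mask here is an illustrative assumption, not the dataset's actual code design.

```python
import numpy as np

T, H, W = 20, 64, 64             # frames per coded block, spatial size (assumed)
video = np.random.rand(T, H, W)  # stand-in for a T-frame video block

# Per-pixel binary exposure code: when each pixel integrates during the block.
code = (np.random.rand(T, H, W) < 0.5).astype(video.dtype)

# The sensor reads out one coded frame: the code-weighted sum over time.
coded_frame = (code * video).sum(axis=0)

print(video.size / coded_frame.size)  # 20.0 -> 20x fewer samples stored
```

Recovery (or, as in the ICIP 2021 work above, direct inference) then operates on `coded_frame` together with knowledge of `code`.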



Responsibilities
Reviewer, ISCAS 2024
Reviewer, WACV 2019, 2020, 2022, 2023, 2024
Reviewer, BMVC 2018
Reviewer, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)
SIP Mentor, 2023

Forked from Leonid Keselman's Jekyll fork of this page.