🎓 M.Tech in Artificial Intelligence & Machine Learning
🔬 AI Researcher (in transition) | LLMs | GenAI | MLOps | Multimodal | Responsible AI | ML System Optimization
☁️ 14+ Years of Experience in Cloud & DevOps | 🧠 Building AI Systems that Scale
“I don’t just build models. I build the systems — and the trust — that make AI research impactful.”
I’m an AI researcher in the making, blending 14+ years of Cloud & DevOps leadership with a deep curiosity about frontier AI.
I specialize in:
- 🤖 Fine-tuning & optimizing LLMs and transformer-based architectures (see the LoRA sketch after this list)
- 🧠 Exploring multimodal learning (language + vision)
- ☁️ Architecting infrastructure for large-scale distributed training
- 🧪 Building reproducible MLOps workflows for AI research
- ⚡ Optimizing ML systems for performance, efficiency & scalability
- ⚖️ Integrating principles of Fair, Interpretable & Trustworthy ML
With a strong engineering backbone, I thrive at the intersection of AI research, system design, and responsible innovation.
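As a small taste of the LLM fine-tuning work, here’s a minimal LoRA sketch using Hugging Face `transformers` and `peft`. It’s a sketch under placeholder assumptions: the base model (`gpt2`), the `domain_corpus.txt` file, and the hyperparameters are illustrative, not a real experiment setup.

```python
# Minimal LoRA fine-tuning sketch (assumes `transformers`, `peft`, and `datasets` are installed;
# the base model and dataset below are illustrative placeholders).
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model, TaskType
from datasets import load_dataset

model_name = "gpt2"  # placeholder; swap in the LLM being adapted
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the base model with low-rank adapters: only the adapter weights are trained.
lora_config = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16, lora_dropout=0.05)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Tokenize a small domain corpus (placeholder file name).
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True, max_length=512, padding="max_length")
    out["labels"] = out["input_ids"].copy()  # causal LM: predict the input shifted by one
    return out

dataset = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=4,
                           num_train_epochs=1, learning_rate=2e-4, logging_steps=10),
    train_dataset=dataset,
)
trainer.train()
model.save_pretrained("lora-out/adapter")  # saves only the adapter weights, typically a few MB
```

The nice property of this setup is that the frozen base model stays untouched; only the small adapter is versioned and shipped per domain.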
| Domain 🧠 | Focus Areas ✨ |
|---|---|
| 📚 LLMs & GenAI | Pre-training, fine-tuning, LoRA, RAG, evaluation, domain adaptation |
| 🖼️ Computer Vision | Vision Transformers (ViTs), representation learning, multimodal fusion (sketch below) |
| ⚡ ML System Optimization | Distributed training, model efficiency, quantization, serving, cost & latency tuning |
| 🧭 Responsible AI (FAccT) | Fairness, interpretability, transparency, explainability, bias mitigation |
| 🧪 Research Infrastructure & MLOps | Experiment tracking, scaling, reproducibility, containerized workflows |
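To make the multimodal fusion row concrete, here’s a minimal sketch of scoring image-caption alignment with a pretrained CLIP model (ViT image tower) via Hugging Face. The checkpoint, image path, and captions are examples, not outputs from my projects.

```python
# Sketch: joint text-image representation with a pretrained CLIP (ViT image tower).
# Assumes `transformers` and `Pillow` are installed; checkpoint and image path are examples.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # placeholder image
captions = ["a diagram of a neural network", "a photo of a cat", "a city skyline at night"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image: similarity of the image against each caption in the shared embedding space.
probs = outputs.logits_per_image.softmax(dim=-1)
for caption, p in zip(captions, probs[0].tolist()):
    print(f"{p:.3f}  {caption}")
```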
| 🧠 Project | 📝 Description | 🧰 Focus | 🧪 Stack |
|---|---|---|---|
| llm-finetune-lora | LoRA fine-tuning of LLMs for domain-specific tasks | LLM, NLP, Optimization | PyTorch · HuggingFace · LoRA |
| multimodal-ai-lab | Exploring joint learning from text & image inputs | Multimodal Learning | Transformers · OpenCV · PyTorch |
| fair-ml-evaluation | Building a pipeline to evaluate ML models for fairness & interpretability | Responsible AI (FAccT) | AIF360 · SHAP · scikit-learn |
| mlops-for-research | Reproducible experiment orchestration at scale (sketch below) | MLOps | MLflow · K8s · GitHub Actions |
| ml-system-optimization | Experiments with distributed training, quantization & inference acceleration | ML System Optimization | PyTorch · CUDA · AWS |
| distributed-training-infra | Cloud infra setup for distributed AI training | AI Infra | Terraform · Kubernetes · AWS Batch |
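For the distributed-training side, a minimal sketch of a PyTorch DistributedDataParallel loop meant to be launched with `torchrun`. The toy model and random tensors stand in for a real job on provisioned GPU infra.

```python
# Sketch: minimal multi-GPU data-parallel training loop with PyTorch DDP.
# Launch with:  torchrun --nproc_per_node=4 train_ddp.py
# The toy model and random data are placeholders for a real training job.
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")       # torchrun sets rank/world-size env vars
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = nn.Linear(512, 10).cuda(local_rank)   # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(100):                       # stand-in for a real data loader
        x = torch.randn(32, 512, device=local_rank)
        y = torch.randint(0, 10, (32,), device=local_rank)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()                           # gradients are all-reduced across ranks
        optimizer.step()
        if step % 20 == 0 and dist.get_rank() == 0:
            print(f"step {step}  loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```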
👉 (Flagship repos will be pinned as they mature — stay tuned.)
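And for the mlops-for-research direction (see the table above), a minimal sketch of the experiment-tracking pattern with MLflow. The tracking URI, experiment name, parameter values, metric numbers, and artifact path are all placeholders, not results from a real run.

```python
# Sketch: reproducible experiment tracking with MLflow (all names and values are placeholders).
import mlflow

mlflow.set_tracking_uri("http://localhost:5000")   # or a remote tracking server
mlflow.set_experiment("llm-finetune-lora")

with mlflow.start_run(run_name="lora-r8-lr2e-4"):
    # Log the knobs that define the experiment so any run can be reproduced.
    mlflow.log_params({"base_model": "gpt2", "lora_rank": 8, "learning_rate": 2e-4, "seed": 42})

    for step in range(3):                          # stand-in for a training loop
        mlflow.log_metric("train_loss", 2.0 - 0.1 * step, step=step)

    # Attach run outputs (placeholder path) so the artifact travels with the run metadata.
    mlflow.log_artifact("lora-out/adapter/adapter_config.json")
```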
- 🧭 14+ years designing scalable cloud & DevOps solutions for global enterprises.
- 🧠 M.Tech in AI/ML with a research focus on LLMs, Multimodal AI, ML System Optimization, and Responsible AI.
- 🤖 Expertise in model training, fine-tuning, optimization, and deployment at scale.
- ⚡ Specialized in distributed AI, system efficiency, and inference acceleration (see the quantization sketch below).
- ⚖️ Passionate about building fair, interpretable, and trustworthy ML systems.
- 🧪 Believer in open research, reproducibility, and engineering rigor.
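As a tiny example of the inference-acceleration work mentioned above: post-training dynamic quantization of linear layers with stock PyTorch. The toy MLP stands in for a real transformer block; this is a sketch, not a benchmark.

```python
# Sketch: post-training dynamic quantization of a model's linear layers (CPU inference).
# Uses only stock PyTorch; the toy model stands in for a real transformer block.
import torch
import torch.nn as nn

model = nn.Sequential(           # placeholder model; a real target would be an LLM or encoder
    nn.Linear(768, 3072),
    nn.GELU(),
    nn.Linear(3072, 768),
).eval()

# Convert Linear weights to int8; activations are quantized dynamically at runtime.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 768)
with torch.no_grad():
    out_fp32, out_int8 = model(x), quantized(x)

print("max abs diff:", (out_fp32 - out_int8).abs().max().item())
```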
- ✍️ Fine-tuning and adapting LLMs for specialized domains
- 🖼️ Multimodal AI: bridging language and vision
- ⚡ ML System Optimization: quantization, LoRA, distillation, serving, performance tuning
- 🧭 MLOps for reproducible research
- ⚖️ Fair, Interpretable, and Trustworthy ML (FAccT)
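And a sketch of the FAccT direction: a basic fairness + interpretability check with scikit-learn and SHAP on synthetic data. The demographic-parity gap is computed by hand here rather than through AIF360, purely for illustration; the data, model, and thresholds are not from a real evaluation.

```python
# Sketch: a basic fairness + interpretability check on a toy classifier.
# Uses scikit-learn, SHAP, and synthetic data; the demographic-parity gap is hand-rolled,
# not the AIF360 API, purely for illustration.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
group = rng.integers(0, 2, size=2000)            # synthetic protected attribute (0/1)
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=2000) > 0).astype(int)

features = np.column_stack([X, group])
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(features, y, group, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

# Fairness: demographic-parity gap = difference in positive prediction rates across groups.
gap = abs(pred[g_te == 1].mean() - pred[g_te == 0].mean())
print(f"demographic parity gap: {gap:.3f}")

# Interpretability: SHAP values show how much each feature (incl. the group) drives predictions.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_te[:200])  # log-odds attributions; shape may vary by version
print("mean |SHAP| per feature:", np.abs(np.asarray(shap_values)).mean(axis=0))
```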
I aspire to grow as an AI Researcher who codes —
someone who contributes to scientific advances while building the infrastructure and optimizations that make those advances scalable, efficient, and trustworthy in the real world.
⭐ “Great models are built twice — once in research, and once again in optimized, responsible systems.”

