
AI & Software Engineer

SWE Intern at DevDynamics · Final year AI & DS, VIIT Pune


I build AI systems that actually ship.

Available for opportunities


Backend-heavy engineer with a taste for AI systems.

I'm a full-stack engineer drawn to backend-heavy systems and the messy parts of AI — the places where correctness, latency, and reliability actually matter once real users show up. I care about shipping things that hold up in production, not just polished demos.

At DevDynamics I built an AST-based code analysis engine in TypeScript with Tree-sitter and AST-GREP, doing semantic impact analysis and dependency-graph construction across 19 languages for automated PR reviews. I then took the codebase context service from 4.5 GB peak RAM down to under 700 MB and from 2-minute cold starts to 30 seconds with a two-tier (LRU + DB-backed) cache. Alongside that: a production MCP server exposing engineering analytics to AI clients, and Stripe billing on RefactoAI.

Right now I'm looking for teams working on hard backend, distributed-systems, or applied-AI problems — places where I can own features end-to-end and make measurable dents in real systems.

How I work

I bias toward simple architectures that are easy to operate and extend.

I care about measurable outcomes — latency, reliability, and product impact.

I like owning the path from rough idea to shipped feature.

Currently

SWE Intern at DevDynamics, wrapping a B.Tech in AI & Data Science at VIIT Pune.

Highlights

Grand Finalist · Smart India Hackathon 2024

Top 5 nationally · AICTE, Ministry of Law & Justice

5× Hackathon Winner

Scalable cloud-native and AI-driven solutions across events

What I bring to the table.

Three areas where I've built real systems — not just polished demos.

Backend Systems

Distributed & reliable

Kafka-based microservices, REST and GraphQL APIs, database optimization, AWS infrastructure (EC2, S3, Lambda, ALB, VPC), Docker-based deployments, and CI/CD pipelines that actually hold up under load.
AI / LLM Engineering

Production-grade AI

RAG pipelines, LangChain and LangGraph workflows, AST-based semantic code analysis across 19 languages, MCP servers exposing AI-friendly APIs, and LLM output validation with schema enforcement.
Full-Stack Product

End-to-end ownership

Next.js and React frontends, Node.js and FastAPI backends, Stripe billing, real-time features, PostgreSQL and MongoDB data layers — built and shipped from zero to production.

Tech Stack

Languages

  • JavaScript
  • TypeScript
  • Python
  • C++
  • Golang

Tech

  • React.js
  • Next.js
  • Node.js
  • FastAPI
  • GraphQL

Data

  • PostgreSQL
  • MongoDB
  • Redis
  • Apache Kafka

Cloud/DevOps

  • AWS
  • GCP (Compute Engine)
  • Docker
  • GitHub Actions CI/CD
  • Linux
  • Bash

AI/ML

  • LangChain
  • LangGraph
  • Transformers
  • RAG

Where I've shipped.

Two roles, one throughline: measurable backend and AI infrastructure that survived contact with production.

  1. DevDynamics · Current

    Software Engineering Intern · Pune, India

    Jul 2025 — Present

    • Built an AST-based code analysis engine in TypeScript with Tree-sitter and AST-GREP — semantic impact analysis, dependency graphs, and symbol-level change tracking across 19 languages, powering automated PR reviews.
    • Cut peak RAM on the codebase context service from 4.5 GB to under 700 MB and processing time from 2 min to 30 sec via a two-tier (LRU + DB-backed) cache that persists parsed IRs across cold Lambda starts.
    • Shipped an end-to-end MCP server exposing engineering-analytics metrics to AI clients with secure auth, unlocking automated AI-driven workflows.
    • Integrated Stripe on RefactoAI for paid signups and recurring billing — plans, webhooks, proration, and grace-period handling.
  2. Indian Aviation Academy

    Software Development Intern · Remote · Delhi, India

    Jan 2025 — May 2025

    • Worked on Pradipti, a national internship portal serving 500+ interns; built an AI chatbot on FastAPI + Docker now handling 1000+ queries daily.
    • Built and maintained a responsive Next.js frontend and Node.js / FastAPI backends on PostgreSQL, reliably handling 300+ daily transactions.
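The two-tier cache behind the DevDynamics memory and cold-start wins is easy to sketch. This is a minimal, illustrative version (invented types and names, not the production code): a small in-memory LRU in front of a slower persistent store, written through so a fresh Lambda instance can rehydrate parsed IRs from tier two instead of re-parsing.

```typescript
// Hypothetical sketch of a two-tier (LRU + DB-backed) cache.
// `IR` stands in for a parsed intermediate representation of a file.
type IR = { symbols: string[] };

class TwoTierCache {
  // Map preserves insertion order, which gives a cheap LRU: the first
  // key is always the least recently used.
  private lru = new Map<string, IR>();

  constructor(private db: Map<string, IR>, private capacity = 100) {}

  get(key: string): IR | undefined {
    const hit = this.lru.get(key);
    if (hit) {
      // Refresh recency by re-inserting at the end.
      this.lru.delete(key);
      this.lru.set(key, hit);
      return hit;
    }
    // Tier 2: the persistent store survives cold starts.
    const fromDb = this.db.get(key);
    if (fromDb) this.set(key, fromDb); // promote back into the hot tier
    return fromDb;
  }

  set(key: string, value: IR): void {
    this.lru.set(key, value);
    this.db.set(key, value); // write-through keeps tier 2 authoritative
    if (this.lru.size > this.capacity) {
      const oldest = this.lru.keys().next().value as string;
      this.lru.delete(oldest); // evict from the hot tier only
    }
  }
}
```

Keeping the hot tier as an insertion-ordered Map makes eviction O(1), and the write-through means eviction never loses data, only speed.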

Selected work.

A small set of builds I'm proud of, focused on depth over volume. Each one solved a specific, non-trivial problem end-to-end.

01 / Project

Advista

AI competitive intelligence platform

Built for marketers who need real intelligence, not summaries. Runs async multi-source research in the background so the UI stays live, then delivers structured 7-section market reports tied to a persisted session history.

FastAPI · LangGraph · Groq · Celery · Redis · React · PostgreSQL · Prisma · Firebase · Docker · AWS Lambda

Links

Client · API · Live

Repos include the implementation details, architecture choices, and trade-offs behind the build.

  • LangGraph agentic workflow with Zod schema-enforced LLM extraction keeps every report section consistent and predictably structured across runs.
  • Celery + Redis task queue decouples Groq inference from the HTTP cycle; the client polls a job-status endpoint while workers handle multi-source synthesis.
  • Firebase Auth scopes each session and artifact to a verified user; containerized in Docker and deployed on AWS Lambda with Prisma/Postgres persistence.
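Schema-enforced extraction, as in the first bullet, comes down to refusing model output that doesn't match the expected shape. A minimal dependency-free sketch of the idea (the section names are illustrative; the real project uses Zod):

```typescript
// Hypothetical sketch: validate LLM JSON output against required report
// sections so every report is predictably structured across runs.
const REQUIRED_SECTIONS = ["overview", "competitors", "pricing"] as const;
type Report = Record<(typeof REQUIRED_SECTIONS)[number], string>;

function parseReport(raw: string): Report {
  const data = JSON.parse(raw) as Record<string, unknown>;
  for (const key of REQUIRED_SECTIONS) {
    // Reject missing or non-string sections instead of passing them through.
    if (typeof data[key] !== "string") {
      throw new Error(`LLM output missing section: ${key}`);
    }
  }
  return data as Report;
}
```

Failing loudly at the parse boundary means a malformed model response triggers a retry rather than a broken report reaching the user.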

02 / Project

ValueX

Wealth management AI microservice

A wealth management API built without frameworks: a raw intent classifier, 8 specialist agents, and disciplined cost control from the ground up.

FastAPI · Python · OpenAI · SSE · pytest

Links

GitHub


  • Single LLM call maps each query to 1 of 8 specialist agents; a pre-LLM safety guard blocks unsafe inputs in under 10ms before any inference runs.
  • SSE streaming delivers p95 first-token latency under 2s while holding a hard $0.05/query cost ceiling at gpt-4.1 pricing.
  • 100+ labeled eval queries with ≥85% routing accuracy and ≥95% safety recall; CI mocks the LLM layer so guard and routing logic are tested on every push.
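The pre-LLM guard pattern above is just a cheap deterministic check that runs before any model call, so blocked inputs never pay inference cost or latency. A hypothetical sketch (the blocked patterns are illustrative, not the real guard rules):

```typescript
// Hypothetical sketch of a pre-LLM safety guard for a finance assistant:
// regex screening runs in microseconds, long before any inference.
const BLOCKED = [/guaranteed returns/i, /insider trading/i];

function guard(query: string): { allowed: boolean; reason?: string } {
  for (const pattern of BLOCKED) {
    if (pattern.test(query)) {
      // Fail closed with the matched rule for logging and evals.
      return { allowed: false, reason: pattern.source };
    }
  }
  return { allowed: true };
}
```

Because the guard is deterministic, its recall can be measured exactly against a labeled eval set and asserted in CI without mocking anything.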

03 / Project

QueryPilot

Perplexity-style deep research assistant

Ask a question, get a sourced answer streamed in real time. Full-stack research engine with structured citations and persistent authenticated sessions.

TypeScript · Bun · Google Gemini · Tavily · PostgreSQL · Prisma · React · Express

Links

GitHub


  • Tavily advanced-depth search feeds a Gemini 2.5 Flash streaming pipeline; citation objects (title + URL) appended as structured JSON at stream end, not inlined.
  • bcrypt accounts with JWT auth; each query and its citation list stored in Postgres via Prisma, with full history queryable per user.
  • Monorepo on Bun: Express 5 API + React 19 / React Router 7 / Tailwind 4 client, both bundled natively with hot reload in dev.
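Appending citations as one structured event at stream end, rather than inlining markers into the answer text, keeps the token stream clean and the metadata machine-readable. A minimal sketch of that event shape (types and names are illustrative):

```typescript
// Hypothetical sketch of the streaming contract: answer tokens first,
// then a single structured citations event to close the stream.
type Citation = { title: string; url: string };
type StreamEvent =
  | { type: "token"; text: string }
  | { type: "citations"; items: Citation[] };

function* streamAnswer(
  tokens: string[],
  citations: Citation[],
): Generator<StreamEvent> {
  for (const text of tokens) yield { type: "token", text };
  // Appended once, at stream end, never interleaved with answer text.
  yield { type: "citations", items: citations };
}
```

The client renders tokens as they arrive and attaches sources when the final event lands, so partial answers never show half-formed citations.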

04 / Project

Microservice Notification System

Kafka-backed pub/sub notifications for e-commerce

A microservice notification platform engineered for 2,000+ msg/min sustained throughput, with priority delivery, dead letter handling, and real-time observability.

Node.js · TypeScript · GraphQL · Kafka · MongoDB · Redis · Docker

Links

GitHub · Live


  • Kafka consumer groups partition traffic by priority tier; failed deliveries route to dead letter queues with configurable retry and reprocessing.
  • GraphQL API with JWT auth exposes subscription management and delivery status without coupling consumers to Kafka internals.
  • Prometheus + Grafana dashboards track throughput, consumer lag, and error rate in real time for operational debugging.
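The retry-then-dead-letter policy in the first bullet is broker-agnostic at its core. A hypothetical sketch of that logic (names and the retry cap are illustrative; the real system wires this into Kafka consumer groups and DLQ topics):

```typescript
// Hypothetical sketch: bounded retries, then hand off to a dead-letter
// sink for later reprocessing instead of blocking the partition.
type Message = { id: string; payload: string; attempts: number };

async function handleWithDlq(
  msg: Message,
  deliver: (m: Message) => Promise<void>,
  deadLetter: (m: Message) => Promise<void>,
  maxAttempts = 3,
): Promise<"delivered" | "dead-lettered"> {
  while (msg.attempts < maxAttempts) {
    try {
      await deliver(msg);
      return "delivered";
    } catch {
      msg.attempts += 1; // transient failure: retry up to the cap
    }
  }
  await deadLetter(msg); // retries exhausted: park it for reprocessing
  return "dead-lettered";
}
```

Separating the policy from the transport keeps it unit-testable with plain async stubs, no running broker required.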

Let's work together.

If you have a role, project, or problem worth solving, send it over and I’ll reply directly.

Start a conversation

ayushjrathod7@gmail.com

The fastest way to reach me is email. I’m happy to discuss full-time roles, internships, and select freelance work.