Shyam Ajudia

From lab bench to production-grade data platforms

I build genomic infrastructure that scientists actually want to use. With 5+ years bridging bench science and cloud engineering, I've automated NGS pipelines that cut processing time in half, deployed production Kubernetes clusters for research workloads, and designed CI/CD systems that let biologists ship reproducible analyses without thinking about containers. My wet-lab background means I don't just deploy infrastructure—I understand the experiments running on it.

24/7 uptime

Reliability engineered

Self-healing hybrid clusters with automated failover keeping research platforms continuously available

1M+ samples

Scale proven in production

Sequencing and analysis workloads orchestrated end-to-end without sacrificing reproducibility or compliance

3k+ hours saved

Efficiency unlocked

Systematic workflow automation reclaimed the equivalent of two FTEs across data processing and deployment

Experience

Career progression from research to technical operations leadership

Roles span academic research, biotechnology, and healthcare IT. Each position strengthened the bridge between scientific discovery and scalable technical implementation.

Technical Operations Lead

Advanced Cellular Dynamics

2019 — 2024

Led molecular biology operations while building computational infrastructure and web-based scientific workflow tools for high-throughput research.

  • Built automated pipelines for large-scale genomic datasets, cutting turnaround time by 50% through workflow orchestration across cloud infrastructure
  • Deployed production NGS workflows on Kubernetes and AWS Batch via Nextflow Tower, managing TB-scale genomic data with Terraform-provisioned infrastructure
  • Developed containerized web applications with SvelteKit and FastAPI, including custom analysis tools and interactive dashboards that streamlined biological data workflows
  • Operated and maintained Illumina NGS platforms (iSeq 100, NextSeq 500), managing complete workflows from library preparation through data analysis
  • Mentored 4+ research associates across NGS, automation, and molecular biology workflows
  • Implemented LIMS integrations and ETL pipelines syncing instrumentation data with cloud analytics environments (a minimal sync sketch follows this list)
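
A minimal sketch of the instrument-to-analytics sync pattern referenced above; the file names, columns, and SQLite target are illustrative stand-ins for the real LIMS export and cloud warehouse:

    """Illustrative ETL sketch: sync per-sample run metrics into an analytics table."""
    import csv
    import sqlite3
    from pathlib import Path

    RUN_SUMMARY = Path("runs/2024-01-15_run_summary.csv")  # hypothetical LIMS export
    DB = Path("analytics.db")                               # stand-in for the cloud warehouse

    def sync_run_metrics(summary_csv: Path, db_path: Path) -> int:
        """Upsert per-sample run metrics so re-running the sync stays idempotent."""
        conn = sqlite3.connect(db_path)
        conn.execute(
            """CREATE TABLE IF NOT EXISTS run_metrics (
                   run_id TEXT, sample_id TEXT, yield_gb REAL, pct_q30 REAL,
                   PRIMARY KEY (run_id, sample_id))"""
        )
        with summary_csv.open(newline="") as handle:
            rows = [
                (r["run_id"], r["sample_id"], float(r["yield_gb"]), float(r["pct_q30"]))
                for r in csv.DictReader(handle)
            ]
        conn.executemany("INSERT OR REPLACE INTO run_metrics VALUES (?, ?, ?, ?)", rows)
        conn.commit()
        conn.close()
        return len(rows)

    if __name__ == "__main__":
        print(f"Synced {sync_run_metrics(RUN_SUMMARY, DB)} samples")

The production pipelines target the cloud analytics environment rather than SQLite, but the idempotent upsert is the pattern that keeps instrument output and analytics in agreement.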

IT Systems Support (Part-Time)

Allcare Medical Clinic

2025

Maintained healthcare IT infrastructure and supported EMR system implementation.

  • Managed network infrastructure and system reliability for clinical operations
  • Supported technology adoption through training and technical documentation

Research Assistant

University of Washington

2017 — 2018

Developed molecular biology techniques and data systems supporting neural development research in Drosophila genetics.

  • Designed and constructed expression plasmids with targeted transcription activation systems
  • Performed quantitative behavioral assays and data analysis for neuronal characterization
  • Configured laboratory data collection systems and automated analysis workflows

Education

B.S. Molecular Biology · University of Washington

Featured Projects

Hybrid services connecting lab work to cloud scale

Three core infrastructure services keep my platform running: the GitOps Kubernetes platform, this portfolio, and the immutable operating system that makes my personal desktop reproducible.

GitOps Kubernetes Platform

Self-hosted production Kubernetes built as a GitOps control plane for every service I run.

Problem

Needed production-grade infrastructure at home capable of sustaining public workloads without managed cloud safety nets.

Solution

Provisioned bare-metal cluster nodes, wired Flux CD pipelines, centralized secrets with Infisical, and automated lifecycle work via Renovate.

Stack

  • Kubernetes
  • Flux CD
  • Infisical
  • MetalLB
  • Renovate
  • Cloudflare Tunnels
Source code →

This Website

Astro + TypeScript portfolio deployed through the same GitOps platform it documents.

Problem

Required a modern portfolio that demonstrates platform engineering craft while running on my own infrastructure.

Solution

Built the site with Astro and TypeScript, containerized it with Docker, and wired GitHub Actions to ship updates into the Kubernetes cluster.

Stack

  • Astro
  • TypeScript
  • Docker
  • GitHub Actions
  • Kubernetes
  • Cloudflare Tunnels

Results

  • Self-hosted delivery on the GitOps Kubernetes platform
  • CI/CD automated from commit to production via GitHub Actions
  • Edge routing and TLS handled by Cloudflare Tunnels

Technical Challenges

  • Keeping container images lightweight for rapid GitOps rollouts
  • Orchestrating zero-downtime releases on a homelab control plane
Source code →

YamshyOS

Fedora Atomic image engineered with immutable infrastructure patterns for my daily workstation.

Problem

Wanted a reproducible desktop environment that keeps my personal workstation consistent with the same GitOps and supply-chain guarantees I trust elsewhere.

Solution

Used BlueBuild to compose a custom Fedora Atomic variant tailored to my desktop, layered rpm-ostree updates, and signed releases through Sigstore.

Stack

  • Fedora Atomic
  • BlueBuild
  • rpm-ostree
  • Podman
  • Sigstore

Results

  • Cryptographically signed system images for trustworthy personal installs
  • Atomic updates with instant rollback via rpm-ostree
  • GitOps configuration keeps my desktop converged with declarative state

Technical Challenges

  • Integrating Sigstore signing into an immutable image build pipeline
  • Balancing containerized workloads with read-only host constraints
Source code →

Skill Architecture

Capabilities organized by scientific outcome

Choose a lens—scientific computing, infrastructure, data, collaboration—to see how tooling decisions connect directly to scientific velocity and compliance.

Capabilities

Application-aligned Expertise

Each capability bundle maps to a measurable outcome: faster approvals, cheaper pipelines, tighter scientific feedback loops. Explore the focus areas driving that impact.

Scientific Computing

Production-grade compute pipelines transforming raw sequencer output into analysis-ready biology. Automated workflows keep terabyte-scale studies reproducible while matching the pace of translational research teams.

  • Orchestrated RNA/DNA-seq and variant calling workflows through Nextflow Tower on AWS Batch, taking high-throughput reads from Illumina instruments through to validated biological insights
  • Automated IC50, dose-response, and kinase inhibition analyses with Python/R pipelines featuring built-in statistical validation and quality control gates (a representative curve-fit sketch follows this list)
  • Trained scikit-learn models on experimental datasets to classify cellular phenotypes and predict compound efficacy, integrating ML predictions into research decision workflows (see the classifier sketch at the end of this section)
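
A minimal sketch of the curve-fitting pattern behind those analyses, fitting a four-parameter logistic model with a basic quality gate; the concentrations, responses, and R-squared threshold are made-up examples, not assay data:

    import numpy as np
    from scipy.optimize import curve_fit

    def four_pl(conc, bottom, top, ic50, hill):
        """Four-parameter logistic dose-response model."""
        return bottom + (top - bottom) / (1 + (conc / ic50) ** hill)

    conc = np.array([0.001, 0.01, 0.1, 1, 10, 100])       # uM, example dilution series
    resp = np.array([98.0, 95.0, 80.0, 45.0, 12.0, 5.0])  # % activity, example readout

    params, _ = curve_fit(four_pl, conc, resp, p0=[0, 100, 1.0, 1.0], maxfev=10000)
    bottom, top, ic50, hill = params

    # Simple QC gate: reject fits that explain too little of the variance.
    residuals = resp - four_pl(conc, *params)
    r_squared = 1 - np.sum(residuals**2) / np.sum((resp - resp.mean()) ** 2)
    if r_squared < 0.95:
        raise ValueError(f"Fit failed QC (R^2 = {r_squared:.3f})")

    print(f"IC50 = {ic50:.3g} uM, Hill slope = {hill:.2f}, R^2 = {r_squared:.3f}")
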
Core
Python, R, SQL, Nextflow, Bash, statistical modeling
HPC & Pipelines
AWS Batch, Nextflow Tower, nf-core workflows, Galaxy, Conda environments, Docker containerization, cloud-native computing
Analysis & Viz
GraphPad Prism, Python (pandas/scikit-learn), R, Plotly, Power BI, automated reporting, interactive dashboards
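
And a minimal sketch of the phenotype-classification pattern, with synthetic features standing in for real imaging or assay-derived measurements:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for per-cell features (e.g. morphology and intensity metrics).
    X, y = make_classification(n_samples=500, n_features=20, n_informative=8, random_state=42)

    clf = RandomForestClassifier(n_estimators=200, random_state=42)

    # Cross-validated accuracy gives an honest estimate before predictions
    # feed into downstream research decisions.
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"Mean CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")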

Contact

Scale your genomics infrastructure without losing scientific rigor

I partner with biotech, pharmaceutical, and research organizations to build production genomics platforms—combining 5+ years of molecular biology expertise with cloud-native infrastructure engineering.

Share your current challenge—whether it's scaling NGS workflows, modernizing lab infrastructure, or automating data pipelines—and I'll respond with practical next steps within two business days.

Focus areas

  • Genomics Platform Architecture: End-to-end NGS pipelines, Nextflow/nf-core workflows, AWS/Azure optimization, Kubernetes orchestration
  • Research Infrastructure Modernization: GitOps workflows, Terraform IaC, containerization, CI/CD automation, reproducible science
  • Laboratory Data Automation: Instrument integration, LIMS connections, ETL pipelines, automated analysis, real-time dashboards