"
I engineer security the same way high-scale teams engineer reliability — as a
system property designed, enforced, and continuously verified. Production
behavior must remain predictable even under adversarial pressure.
"
Information Security Analyst (IC-2) at ZEE Entertainment, building
security control planes across CI/CD,
artifact supply chains, Kubernetes runtime, and
GCP cloud governance — securing 350+ microservices so that
insecure paths become operationally hard and secure delivery is the default.
📍 Bengaluru, India · 🏢 InfoSec Analyst IC-2 @ ZEE · 🧩 DevSecOps · K8s · AppSec · Cloud · 🟢 Open to DevSecOps opportunities
Security Control Plane — All Systems Nominal
Anshumaan Singh is a security engineer — the person whose job is to make sure software companies don't get hacked, don't leak data, and can move fast without breaking security. He does this at ZEE Entertainment (a major Indian media company) for 350+ apps. The technical content here is proof of work for hiring managers and fellow engineers.
Most security teams react. Watch alerts. Triage tickets. Ship reports.
I build systems where the dangerous path is operationally hard
before anyone notices it exists.
At ZEE Entertainment I secured 350+ microservices from code commit to production —
not by reviewing more, but by making insecure releases
structurally impossible. No bypass paths. No rebuilds in prod.
No “we’ll fix it next sprint.”
I treat security the way reliability engineers treat uptime: with
invariants. If a control can be bypassed — it isn’t one.
If a gate produces noise — it erodes trust. If evidence doesn’t travel with
the release — it doesn’t exist. That’s the philosophy behind everything I ship.
350+ microservices in scope
6 industry certifications
2+ years at ZEE
3 cloud platforms secured
350+ Microservices secured end-to-end
100% CIS Kubernetes Benchmark
0 Production incidents on my watch
02
cat principles.md Engineering Philosophy
Why I'm focused here: Security engineering requires a philosophy — not just a checklist. These are the six rules I apply when designing every system or control.
Six principles tested under production pressure — not borrowed from a framework.
// principles
RULE_01
Shift Smart, Not Just Left
Security bolted onto a sprint is a tax. Built into the platform, it’s invisible — and teams ship faster because of it. The paved road must be the secure road.
RULE_02
Identity is the Control Plane
Network perimeters trust the packet. I trust the identity. Short-lived, OIDC-federated, cryptographically verifiable — unforgeable by design.
RULE_03
Guardrails over Gates
Gates block. Guardrails guide. One kills velocity — the other multiplies it. Build systems where the safe path is also the easiest path.
RULE_04
Detection as Code
An alert no one acts on is just log noise with extra steps. Every detection I ship maps to a playbook and a decision — not a Slack ping.
RULE_05
Evidence over Assertions
Don’t tell auditors you’re secure — show them. SBOM linked to commit. Scan output signed. Promotion gate logged. Theater out. Evidence in.
RULE_06
Risk-based, Not Fear-based
Not every critical CVE is worth blocking. Not every low is safe to ship. CVSS + EPSS + reachability = one decision: block, allow with evidence, or accept with expiry.
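The three-signal rule above can be sketched as a small decision function. This is a minimal illustration; the thresholds (CVSS ≥ 7.0, EPSS ≥ 0.1) are assumptions for the example, not the production values.

```python
# Hypothetical sketch of the CVSS + EPSS + reachability triage rule.
# Thresholds are illustrative assumptions, not real production tuning.

def triage(cvss: float, epss: float, reachable: bool) -> str:
    """Return one of: 'block', 'allow-with-evidence', 'accept-with-expiry'."""
    high_severity = cvss >= 7.0        # technical severity (CVSS)
    likely_exploited = epss >= 0.1     # exploitation probability (EPSS)

    if high_severity and likely_exploited and reachable:
        return "block"                 # all three signals agree: stop the release
    if high_severity or likely_exploited:
        return "allow-with-evidence"   # ship, but record the justification
    return "accept-with-expiry"        # low risk: accept now, revisit at expiry
```

Only the intersection of all three signals blocks a release; a scary CVSS score alone does not.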
03
git log --all What I Built
Why I'm focused here: I build security into the platform, not onto it. These are the actual systems I engineered — CI/CD control planes, supply chains, K8s guardrails, AppSec responses.
Real pipelines. Real enforcement. Built for 350+ services — not a conference slide.
// SECURE SYSTEM DESIGN — END-TO-END ARCHITECTURE FLOW
Each layer enforces a different security control. No single point of bypass. Evidence flows from commit to runtime.
05
Reference Architecture
Why I'm focused here: Architecture is where security either holds or breaks. I design systems where the pipeline itself enforces trust — from git commit to Kubernetes runtime, every hop is verified.
Source to prod. No untrusted artifact reaches runtime. Every transition has a gate and a receipt.
Pipeline Evidence Snapshot
SBOM / Attestation / Scan pipeline output
SBOM ✓
Proof-of-implementation artifact — real pipeline output, not a diagram.
// Every release is trustworthy — or it doesn't ship.
01
Source Governance
Pre-commit + signed commits + PR-only merges
CODEOWNERS + branch protection + required checks
Main branch as single source of truth
02
CI Verification
Ephemeral runners + deterministic security gates
SAST + SCA + Secrets + IaC enforced at build time
Quality gate: pass/fail with evidence artifacts retained
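The pass/fail gate with retained evidence can be sketched as a small aggregator over scanner output. The scanner names, result shape, and thresholds here are assumptions for illustration, not the actual gate policy.

```python
# Hypothetical gate evaluator: aggregates per-scanner findings and returns
# a pass/fail verdict plus an evidence record to retain with the build.

def evaluate_gate(results: dict, max_high: int = 0) -> dict:
    """results maps scanner name -> {severity: count}, e.g.
    {'sast': {'critical': 0, 'high': 1}, 'secrets': {'critical': 0}}."""
    criticals = sum(r.get("critical", 0) for r in results.values())
    highs = sum(r.get("high", 0) for r in results.values())
    passed = criticals == 0 and highs <= max_high
    return {
        "passed": passed,        # deterministic verdict: build exists or it doesn't
        "criticals": criticals,
        "highs": highs,
        "evidence": results,     # retained as a build artifact
    }
```

The point of the sketch: the verdict and the evidence are produced together, so a passing build always carries the proof of why it passed.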
Every layer adds enforcement. eBPF gives kernel-level visibility without modifying application code. Istio encrypts all east-west traffic. ArgoCD ensures only Git-approved state runs in production.
06
tail -f incidents.log Case Studies
Why I'm focused here: Real security engineers get called at 2 am. These are the incidents I resolved — what I detected, how I responded, and what I shipped to prevent recurrence.
Not hypotheticals. Not labs. These happened — here’s what I did.
Integrated into CI/CD with blocking thresholds and evidence retention.
✓ Improved coverage beyond static analysis
07
nmap --threat-model Threat Model
I model threats the way attackers think — not the way auditors check boxes. Kill-chain mapped. Response-ready. No time wasted on theoretical vectors that can’t actually be exploited.
EPSS scores sourced from FIRST.org. Trivy + Prisma cross-validated. Updated per pipeline run.
09
./score-risk.sh Risk Engine
Not every critical CVE should block a release. Not every low-severity is safe to ignore. This is how I separate signal from noise and keep engineering teams moving.
Engineering foundations: systems, networks, embedded systems, signal processing, and computing fundamentals.
Systems · Networking · Problem Solving
12
ls -la ~/repos Open Source Projects
Real repositories. Real security engineering. Every repo has commits. Every problem is documented.
// github.com/anshumaan-10
k8s-security-lab
Shell
10 real Kubernetes misconfigurations — each exploited end-to-end and documented with a hardening guide. A hands-on security workshop covering RBAC escapes, privileged pod exploits, host namespace attacks, and more.
Intentionally vulnerable Flask application built for the Kubernetes security lab. Demonstrates RCE chains, container escape paths, and host namespace attacks — purpose-built to be exploited and studied.
flask · vulnerable-app · RCE · container-escape · host-escape
Vulnerability Lab
★ Featured
k8s-lab-deployments
Shell
Production-pattern Kubernetes manifests, ArgoCD GitOps app definitions, and cluster setup scripts powering the k8s-security-lab. Demonstrates real-world deployment patterns and GitOps workflows.
kubernetes · argocd · gitops · manifests
GitOps + K8s
image-attestation-cosign
Dockerfile
Container image signing and attestation using Sigstore Cosign. Demonstrates the full supply chain integrity flow — sign the image, attach SBOM, verify before deploy. No trusted image without a verified signature.
cosign · sigstore · supply-chain · SBOM
Supply Chain Security
★ Featured
kyverno-policy-demo
YAML
Policy-as-code with Kyverno for Kubernetes admission control. Demonstrates how to block privileged pods, enforce image registries, require labels, and auto-mutate workloads — governance without manual review.
kyverno · policy-as-code · admission-control · OPA
Policy as Code
custom-secret-regex
Regex
Custom regex patterns for detecting org-specific secrets in CI/CD scanning pipelines. Handles Azure storage keys, internal API tokens, custom service credentials — beyond what default scanners catch.
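As an illustration of the pattern style (the actual org-specific patterns aren't public), here is a hypothetical detector for an invented internal token format: a fixed prefix followed by 32 hex characters.

```python
import re

# Hypothetical example pattern: an invented "zee_" prefix + 32 hex chars.
# Real org-specific token formats differ; this only shows the approach.
INTERNAL_TOKEN = re.compile(r"\bzee_[0-9a-f]{32}\b")

def find_secrets(text: str) -> list:
    """Return all candidate internal tokens found in the given text."""
    return INTERNAL_TOKEN.findall(text)
```

Word boundaries and a fixed-length hex body keep the pattern precise, which is what keeps a CI secrets gate low-noise.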
Why I'm focused here: Honest answers to questions I actually get — about what I do, how I think, and what I'm looking for next.
What does a typical week look like for you?
Security engineering sprints: reviewing CI/CD pipeline scan results, remediating high-priority CVEs with engineering teams, tuning SIEM detections, participating in design reviews for new microservices, and writing security runbooks. No two weeks are identical — that's the point.
What makes your DevSecOps approach different?
I treat security as a system property, not a review step. My pipelines don't "check" security — they structurally prevent insecure releases. The CI gate either passes or the build doesn't exist. No "we'll fix it next sprint." No bypass path.
Open to new opportunities?
Yes. I'm open to connecting with teams building security engineering at scale — DevSecOps, Kubernetes security, supply chain integrity, or cloud security roles. Remote-forward or Bengaluru-based. Reach out via email or LinkedIn.
How do you approach CVE triage?
CVSS alone is noise. I use a three-signal model: CVSS (technical severity) + EPSS (exploitation probability in the wild) + reachability (is the affected code path actually executed in my runtime?). Only the intersection of all three drives immediate action.
Do you do pen testing or just pipeline security?
Both. I've done web app, API, and mobile pen testing at ZEE Entertainment — auth bypass, access control, API misuse, and misconfiguration validation. I also built the DAST automation that runs post-deploy in CI/CD via OWASP ZAP with kubectl runtime URL discovery.
What's your stack for a greenfield security program?
Start with identity (OIDC, short-lived credentials). Layer in CI security gates (SAST + SCA + secrets scan). Add image scanning + SBOM before registry push. Enforce Kubernetes admission via OPA/Kyverno. Connect to SIEM. Everything else is config on top of those six primitives.
How do you manage security at scale across 350+ microservices?
Policy as code is the only way. You can't review 350+ services manually — you make the insecure path impossible at the platform level. I use Kyverno admission policies to block unsafe deployments, org-level GitHub Actions templates to standardize CI security gates, and a single evidence model (scan output + digest + approval) that every service inherits automatically. The controls live in the platform, not in per-service config.
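The single evidence model described above (scan output + digest + approval) could be sketched as one per-release record; the field names and fingerprinting scheme here are assumptions for illustration.

```python
from dataclasses import dataclass, asdict
import hashlib
import json

@dataclass(frozen=True)
class ReleaseEvidence:
    """Hypothetical evidence record that every service inherits:
    scan output + image digest + approval, keyed to one release."""
    service: str
    image_digest: str      # digest of the artifact being promoted
    scan_report_id: str    # pointer to the retained scanner output
    approved_by: str       # who signed off on the promotion

    def fingerprint(self) -> str:
        """Stable hash of the record, suitable for audit logs."""
        blob = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()
```

Because the record is one shared type, a new microservice gets the evidence model for free; nothing is configured per service.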
What's your approach to threat modeling — STRIDE, PASTA, or something else?
I use STRIDE for system-level threat enumeration during design reviews, and PASTA for risk-priority ranking in sprint planning. In practice, I focus on attack surfaces: credential paths (secrets, tokens, IAM), pipeline injection points (runner compromise, dependency substitution), and runtime escape vectors (privileged pods, host mounts, RBAC gaps). The framework is less important than covering the actual blast radius per component.
How do you handle security incidents at 2am?
Pre-built runbooks and a clear escalation tree — decisions made during incidents should be policy, not improvisation. I've built runbooks for secrets leakage (validate exposure → rotate → audit trail), container escape (isolate pod → revoke credentials → force redeploy), and RBAC violation (lockdown namespace → audit log review → policy update). The goal is to move from detection to containment in under 15 minutes.
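A toy sketch of that runbook lookup: the incident types and steps mirror the answer above, while the dispatch mechanics are an assumption for illustration.

```python
from typing import Optional

# Hypothetical runbook table mirroring the three incident types above.
RUNBOOKS = {
    "secrets-leak":     ["validate exposure", "rotate credential", "write audit trail"],
    "container-escape": ["isolate pod", "revoke credentials", "force redeploy"],
    "rbac-violation":   ["lock down namespace", "review audit logs", "update policy"],
}

def next_step(incident: str, completed: int) -> Optional[str]:
    """Return the next containment step, or None when the runbook is done
    (or the incident type has no runbook yet)."""
    steps = RUNBOOKS.get(incident, [])
    return steps[completed] if completed < len(steps) else None
```

Encoding the sequence as data is what makes a 2 am response policy rather than improvisation: the responder asks for the next step instead of deciding it.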
What's your view on eBPF and service mesh for cloud-native security?
eBPF is the most impactful new primitive for runtime security — kernel-level visibility with near-zero overhead, no agent sidecar needed. Falco + Cilium gives you both network policy enforcement and syscall-level threat detection. Istio mTLS ensures all east-west traffic is encrypted and identity-verified. Together they remove implicit trust from the data plane. I'm actively deepening expertise in Tetragon for process-level observability.
How do you measure the effectiveness of a DevSecOps program?
Four key metrics: (1) Mean Time to Remediate critical CVEs — should be hours, not sprint cycles. (2) Gate escape rate — how many high-risk changes slip through CI? Should be zero. (3) SBOM freshness — is every running artifact's dependency graph known and current? (4) Policy violation trend — are we blocking more insecure patterns or are teams routing around controls? Improving all four simultaneously means the program is actually working.
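The first metric, MTTR for critical CVEs, can be sketched as a simple computation over remediation events; the event shape (ISO-8601 `detected`/`remediated` timestamps) is an assumption for the example.

```python
from datetime import datetime

def mttr_hours(events: list) -> float:
    """Mean time to remediate, in hours, over closed critical-CVE events.
    Each event is assumed to carry ISO-8601 'detected' and 'remediated'
    timestamps; still-open events are excluded from the mean."""
    durations = [
        (datetime.fromisoformat(e["remediated"])
         - datetime.fromisoformat(e["detected"])).total_seconds() / 3600
        for e in events
        if e.get("remediated")
    ]
    return sum(durations) / len(durations) if durations else 0.0
```

Tracked per sprint, the trend matters more than any single value: hours-scale MTTR means the gates and runbooks are doing their job.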
18
curl medium.com/@anshu Writing & Publications
// articles fetched
// engineering insight
"Zero trust is not a product to buy — it's a mental model to apply. Every access decision should be made as if the network is already compromised."
— Anshumaan Singh
221 followers · 700+ claps · 10+ articles published · Security engineering depth — written for practitioners, not marketing