iOS Security for Mobile Engineers — Thinking Like a Defender

Security on iOS is risk management, not paranoia. This tutorial walks through threat modelling, the platform building blocks (Keychain, Data Protection, Secure Enclave, TLS, OAuth), and a six-step decision framework you can run on every feature — anchored to one example app, PocketPay.

Mahmoud Albelbeisi

A note before section 1

PocketPay — a small peer-to-peer payment app for iOS. Users sign up with a phone number, link a bank account, send money to friends, and check their transaction history. The app holds an auth token, a session, a 4-digit PIN, and uses Face ID for fast unlock. There's a balance display, push notifications for incoming payments, and a deep link that opens a pre-filled payment screen. We'll return to PocketPay in every section so one mental picture grows with the ideas, and one app's defences harden as the tutorial progresses.


1. The Mental Model

Security is risk management. It is not paranoia, and it is not a checklist you copy from somewhere else. Every choice you make as an iOS engineer either reduces risk, accepts risk, or buys nothing for the cost.

Three numbers sit behind every security decision:

  • Likelihood — how often this attack is tried, in your context.
  • Impact — how bad it is if the attack lands.
  • Cost of defence — engineering time, plus the drag on the product.

If the cost of defence is bigger than the loss it prevents, the defence is theatre. Take it out.

Think of it like… locks on your front door. A normal lock keeps casual people out. A bank vault keeps determined thieves out. You install the right lock for what's behind the door. A vault on a garden shed is theatre.

In software, this looks like… PocketPay protects the auth token in Keychain — cheap, big win. It does not run custom anti-debugging code — expensive, almost no win for a small consumer app. The cost-to-loss ratio drives the choice.

The biggest myth is "Apple handles security." It is half-true. Apple gives you world-class primitives — the Secure Enclave, Keychain, App Sandbox, App Transport Security. They are excellent. But Apple does not stop you from misusing them. Save your token to UserDefaults and the Secure Enclave never sees it.

The other half of the mental model is the principle of least privilege (give every component only the permissions it needs). Every entitlement, every permission, every shared file area is a door. Open as few doors as you can.

Carry this sentence in your head:

"Security is not a feature you add. It is a property your design either has or fails to have."

The diagram says one thing: every concrete decision sits downstream of those three inputs — likelihood, impact, and cost of defence. Without them, you are guessing.


2. The Threat Model — Who, What, How

A threat model (a short written list of what you protect, who attacks, and how) is the shortest path from "we should do security" to a useful answer. Most engineers have one in their head. They never write it down. That is the bug.

Walk through PocketPay's threat model with me. Four blocks: assets, attackers, attack surface, then the output table.

2.1 Assets — what the attacker wants

An asset (anything of value the attacker is after) for PocketPay:

  • The auth token — gets the attacker into the user's account.
  • The linked bank account number — enables fraud against the user's bank.
  • The transaction history — leaks who paid whom, when.
  • The phone number — the recovery vector for many other accounts.
  • The contact list (if accessed) — useful for phishing the user's friends.

Money, identity, and graph. Those are the three categories. Most apps have at least two.

Real situation: PocketPay's first threat-model meeting fits on a napkin. Five rows, two hours of arguing. The output is the cheapest piece of paper the team will ever produce — every later decision lands faster because of it.

2.2 Attackers — who they are and what they get

An attacker (any person or system trying to take an asset) is not a faceless hacker in a hoodie. It is one of these archetypes:

  • Curious roommate — five minutes alone with an unlocked phone.
  • Lost-phone finder — has the device, no passcode known. May try restore tricks or PIN brute force (automatically guessing every 4-digit code).
  • Network attacker on shared Wi-Fi — café, airport, conference. Can intercept and modify traffic. May try a man-in-the-middle (MITM) (an attacker who sits between client and server and forges traffic).
  • Compromised SDK / supply chain — a third-party SDK in the app updates and now exfiltrates data. The attacker never touched your code; they touched a dependency.
  • Malicious profile / MDM — a profile installed by a tricked user adds a rogue Certificate Authority. Now the attacker can MITM the user even outside cafés.

Each archetype has different cost to attack, different access, and different goals. Mixing them in one fix wastes time.

A real-world flavour: NSO Group's Pegasus — well-known iOS spyware delivered through zero-click exploits — is a distinct archetype again (state-level attacker). It is not most apps' threat. Knowing it exists helps you pick where you stop.

2.3 Attack surface — every door in the house

The attack surface (every place where untrusted input meets the app) is the door list. For PocketPay:

  • URL schemes and deep links — pocketpay://send?....
  • Push notifications with action buttons.
  • The pasteboard (the system clipboard) and the share sheet.
  • Keyboard extensions — third-party keyboards see what the user types.
  • Third-party SDKs linked into the binary.
  • Network responses — every JSON field is untrusted input.
  • Files in iCloud — backups, drive sync.

Each entry is one door. Each door needs one thought: who can knock, and what happens when they do.

2.4 The output — a one-page table

Quick preview — the table below has one column each for asset, attacker, attack, likelihood, impact, defence, and residual risk. Definitions:

  • Attack surface — every place untrusted input enters the app.
  • Asset — anything of value the attacker wants.
  • Likelihood — how often this attack is tried, in your context.
  • Impact — how bad the outcome is if it lands.
  • Defence — the specific control you apply.
  • Residual risk — what remains after the defence.

This is the deliverable. One row per realistic attack:

| Asset | Attacker | Attack | Likelihood | Impact | Defence | Residual |
| --- | --- | --- | --- | --- | --- | --- |
| Auth token | Network attacker | MITM on /login | Medium (public Wi-Fi) | High (full account takeover) | TLS + cert pinning | Pin rotation pain; corp proxy break |
| Auth token | Lost-phone finder | Read from disk | Low | High | Keychain + biometric | Biometric bypass on jailbroken device |
| Transaction list | Curious roommate | Open the app | High | Low (privacy only) | Biometric unlock on resume | Roommate sees app icon |
| Bank account | Compromised SDK | Exfiltrate over network | Medium | Critical | Privacy manifest review; SDK pinning; egress allowlist on backend | New SDKs always need a fresh pass |

After this section, you should be able to fill one in for your own app. If you cannot, the model is not done.

Then a quadrant of where PocketPay's risks fall:

The chart's job is to force an honest conversation. The plaintext token is screaming. The jailbreak detection bypass barely matters. Most teams spend their time on the wrong quadrant.

"A threat model you wrote down beats a threat model you remember, every time."


3. The Security Building Blocks on iOS

Each block follows the same shape so you can scan them: definition → analogy → PocketPay use → why it matters → Use it / Skip it / You pay → real situation → common tools.

3.1 Keychain

Keychain (the system store for small secrets, hardware-backed where the Secure Enclave is available) is where iOS stores passwords, tokens, and small keys. The OS encrypts the data with a key tied to the user's passcode and (on supported devices) the Secure Enclave.

Think of it like… a hotel safe in your room. Only you, with the right code, can open it. Other guests in the hotel cannot.

In software, this looks like… PocketPay saves the auth token to Keychain with kSecAttrAccessibleWhenUnlockedThisDeviceOnly. A full disk image off the phone cannot read it without unlocking.

Why it matters: a token in Keychain survives an attacker with offline access to the phone. A token in UserDefaults does not.

  • Use it when you store any secret (a small piece of data that must not leak — password, token, key) or credential (something proving identity, like a username/password pair).
  • Skip it when the data is large (photos, videos) or already encrypted by a higher-level system.
  • You pay a small API to learn (queries, attributes, error codes).
// 6-line Keychain save (simplified — real code checks the returned status)
let query: [String: Any] = [
    kSecClass as String: kSecClassGenericPassword,
    kSecAttrAccount as String: "auth_token",
    kSecValueData as String: Data(token.utf8),
    kSecAttrAccessible as String: kSecAttrAccessibleWhenUnlockedThisDeviceOnly
]
let status = SecItemAdd(query as CFDictionary, nil) // errSecDuplicateItem means update with SecItemUpdate instead
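Reading the item back is the mirror image. A minimal sketch, again simplified — production code should branch on every status code, not just success:

```swift
import Foundation
import Security

// Simplified Keychain read for the "auth_token" item saved above.
let readQuery: [String: Any] = [
    kSecClass as String: kSecClassGenericPassword,
    kSecAttrAccount as String: "auth_token",
    kSecReturnData as String: true,
    kSecMatchLimit as String: kSecMatchLimitOne
]
var result: AnyObject?
let readStatus = SecItemCopyMatching(readQuery as CFDictionary, &result)
if readStatus == errSecSuccess, let data = result as? Data {
    let token = String(decoding: data, as: UTF8.self)
    // use token
}
// errSecItemNotFound means the user must log in again — fail closed.
```

Note that if the item was saved with kSecAttrAccessibleWhenUnlockedThisDeviceOnly, this read fails while the device is locked; that is the protection working, not a bug.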

Don't confuse with… Keychain holds secrets (small structured items). Data Protection (next block) encrypts arbitrary files at rest. Different tools for different jobs.

Real situation: PocketPay v1 shipped the auth token in UserDefaults. The threat-model review caught it in 30 minutes. The fix was a 6-line Keychain wrapper. In production, the bug would have meant any backup-restore tool could read the token in plaintext.

Common tools: Keychain Services (system, C-style API), KeychainAccess (Swift wrapper, open source), SAMKeychain (older Objective-C wrapper).

3.2 Data Protection (file protection class)

Data Protection (file-level encryption tied to the device passcode) lets you mark each file with a protection class. The OS encrypts and decrypts at the right times.

Think of it like… filing cabinets in a building, each with a different unlock rule. Some open only at your desk; some open even when you are out at lunch.

In software, this looks like… PocketPay stores the encrypted transaction cache with NSFileProtectionComplete. The file is unreadable when the device is locked.

Why it matters: lost-phone attacks dissolve. The data is unreadable until the user unlocks.

  • Use it when you write any sensitive file to disk.
  • Skip it when the file is non-sensitive (icons, public images).
  • You pay edge cases: background tasks, push handlers, and pre-first-unlock contexts cannot read fully-protected files.

Real situation: PocketPay's push extension wants to badge the icon with the unread payment count. It cannot read the encrypted cache. The fix is a small unencrypted counter stored separately, which leaks no real signal.

Common tools: NSFileProtectionComplete, NSFileProtectionCompleteUntilFirstUserAuthentication (the default class for new files), NSFileProtectionNone.
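A minimal sketch of applying a protection class, assuming a hypothetical cache file name. The write option maps to NSFileProtectionComplete; the second call shows upgrading a file that already exists:

```swift
import Foundation

// Hypothetical encrypted-cache path inside the app sandbox.
let url = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("transactions.enc")
let payload = Data("encrypted-cache-bytes".utf8)

// Write so iOS keeps the file unreadable while the device is locked.
try payload.write(to: url, options: .completeFileProtection)

// Upgrade an existing file's class through file attributes.
try FileManager.default.setAttributes(
    [.protectionKey: FileProtectionType.complete],
    ofItemAtPath: url.path
)
```

Remember the push-extension edge case from above: anything the app must read before first unlock, or in the background while locked, cannot use the complete class.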

3.3 App Sandbox

App Sandbox (the OS-enforced fence that keeps each app's files and data away from other apps) is free. Every iOS app runs inside one.

Think of it like… every flat in a building has its own door. Your neighbour cannot walk into your kitchen, even though you share a hallway.

In software, this looks like… PocketPay cannot read another app's files. Another app cannot read PocketPay's files. The sandbox enforces the wall.

Why it matters: the OS does the bulk of inter-app isolation for you. Most apps never have to think about it.

  • Use it when always (it is on by default).
  • Skip it when you cannot — the sandbox is mandatory.
  • You pay the cost of using shared mechanisms (App Groups, Keychain access groups, document picker) when two apps from your team need to share data.

Don't confuse with… the App Sandbox isolates files. The Keychain access group (a shared key that lets two apps from the same team read the same Keychain items) shares secrets between sibling apps.

Real situation: PocketPay ships a Today widget. The widget needs the user's name to render. The team adds an App Group, writes the name (not the token) to a shared file, and the widget reads it. The token stays in the main app's Keychain.

Common tools: App Sandbox (system, mandatory), App Groups (shared file area for sibling apps), Keychain access groups (shared secrets across sibling apps).
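The widget scenario above, as a sketch. The group identifier is hypothetical and must match the App Groups entitlement in both targets:

```swift
import Foundation

// "group.com.pocketpay.shared" is a hypothetical App Group identifier.
let groupID = "group.com.pocketpay.shared"
guard let container = FileManager.default
    .containerURL(forSecurityApplicationGroupIdentifier: groupID) else {
    fatalError("App Group not configured in entitlements")
}

// Main app writes the display name (never the token) to the shared area…
let nameURL = container.appendingPathComponent("display_name.txt")
try Data("Sara".utf8).write(to: nameURL, options: .atomic)

// …and the widget reads it back.
let name = try String(contentsOf: nameURL, encoding: .utf8)
```

The design choice to notice: the shared container holds the least sensitive data that still makes the widget useful. The token never crosses the sandbox boundary.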

3.4 Secure Enclave

Secure Enclave (a small chip inside the processor that holds keys the OS itself cannot read) is where the strongest device keys live.

Think of it like… a vault inside a bank. Even bank staff cannot enter it. They can only ask the vault to do things for them.

In software, this looks like… PocketPay creates a private key bound to biometric unlock. The key never leaves the chip. The app asks the Enclave to sign a payload; the chip signs it after Face ID succeeds.

Why it matters: even if an attacker reads memory, dumps the disk, or owns root on a jailbroken device, the key inside the Secure Enclave is out of reach.

  • Use it when the key represents real authority (a signing key, a session-binding key).
  • Skip it when you only need to encrypt data at rest — Keychain plus Data Protection is usually enough.
  • You pay complexity (key creation flags, attestation), and the keys cannot be backed up or shared between devices.

Common tools: CryptoKit (modern Swift API for Secure Enclave keys), SecKeyCreateRandomKey with kSecAttrTokenIDSecureEnclave (low-level).
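A sketch of the CryptoKit path, under the assumption that the device has a Secure Enclave (the payload format is hypothetical):

```swift
import CryptoKit
import Foundation

// Create a P-256 signing key whose private half lives in the Secure Enclave.
let key = try SecureEnclave.P256.Signing.PrivateKey()

// The app never sees the private key; it only asks the Enclave to sign.
let payload = Data("transfer:200:to-alice".utf8) // hypothetical payload format
let signature = try key.signature(for: payload)

// What you keep: the public half goes to the server for verification,
// and the opaque dataRepresentation goes to Keychain so the key can be
// reconstructed on the next launch.
let publicKeyBytes = key.publicKey.rawRepresentation
let keyReference = key.dataRepresentation
```

Because the private key cannot leave the chip, it also cannot be backed up or migrated — exactly the trade-off listed under "You pay" above.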

3.5 LocalAuthentication (Face ID / Touch ID)

LocalAuthentication (the system framework for biometric unlock) gives you Face ID or Touch ID on supported devices. The OS handles the camera, the prompts, the failure modes.

Think of it like… unlocking a door with your face. Quick. No password to forget.

In software, this looks like… PocketPay calls LAContext.evaluatePolicy(...). If the user's face matches, the OS releases a Keychain item bound to biometric. That item decrypts the session blob (a session is server-side state that lets the client skip re-authentication for a while).

Why it matters: a strong, low-friction unlock keeps users in the app. It also gates sensitive actions.

  • Use it when you want a fast unlock or a fresh-auth gate before a sensitive action.
  • Skip it when the action is not worth a prompt (it costs the user a second every time).
  • You pay the biometry-changed (a flag the OS sets when face/fingerprint data changes, invalidating biometric-bound keys) edge case: if the user adds a new face, the key may become unusable until re-enrolled.

Don't confuse with… Authentication answers "who are you?". Authorisation answers "what may you do?". Face ID is authentication. Whether the server lets that user transfer $5,000 is authorisation.

Real situation: PocketPay requires fresh biometric for any transfer over $200. The OS prompt fires; Face ID succeeds; the app signs the transfer with a Secure Enclave key. The server checks the signature and only then debits the account.

Common tools: LocalAuthentication (system Face ID / Touch ID API).
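A minimal sketch of the unlock call (the function name is mine, not an API):

```swift
import Foundation
import LocalAuthentication

func unlockWithBiometrics(completion: @escaping (Bool) -> Void) {
    let context = LAContext()
    var error: NSError?

    // Check biometrics are available and enrolled before prompting.
    guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                                    error: &error) else {
        completion(false) // caller falls back to the PIN path
        return
    }

    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Unlock PocketPay") { success, _ in
        DispatchQueue.main.async { completion(success) }
    }
}
```

The reply block arrives on a private queue, hence the hop back to main before touching UI. And as § 3.5 says: this authenticates the user; it does not authorise anything — the server still decides what the user may do.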

3.6 TLS and App Transport Security (ATS)

TLS (Transport Layer Security) (the standard protocol that encrypts and authenticates network traffic between client and server) protects every HTTPS connection. App Transport Security (ATS) (an iOS rule that blocks plaintext HTTP unless you opt out per domain) makes it the default.

Think of it like… a sealed envelope between two post offices that already trust each other's stamp.

In software, this looks like… PocketPay uses https://api.pocketpay.example. URLSession refuses any plaintext call. The certificate chain is checked against the system trust store.

Why it matters: TLS by itself stops most casual network attackers. Turning ATS off is almost always a smell.

  • Use it when always.
  • Skip it when you have a written reason (a legacy domain you control), and you log it as tech debt with an exit date.
  • You pay the small cost of having no plaintext fallback.

Common tools: URLSession (system, ATS-aware), Network.framework / NWConnection (modern, lower-level).

3.7 Certificate pinning

Certificate pinning (refusing any TLS certificate the app was not built to trust, even if the system trust store accepts it) defends against rogue Certificate Authorities and corporate proxies that install their own root.

Think of it like… a post office that refuses letters from any stamp except a specific one — even other valid stamps.

In software, this looks like… PocketPay's URLSessionDelegate compares the server's public key against a pinned hash. Anything else fails.

Why it matters: the OS trust store has hundreds of CAs. Any of them, if compromised, can issue a certificate for your domain. Pinning closes that door.

  • Use it when the threat model includes a network attacker with a planted CA (corporate networks, untrusted countries, stolen MDM profiles).
  • Skip it when rotation pain outweighs the gain; a pin you forget to rotate locks every user out.
  • You pay rotation pain and a release-train dependency: you cannot rotate the cert faster than you can ship a new build.

Real situation: PocketPay pinned a single leaf certificate in v3. Three months later the cert expired. Every user got a hard error. Now the pinning code accepts either of two pinned public keys, and the team rotates one at a time. Same protection, no outage.

Common tools: URLSessionDelegate (native, manual), TrustKit (library with reporting).
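One simplified shape of the delegate, hashing the raw public-key bytes for brevity (production pinning usually hashes the full SubjectPublicKeyInfo, and should evaluate the trust chain with SecTrustEvaluateWithError first). The hex pins are placeholders, not real values:

```swift
import CryptoKit
import Foundation
import Security

final class PinningDelegate: NSObject, URLSessionDelegate {
    // Two pinned SHA-256 hashes of the server public key, so the team can
    // rotate one at a time. Placeholder values — compute your own pins.
    private let pinnedKeyHashes: Set<String> = ["aaaa…", "bbbb…"]

    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition,
                                                  URLCredential?) -> Void) {
        guard challenge.protectionSpace.authenticationMethod
                  == NSURLAuthenticationMethodServerTrust,
              let trust = challenge.protectionSpace.serverTrust,
              let key = SecTrustCopyKey(trust),
              let keyData = SecKeyCopyExternalRepresentation(key, nil) as Data?
        else {
            completionHandler(.cancelAuthenticationChallenge, nil) // fail closed
            return
        }

        let hash = SHA256.hash(data: keyData)
            .map { String(format: "%02x", $0) }
            .joined()

        if pinnedKeyHashes.contains(hash) {
            completionHandler(.useCredential, URLCredential(trust: trust))
        } else {
            completionHandler(.cancelAuthenticationChallenge, nil) // fail closed
        }
    }
}
```

Note the two-pin set: that is exactly the v3 lesson from the real situation above — one pin expiring must never lock every user out.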

3.8 OAuth 2.0 and OpenID Connect tokens

OAuth 2.0 (an authorisation protocol that issues short-lived tokens after a login) is the standard way to grant a client app access to a user's account. OpenID Connect (OIDC) (a thin identity layer on top of OAuth 2.0) adds an id_token so the app knows who the user is.

A token (access / refresh / ID) (a string that proves the bearer has permission) comes in three shapes:

  • access token — short-lived (minutes), used in every API call.
  • refresh token — long-lived (days/weeks), exchanged for new access tokens.
  • id_token — proves identity (used for sign-in).

Think of it like… a one-day pass to a gym (access), plus a membership card you show to get a new one (refresh).

In software, this looks like… PocketPay's access token expires every 15 minutes. The refresh token sits in Keychain with biometric protection. When the access token expires, the app uses the refresh token to get a new one.

Why it matters: short-lived access tokens limit the blast radius of any leak. Refresh tokens, kept securely, let the user stay logged in without re-typing their password.

  • Use it when you have any user authentication.
  • Skip it when you genuinely have no concept of users (rare).
  • You pay the protocol learning curve and the rotation logic.

Common tools: AppAuth-iOS (OAuth 2.0/OIDC client, well-maintained), ASWebAuthenticationSession (system in-app browser for OAuth), Sign in with Apple (native identity provider).
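A sketch of the refresh exchange with plain URLSession. The endpoint, client id, and field names are hypothetical — match them to your authorisation server (AppAuth-iOS does all of this for you):

```swift
import Foundation

struct TokenPair: Decodable {
    let access_token: String
    let refresh_token: String
}

// Exchange the refresh token for a fresh access token.
func refreshTokens(refreshToken: String) async throws -> TokenPair {
    var request = URLRequest(url: URL(string: "https://auth.pocketpay.example/token")!)
    request.httpMethod = "POST"
    request.setValue("application/x-www-form-urlencoded",
                     forHTTPHeaderField: "Content-Type")
    request.httpBody = Data(
        "grant_type=refresh_token&refresh_token=\(refreshToken)&client_id=pocketpay-ios".utf8
    )

    let (data, response) = try await URLSession.shared.data(for: request)
    guard (response as? HTTPURLResponse)?.statusCode == 200 else {
        throw URLError(.userAuthenticationRequired) // fail closed: force re-login
    }
    // Store the returned refresh token in Keychain before using the pair —
    // many servers rotate it on every exchange.
    return try JSONDecoder().decode(TokenPair.self, from: data)
}
```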

3.9 App Tracking Transparency, IDFA, and Privacy Manifests

Five related pieces sit in this block, so here they are one per line:

  • App Tracking Transparency (ATT) (a system prompt that asks the user before the app tracks them across other apps and websites) — the consent layer.
  • IDFA (Identifier for Advertisers) (an OS-issued ID for ad attribution) — what apps usually want when they ask for tracking.
  • Privacy manifest (a property-list file, PrivacyInfo.xcprivacy, that declares what data the app or SDK collects and why) — the disclosure layer.
  • App Store Connect privacy nutrition labels (the public summary on the App Store of what an app collects) — the user-facing contract.
  • Purpose string (the short sentence in Info.plist shown when the app asks for camera, contacts, photos, etc.) — the per-permission consent text.

Think of it like… ingredient labels on packaged food. The user reads what is inside before they pick the box.

In software, this looks like… PocketPay does not track. Its privacy manifest declares only the data needed to operate. ATT is never prompted because the app does not need IDFA.

Why it matters: privacy violations are a faster path to a one-star review than any feature is to five stars.

  • Use it when always — the privacy manifest is mandatory for many SDKs since 2024.
  • Skip it when you cannot — these are platform requirements.
  • You pay the discipline of writing every SDK's collected data into your privacy manifest.

Common tools: App Tracking Transparency (system), Privacy Manifests (system, PrivacyInfo.xcprivacy in the bundle), App Store Connect privacy labels (system, per submission).

3.10 OSLog and unified logging

OSLog (unified logging) (the system logging framework that redacts dynamic strings by default) is Apple's modern logger. It treats every interpolated value as private unless you mark it public.

Think of it like… a scribe who writes "[redacted]" by default and only writes the full text when you stamp the page "ok to publish."

In software, this looks like…

// 4-line OSLog with privacy specifiers
import OSLog
let log = Logger(subsystem: "com.pocketpay", category: "auth")
log.info("user \(userId, privacy: .private) signed in from \(country, privacy: .public)")

Why it matters: logs are how secrets leak. Crash reporters upload them. Bug reports include them. Default-redact is the only sane setting.

  • Use it when logging anything that could touch user data.
  • Skip it when logging genuinely public info (build version, app start).
  • You pay the cost of marking values .public when you actually want them visible — easy once it is in your habits.

Real situation: PocketPay v2 shipped with a print("transfer ok: \(token)") line. The token went straight to Crashlytics. The fix was OSLog with Logger().info("transfer ok") and never logging the token at all.

Common tools: OSLog / Logger (system, default-redact), Sentry (crash reporter, third-party — careful with PII), Crashlytics (Google, similar).

3.11 PocketPay's data at rest — where everything lives

Secrets in Keychain. Sensitive files under full protection. Public junk in default. UserDefaults holds nothing sensitive — ever.

3.12 PocketPay's biometric unlock — the sequence

If any step fails, the app falls back to PIN. If both fail, the session is locked until the next foreground.


4. The Security Postures — Archetypes for How Much Effort to Spend

"How much security is enough?" is the wrong question. Better: which posture (an overall stance about how much you trust the device, the network, and the user) fits this product, this stage, this market? Five postures cover most apps.

4.1 Minimum viable security

The shape: TLS, Keychain for tokens, biometric unlock, no fancy extras.

  • Wins when the product is early, the user's exposure is small, and engineering time is the bottleneck.
  • Hurts when real money or real PII (personally identifiable information — names, emails, IDs that can be tied to a real person) flows; users will notice.
  • Real example: most early-stage consumer apps follow this until traction proves out.
  • PocketPay stage: MVP with internal testers and a small daily transfer cap.
  • Common tools: Keychain, URLSession (TLS by default), LocalAuthentication.

4.2 Defence in depth

Defence in depth (layering multiple controls so any one failure does not breach the asset). Assume any one defence will fail. Add the next one anyway.

  • Wins when assets are valuable enough that one bug should not be the whole story.
  • Hurts when the team is too small to maintain layers without rotting them.
  • Real example: 1Password stacks Secret Key + master password + Keychain + E2EE backups. Signal layers protocol, key rotation, and disappearing messages.
  • PocketPay stage: once real money moves and the daily volume is over a few hundred dollars per user.
  • Common tools: any combination of Keychain, Data Protection, CryptoKit, URLSession pinning, LocalAuthentication.

4.3 Zero trust on the device

Zero trust on the device (treat the device and client as untrusted; the server validates every sensitive action). The app is a thin shell over server-enforced rules.

  • Wins when the product cannot tolerate a stolen-device or jailbroken-device disaster.
  • Hurts when every action needs a round-trip; latency rises.
  • Real example: most major retail banking apps. The client never decides if a transfer is allowed; the server does, every time.
  • PocketPay stage: once stolen-device fraud shows up as a real loss line item.
  • Common tools: OAuth 2.0 with short access tokens, biometric step-up before sensitive actions, server-side fraud scoring.

4.4 End-to-end encryption (E2EE)

End-to-end encryption (E2EE) (only the sender's and recipient's devices can read the content; the server only sees ciphertext). The server is part of the threat model.

  • Wins when message content or asset content must be private from the operator itself.
  • Hurts when the team has to build search, backup, multi-device, or moderation features — all of which become hard.
  • Real example: WhatsApp (Signal Protocol since 2016), Signal, iMessage.
  • PocketPay stage: if PocketPay ever adds messaging or attachments. Probably not for transfers.
  • Common tools: CryptoKit (modern Swift), libsodium (third-party, well-audited), the Signal Protocol as a reference.

The server is a delivery pipe. It never holds plaintext.

4.5 Hostile environment / jailbreak-aware

The shape: the app actively detects jailbreak (modifications that remove iOS sandbox restrictions on a device) signs, hooks, and runtime tampering (changing app behaviour at run time, e.g., via Frida hooks), and refuses or degrades.

  • Wins when the threat model includes high-skill attackers and high losses (banks in markets with active fraud rings).
  • Hurts when the team treats detection as a defence; it is a signal, bypassable in ~20 minutes.
  • Real example: mobile banking apps in markets with high mobile-fraud rates invest in this layer because the loss math forces them to.
  • PocketPay stage: probably never. For a small consumer app, this is theatre.
  • Common tools: DTTJailbreakDetection-class libraries (client-side, bypassable), server-side device-attestation services (stronger, more complex).

The chart says one thing: defence in depth is the sweet spot for most apps. Jailbreak-aware is rarely worth its cost.

"Pick the posture your threat model justifies, then revisit when the model changes."


5. The Decision Framework

The repeatable procedure. Run it every time you design or review a feature. Six steps, because six fits in the head.

  1. Name the asset. What is being protected? Money, identity, contacts, location.
  2. Name the attackers. Who realistically wants it? Pick from the archetypes in § 2.
  3. Map the attack surface. What inputs and channels touch this asset?
  4. Pick the dominant risk. The one that, if exploited, ends the product (not just the feature). Most features have one. Find it.
  5. Choose the smallest defence that closes it. Reach for system primitives first. Custom crypto last. If you are inventing crypto, you are doing it wrong.
  6. Decide what residual risk you accept, write it down, and revisit when the threat model changes (new feature, new SDK, new market).

Worked example: PocketPay's Send Money button

  1. Asset. The user's balance and (downstream) their bank account.
  2. Attackers. Lost-phone finder, network attacker on shared Wi-Fi, compromised SDK.
  3. Attack surface. The button itself; the /transfer endpoint; the deep link pocketpay://send?...; push notifications acknowledging payment.
  4. Dominant risk. A stolen-but-unlocked phone making a transfer in five seconds.
  5. Smallest defence that closes it. Fresh biometric required for any transfer over $200. The biometric gate uses a Secure Enclave key bound to Face ID; the server checks the signature on the request. Step-up authentication (asking the user to authenticate again right before a sensitive action) turns this from "easy" into "needs a face."
  6. Residual. Roommate-with-face during sleep is still a risk. We accept it for now and document it. We do not accept it for transfers over $1,000 — those require PIN, which a sleeping user cannot give.

The output is a one-page note saved next to the feature spec. It says: who attacks, what we picked, what we accepted. Code reviewers refer back to it. New joiners refer back to it. The future you, six months later, refers back to it.

One decision the framework forces: fresh biometric for transfers ≥ $200. One decision it explicitly leaves: the in-app history is still readable after one biometric unlock — the curious-roommate cost is real but small, and the team accepts it.
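The step-up gate from the worked example can be sketched like this, with the two thresholds the framework produced. promptForPIN is a hypothetical helper, and the real enforcement also happens server-side — this gate only decides when to ask:

```swift
import Foundation
import LocalAuthentication

// Thresholds from the worked example; the server re-checks every transfer.
let biometricThreshold: Decimal = 200
let pinThreshold: Decimal = 1_000

func authorise(transfer amount: Decimal,
               completion: @escaping (Bool) -> Void) {
    if amount >= pinThreshold {
        // Hypothetical PIN prompt — a sleeping user cannot provide this.
        promptForPIN(completion: completion)
    } else if amount >= biometricThreshold {
        let context = LAContext()
        context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                               localizedReason: "Confirm transfer") { ok, _ in
            completion(ok) // any failure or error means deny — fail closed
        }
    } else {
        completion(true) // small transfers ride the existing session
    }
}
```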

"The framework's job is not to eliminate risk. It is to make every accepted risk a written, deliberate one."


6. Three More Worked Examples

6.1 Storing a long-lived API key in the app

The wrong instinct: "we'll obfuscate it." Obfuscation (making code or data hard to read but still recoverable) slows attackers a few minutes; it does not stop them. Anything in the IPA is public; an attacker dumps the binary in fifteen minutes.

The right instinct: do not ship one. Stand up a small backend proxy. The app authenticates the user; the backend holds the API key; the app never sees it.

Common tools: AWS Secrets Manager, HashiCorp Vault, GCP Secret Manager (backend secrets stores). For the app: build-time injection via xcconfig or CI variables — never hardcoded.

6.2 Deep-link payment confirmation

The trap: an attacker crafts pocketpay://send?to=evil&amount=200&confirm=1. They host it on a website. The user taps; the app launches and silently sends. This shape is called deep link hijacking (another app or attacker intercepts a deep link meant for your app) when paired with a shared URL scheme (a deep link prefix any app can register).

Lessons: never auto-execute on link arrival. Always require fresh user confirmation. Prefer Universal Links (deep links bound to a domain you own and verified by Apple via the Associated Domain entitlement) over URL schemes.

Don't confuse with… URL scheme — anyone can register the same name; not a trust boundary. Universal Link — verified by Apple against a domain file. Different security guarantees.
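A sketch of the safe shape: parse the link into a value, validate everything, and hand it to a confirmation screen. Type and function names are mine; the point is what the code refuses to do:

```swift
import Foundation

struct PaymentRequest {
    let recipient: String
    let amount: Decimal
}

// Parse pocketpay://send?to=…&amount=… into a value the UI can confirm.
// Returns nil for anything malformed — and never executes the transfer itself.
func parseSendLink(_ url: URL) -> PaymentRequest? {
    guard let components = URLComponents(url: url, resolvingAgainstBaseURL: false),
          components.host == "send",
          let to = components.queryItems?.first(where: { $0.name == "to" })?.value,
          let raw = components.queryItems?.first(where: { $0.name == "amount" })?.value,
          let amount = Decimal(string: raw), amount > 0
    else { return nil }
    // The attacker's confirm=1 parameter is deliberately ignored: arrival of
    // a link is never consent.
    return PaymentRequest(recipient: to, amount: amount)
}
```

The caller shows a pre-filled confirmation screen; only a fresh tap on Send, followed by the normal biometric step-up, moves money.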

6.3 Pasteboard reading on app foreground

The 2020 story: TikTok, LinkedIn, and Reddit were caught reading the pasteboard on every keystroke or every foreground. iOS 14 added a small toast — "Pasted from <App>" — and the discovery was instant. Trust took months to recover.

The fix is small: do not read the pasteboard unless the user pastes. If you must (a bank app autofilling a one-time code), read once, only when the user taps the field.
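The read-once shape as a sketch (the function name is mine). hasStrings checks availability without triggering the paste toast; the actual read does trigger it, which is fine because the user just asked for it:

```swift
import UIKit

// Called only from the OTP field's tap handler — never on app foreground.
func fillOneTimeCodeFromPasteboard(into field: UITextField) {
    let pasteboard = UIPasteboard.general
    guard pasteboard.hasStrings,                // cheap check, no toast
          let text = pasteboard.string,         // this read shows the toast
          text.count == 6, text.allSatisfy(\.isNumber)
    else { return }
    field.text = text
}
```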

"Every silent read is now a public read. Design as if the OS will tell on you, because it will."


7. Anti-Patterns and Common Traps

"We'll harden it later." Security debt compounds. Later usually means after the breach. Fix: run the threat model now, spend an hour, make the small choice that costs nothing today and a lot tomorrow.

Storing secrets in UserDefaults. A QA build of PocketPay was almost promoted to TestFlight with a payment token in UserDefaults. Fix: Keychain, every time. Secrets do not go anywhere else.

Hardcoding API keys in the app bundle. Anything in the IPA is public. Fix: a backend proxy or per-user token exchange. Never ship a long-lived secret you cannot rotate.

Jailbreak detection as the security strategy. Detection is bypassable in 20 minutes by anyone who cares. Fix: assume the device may be jailbroken and design accordingly. Use detection as signal, not as a wall.

Custom crypto. "We rolled our own AES wrapper" introduces a bug nobody can find. Fix: CryptoKit, every time. If you are inventing crypto, stop.

Disabling ATS to make a feature ship. "The analytics endpoint doesn't support TLS yet" is the wrong vendor, not a security choice. Fix: pick a vendor with TLS, or wait.

Using WKWebView as a trusted UI host. A web origin is not a safe boundary for a sensitive flow. JS injection, redirect tricks, and broken cookie isolation all bite. Fix: native UI for anything money- or auth-related.

Logging tokens, PII, or full request bodies. Crash reporters upload these. PII leaks faster through logs than through any deliberate bug. Fix: redact at the source — OSLog privacy specifiers, Sentry beforeSend, no full bodies.

Trusting client-side validation for authorization. The client is the attacker's territory. Disabling a button does nothing. Fix: re-check on the server, every time. The client decides what to show; the server decides what is allowed.

Failing open instead of closed. When a check errors (network down, cert weird, signature missing), the wrong default is "let it through" — that is fail open. The right default is fail closed: deny the action when the check cannot complete. Fix: every error path on a sensitive action ends in deny.
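Fail closed is small enough to show in full — a pure-logic sketch where `verifySignature` stands in for any throwing check:

```swift
import Foundation

enum TransferDecision { case allow, deny }

// Every path that is not a verified success returns .deny —
// including the catch block, which is where "network down,
// cert weird, signature missing" all land.
func decide(payload: Data,
            verifySignature: (Data) throws -> Bool) -> TransferDecision {
    do {
        return try verifySignature(payload) ? .allow : .deny
    } catch {
        // The fail-open bug is returning .allow here "so users
        // aren't blocked by flaky networks". Don't.
        return .deny
    }
}
```

The shape matters more than the names: one success path returns allow, and every other path — false result or thrown error — converges on deny.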

Designing for the security team you wish you had. If you do not have one, your code is the security team. Bring help in for the hard problems (custom crypto, jailbreak-aware builds, E2EE design); do the boring work yourself.

"Theatre is expensive and useless. Real defence is cheap and quiet. Pick the second."


8. When To Tackle Security — Triggers

Security work is not a quarterly sprint. It is a set of triggers in your normal flow. When any of these fire, do the threat-model pass before you ship.

  • Adding a new third-party SDK. Supply-chain risk. Read the privacy manifest. Pin the version. Re-check the egress on the backend. PocketPay example: a new analytics SDK could phone home with the user's session id.
  • Adding a new entry point. URL scheme, Universal Link, push notification action, share extension, widget. Each is fresh attack surface. PocketPay example: a new deep link to "approve transfer" — needs fresh confirmation, no auto-execute.
  • Touching auth, session, or token code. Any change here multiplies risk. Slow down.
  • Storing a new piece of data. Ask "what protection class does this need?" before writing the file.
  • Sending data to a new endpoint. Ask "is this in scope of the threat model? does it need pinning?"
  • Onboarding a new market. Different threat models. SIM swap (an attacker convinces the carrier to move a phone number to their SIM) is a leading attack in some regions; SMS one-time-passwords are fine in others; jailbroken-device baselines vary.
  • A bug bounty or pen-test report lands. A bug bounty (a paid programme that rewards external researchers for reporting vulnerabilities) report or a responsible disclosure (reporting a vulnerability privately before public release) is a free piece of threat-model data. Read every one. If it lands as a CVE (Common Vulnerabilities and Exposures — a public catalogue of known flaws, each with a unique id), it is no longer just yours.
  • Before App Store submission. Privacy manifest, ATT prompts, purpose strings, privacy labels — all reviewed.
  • Anything touching the screen with sensitive data. Screen recording / screenshot prevention (system or app-level controls over what the visible screen content reveals) is worth one thought before you ship a balance or PIN screen.
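The "what protection class does this need?" trigger, answered in code — a sketch writing PocketPay's cached transaction history with complete protection, so the file is unreadable while the device is locked (path and filename are illustrative; the option is iOS-only):

```swift
import Foundation

// Cached transaction history: sensitive enough that it should not be
// readable from a locked device, not sensitive enough for the Keychain.
func cacheTransactions(_ json: Data) throws {
    let url = FileManager.default
        .urls(for: .cachesDirectory, in: .userDomainMask)[0]
        .appendingPathComponent("transactions.json")
    // .completeFileProtection maps to NSFileProtectionComplete:
    // the file key is discarded when the device locks.
    try json.write(to: url, options: [.completeFileProtection, .atomic])
}
```

Asking this one question before `write(to:)` is the whole habit; the answer is usually one option flag.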

"If the change touches an asset from your threat model, you tackle security now."


9. What to Read Next

A short list. No link spam. One sentence per item on what you will learn. Read with your own PocketPay-shaped app in mind.

  1. Apple Platform Security Guide (latest PDF on apple.com) — the canonical reference for Secure Enclave, Keychain, Data Protection, and how the OS isolates apps.
  2. OWASP Mobile Application Security Verification Standard (MASVS) — a checklist with levels (L1, L2, L1+R) so you can pick how much security your app actually needs.
  3. OWASP Mobile Application Security Testing Guide (MASTG) — the practical companion to MASVS, with iOS-specific test recipes.
  4. The Tangled Web by Michał Zalewski — old, still the clearest book on web threat models, half of which apply to mobile too.
  5. Threat Modeling: Designing for Security by Adam Shostack — the book on the discipline; teaches you to think attacker-first.
  6. Apple's CryptoKit documentation — small, accurate, modern; this is the API you will use for any new cryptography.
  7. Frida documentation plus a free objection tutorial — to feel what an attacker on a jailbroken device sees in your app.
  8. Apple Security Bounty researcher write-ups — to see real iOS-class bugs from people who hunt them for a living.

10. Glossary

  • App Sandbox — the OS-enforced fence that keeps each app's files and data away from other apps.
  • App Store Connect privacy nutrition labels — the public summary on the App Store of what an app collects.
  • App Tracking Transparency (ATT) — a system prompt that asks the user before the app tracks them across other apps and websites.
  • App Transport Security (ATS) — an iOS rule that blocks plaintext HTTP unless you opt out per domain.
  • AppAuth-iOS — OAuth 2.0 / OIDC client library for iOS, well-maintained.
  • Associated Domain — entitlement that pairs an app to a domain so Apple can verify Universal Links.
  • ASWebAuthenticationSession — system in-app browser for OAuth flows.
  • Asset — anything of value the attacker is after.
  • Attack surface — every place where untrusted input meets the app.
  • Attacker — any person or system trying to take an asset.
  • Authentication — the question "who are you?".
  • Authorisation — the question "what may you do?".
  • AWS Secrets Manager — backend secrets storage on AWS.
  • Biometric (Face ID / Touch ID) — system-managed face or fingerprint unlock.
  • Biometry-changed flag — a flag the OS sets when face/fingerprint data changes, invalidating biometric-bound keys.
  • Bug bounty — a paid programme that rewards external researchers for reporting vulnerabilities.
  • Burp Suite — intercepting HTTP proxy used in pen-tests.
  • Certificate pinning — refusing any TLS certificate the app was not built to trust.
  • Charles Proxy — macOS HTTP debugging proxy.
  • CommonCrypto — older C cryptography API on Apple platforms.
  • Credential — something proving identity, like a username/password pair.
  • Crashlytics — Google's crash reporter; careful with PII.
  • CryptoKit — Apple's modern Swift cryptography framework.
  • CVE (Common Vulnerabilities and Exposures) — a public catalogue of known security flaws, each with a unique id.
  • Data Protection (file protection class) — file-level encryption tied to the device passcode.
  • Deep link hijacking — another app or attacker intercepts a deep link meant for your app.
  • Defence in depth — layering multiple controls so any one failure does not breach the asset.
  • End-to-end encryption (E2EE) — only the sender and recipient devices can read content; the server sees only ciphertext.
  • Fail closed — when a check errors, deny the action by default.
  • Fail open — when a check errors, allow the action by default. (The wrong default for sensitive actions.)
  • Frida — runtime instrumentation toolkit used by attackers and pen-testers.
  • GCP Secret Manager — backend secrets storage on Google Cloud.
  • HackerOne — major bug bounty platform.
  • HashiCorp Vault — open-source backend secrets store.
  • IDFA (Identifier for Advertisers) — an OS-issued ID used for ad attribution.
  • Jailbreak — modifications that remove iOS sandbox restrictions on a device.
  • Keychain — the system store for small secrets, hardware-backed where the Secure Enclave is available.
  • Keychain access group — a shared key that lets two apps from the same team read the same Keychain items.
  • KeychainAccess — open-source Swift wrapper around Keychain Services.
  • libsodium — third-party, well-audited cryptography library.
  • LocalAuthentication — system framework for Face ID and Touch ID.
  • Man-in-the-middle (MITM) — an attacker between client and server who can read and modify traffic.
  • mitmproxy — open-source intercepting HTTP proxy.
  • MobSF — open-source mobile static and dynamic analysis tool.
  • Network.framework / NWConnection — modern lower-level networking API on Apple platforms.
  • NSFileProtectionComplete — file protection class that locks the file when the device is locked.
  • OAuth 2.0 — an authorisation protocol that issues short-lived tokens after a login.
  • objection — Frida-based dynamic analysis toolkit.
  • Obfuscation — making code or data hard to read but still recoverable.
  • OpenID Connect (OIDC) — a thin identity layer on top of OAuth 2.0.
  • OSLog / unified logging — Apple's modern logging framework, which redacts dynamic strings by default.
  • Pasteboard — the system clipboard.
  • PII (personally identifiable information) — names, emails, IDs that can be tied to a real person.
  • PIN brute force — automatically guessing every short numeric code until one works.
  • Posture — an overall stance about how much you trust the device, the network, and the user.
  • Principle of least privilege — give every component only the permissions it needs.
  • Privacy manifest — a property list file (PrivacyInfo.xcprivacy) that declares what data the app or SDK collects and why.
  • Purpose string — the short sentence in Info.plist shown when the app asks for camera, contacts, photos, etc.
  • Residual risk — what remains after the chosen defence has been applied.
  • Responsible disclosure — reporting a vulnerability privately before public release.
  • Runtime tampering — changing app behaviour at run time, e.g., via Frida hooks.
  • SAMKeychain — older Objective-C wrapper around Keychain.
  • Screen recording / screenshot prevention — system or app-level controls over what the visible screen content reveals.
  • Secret — a small piece of data that must not leak — password, token, key.
  • Secure Enclave — a dedicated secure subsystem inside Apple's chips, isolated from the main processor, that holds keys the OS itself cannot read.
  • Semgrep — rule-based static analysis tool.
  • Sentry — third-party crash reporter; careful with PII.
  • Session — server-side state that lets the client skip re-authentication for a while.
  • Sign in with Apple — Apple's native identity provider.
  • SIM swap — an attacker convinces the carrier to move a phone number to their SIM.
  • Step-up authentication — asking the user to authenticate again right before a sensitive action.
  • SwiftLint — Swift linter that can host security rules.
  • Threat model — a short written list of what you protect, who attacks, and how.
  • TLS (Transport Layer Security) — the standard protocol that encrypts and authenticates network traffic.
  • Token (access / refresh / ID) — a string that proves the bearer has permission.
  • TrustKit — third-party library for certificate pinning, with reporting.
  • Universal Link — a deep link bound to a domain you own and verified by Apple via the Associated Domain entitlement.
  • URL scheme — a deep link prefix any app can register.
  • URLSession — Apple's system networking API; ATS-aware by default.
  • WKWebView — Apple's web-content view; not a trust boundary for sensitive flows.