Lessons I keep learning.

Principles I earned, one incident at a time.

The Zoom Scheduler that taught me to plan twice, ship once.

I had just moved back to the Bay Area to join DoorDash after 13 years in LA, where I graduated from CSUN. One of my first projects was standing up a fleet of Zoom Scheduler rooms across offices using Apple Configurator and Jamf. I was heads down, confident in the playbook, and eager to ship. Midway through the rollout, a step I hadn't fully thought through unenrolled a batch of devices all at once. The next morning I was re-enrolling devices by hand and coordinating with folks across sites to get everyone back online.

What I took away from that week is a lesson I've carried into every IT program since. Careful planning and change management aren't bureaucracy; they're how you protect the people who depend on you. The unglamorous steps are the ones that turn a scary change into a boring one, and boring is what good IT should feel like.

A few habits I picked up from that rollout and have reached for ever since:

  • Start with a pilot cohort of 5 to 10 devices. If something is going to go sideways, let it go sideways on machines I can recover, physically or remotely, without touching anyone else's day.
  • Know exactly what the rollback looks like before I ship the change. If I can't draw the undo in one diagram, the change isn't ready to leave my laptop.
  • Name the blast radius out loud. Which sites, which devices, which humans, and what happens to them if I'm wrong. If the honest answer scares me, I slow down.
  • Write the comms for the worst day, not the happy path. Tell the affected teams what's changing, when, what they'll feel, and how to reach me when they do. A heads-up before an outage is a very different experience than a surprise one.
  • Put a real human on the other side of automation. For anything that touches an MDM, an identity provider, or an endpoint at fleet scale, I want a second pair of eyes on the plan before I hit run.
  • Schedule the window with recovery time built in. I do the change when I still have the daylight to fix it, not right before I sign off.

Balance Simplicity and Complexity.

Over the last five years I've built workflows across IT, Corporate Security, Legal, Workplace, and Procurement. Some of that work genuinely needs complexity. A lot of it doesn't. The gap between those two is where practical ITIL lives, and it's the middle ground I spend most of my time helping business partners find.

It's tempting to solve every request with a new form, a new queue, and another stack of automation rules. The better move is disciplined restraint: start with what exists, extend it thoughtfully, and introduce new orchestration only when the complexity is justified. Here is a recent course correction from our IT Help program at OpenAI:

  • Unify the intake channel in Slack. One place to ask for help, one place to search, one place for our agent to learn. The user experience gets calmer and the data gets cleaner.
  • Simplify the Jira config. We pruned request types, fields, and workflow steps that had quietly compounded over years. The agent experience felt lighter right away, and cycle time followed.
  • Let the assistant do the first pass. Our Slack agent now handles the top of the funnel, surfacing knowledge base articles, runbooks, and self-help before a human is ever pulled in.
  • Let the agent take safe actions. With our App Engineering team, we built an assistant that can actually execute common requests, like granting access to a system on behalf of the user, guarded by the same policies we already trust.

The simpler you make the path, the more leverage every person on your team has. Complexity is a tax you pay forever. Simplicity is a gift to everyone downstream of your decisions.

Take feedback with an open heart, apply it one habit at a time.

Giving and receiving feedback is one of the hardest, most human parts of the job. I try to start from the same place on both sides of it: be empathetic, and be open to the message, even when the delivery isn't perfect.

Not every piece of feedback is going to change my approach, and that's okay. What matters is not taking it personally, hearing it honestly, and giving yourself room to respond with curiosity instead of defensiveness. We're creatures of habit, and habits don't break overnight. The honest work is to sit with the feedback, show the people around you that you're taking real incremental steps, and let the iterations compound.

Some of the most constructive feedback I've ever gotten has come from my family. My four-year-old and my wife have a way of naming something I couldn't see at work, with a directness nobody at the office will ever match. I try to hold workplace feedback to the same standard: receive it with the same open heart I'd want at home, and give it with the same care I'd want in return.

Walking a mile in the CIO's shoes before you become one.

Robert D. Austin, Richard L. Nolan, and Shannon O'Donnell's The Adventures of an IT Leader follows a fictional business executive, Jim Barton, who is suddenly handed the CIO job at a company where he does not know the tech, does not speak the jargon, and can't tell the real problems from the political ones. It reads like a novel, but every chapter is really a case study on the decisions an IT leader actually faces: budgets, vendor lock-in, security posture, org design, crisis communication, and the thousand small judgment calls in between.

I recommend it to anyone stepping into IT leadership for the first time, or to any non-IT leader who suddenly finds themselves responsible for an IT function. The book won't hand you answers, but it gives you a mental rehearsal of the rooms you are about to be in.

Key takeaways I keep coming back to:

  • Your first 90 days are for listening, not re-organizing. Barton's best moves come from walking the floor and asking questions no one has asked in years.
  • IT risk is business risk. Frame every security, infra, or vendor conversation in the language the business already speaks — revenue, trust, time, regulation.
  • The CIO's real job is translation. Between engineers and executives, between vendors and users, between "what we have" and "what the business thinks we have."
  • Crises reveal the org chart you actually have. Pay attention to who steps up during an incident, not who has the title on the wiki.

Credit to Austin, Nolan, and O'Donnell. Pick it up on Amazon →

Craft is a daily practice, not a job title.

Andrew Hunt and David Thomas's The Pragmatic Programmer is the book I recommend to anyone who wants to be better at building software, whether they write code every day or just own systems that are built on top of it. It is short on dogma and long on habits — small, portable principles that compound over a career.

I first read it as an IT admin trying to understand the engineers I supported. I come back to it every couple of years, and each time a different chapter hits differently depending on what I am wrestling with that year.

Key takeaways I keep coming back to:

  • DRY — Don't Repeat Yourself. Every piece of knowledge should live in exactly one place. Duplicated config, duplicated runbooks, and duplicated truths are how outages are born.
  • Fix broken windows. Small bits of rot, left alone, signal that rot is acceptable. Fix the tiny thing today so the culture does not slide tomorrow.
  • Stone soup and boiled frogs. Start change with something small and valuable, then let momentum carry it. And stay alert to the slow drifts that nobody notices until they are painful.
  • Program deliberately. Know why you are doing what you are doing. Guessing is fine in a sandbox, not in production.
  • Invest in your knowledge portfolio. Read, experiment, and learn one new thing regularly. Depreciating skills are the most expensive thing in tech.
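
The DRY point is the one I reach for most in IT work, so here is a tiny sketch of what it looks like in practice: one source of truth that both the runbook and the automation read, so they can never disagree. The names (FLEET_CONFIG, render_runbook) are illustrative, not from the book:

```python
# One place where the knowledge lives. Change it here and both the docs
# and the automation pick it up; there is no second copy to drift.
FLEET_CONFIG = {
    "mdm_url": "https://mdm.example.com",
    "pilot_cohort_size": 6,
}

def render_runbook() -> str:
    """The human-facing runbook pulls from the same dict the automation uses."""
    return (
        f"Enroll devices against {FLEET_CONFIG['mdm_url']} "
        f"in batches of {FLEET_CONFIG['pilot_cohort_size']}."
    )

def enrollment_batches(devices: list[str]) -> list[list[str]]:
    """The automation reads the same value, so docs and behavior always match."""
    n = FLEET_CONFIG["pilot_cohort_size"]
    return [devices[i:i + n] for i in range(0, len(devices), n)]
```

The duplicated-runbook outage the bullet warns about is exactly what this prevents: when the batch size changes, there is no stale copy in a wiki quietly telling someone the old number.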

Credit to Andrew Hunt and David Thomas. Pick it up on Amazon →

The 75-cent error that started modern incident response.

Clifford Stoll's The Cuckoo's Egg is an astronomer's first-person account of noticing a tiny billing discrepancy at Lawrence Berkeley National Laboratory in 1986, pulling on the thread, and realizing he is watching a foreign intelligence operation in real time. Decades later, it still reads like a thriller and still teaches the fundamentals of detection, patience, and cross-team coordination better than most modern playbooks.

I re-read it every few years because the core lesson never ages: attackers leave small, boring, accounting-style signals long before they leave dramatic ones, and the people who catch them are the ones willing to chase a detail everyone else dismissed as noise.

Key takeaways I keep coming back to:

  • Boring anomalies are the best anomalies. A 75-cent mismatch in an accounting report was the thread. If something does not add up, it does not add up.
  • Detection is a writing exercise. Stoll's logbook is the reason anyone believed him. Write down what you see, when you saw it, and what you did about it.
  • Incident response is a people problem. He had to convince the FBI, the CIA, the NSA, and his own leadership that this was real. Most of the book is the politics, not the packets.
  • Patience beats cleverness. He left the attacker in place for months, watching, rather than kicking them out on day one. The whole story depends on that restraint.

Credit to Clifford Stoll. Pick it up on Amazon →

The human job is to build the scaffolding the models climb.

Daniel Miessler's AI Unmasked: Our Work as Scaffolding reframes what knowledge workers actually do in a world where models are getting good at the "answer" part of every job. His argument is that the durable human value is not the final output — it is the context, the constraints, the judgment, and the taste that let a model produce something useful in the first place. We are the scaffolding; the models are the workers climbing it.

The piece landed for me because it matches what I see in IT every day. The teams getting real leverage out of AI are not the ones with the cleverest prompts. They are the ones who have done the unglamorous work of writing down how things actually run, where the edges are, and what "good" looks like.

Key takeaways I keep coming back to:

  • Write the context down. The policies, the constraints, the known edge cases. If it only lives in someone's head, a model cannot use it and neither can a new teammate.
  • Judgment is a deliverable. Deciding what not to do, what to escalate, and what is out of scope is work — and it is the part models lean on you for.
  • Taste compounds. The teams that consistently ship good outputs are the ones with a strong internal sense of "we would never ship that."
  • Scaffolding is leverage. Every hour spent documenting reality is an hour you get back multiplied the next time a model or a new hire needs to act on it.

Credit to Daniel Miessler. Read the article →

Turn the auditing features you already own into a tripwire.

Dane Stuckey's piece on detecting Windows endpoint compromise with System Access Control Lists is one I wish more IT and security folks would read side by side. It shows how to take SACL auditing, which already ships with Windows, and point it at the files, registry keys, and objects attackers actually touch. Instead of firehosing every log you can find, you end up with a small set of alerts that usually mean something.

The thing I love about the piece is that it is deeply practical. It is not "buy this tool." It is "here are the exact objects to watch, here is what a real alert looks like, and here is why most people skip this." It is a reminder that the best detections are usually built on primitives you already own.

Key takeaways I keep coming back to:

  • High-signal beats high-volume. A small number of well-placed SACLs on sensitive objects will out-perform a firehose of generic endpoint logs every time.
  • Detections are a joint IT + Security project. IT owns the fleet, Security owns the hypothesis. Neither one can ship this alone.
  • Audit what attackers want, not what is easy to log. Credentials, persistence locations, scheduled tasks, LSASS — start from the adversary's goals and work backwards.
  • Document the "why" next to the rule. A year from now, the on-call engineer needs to know whether a SACL alert is a real tripwire or a leftover experiment.
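
To make the "high-signal beats high-volume" point concrete, here is a minimal sketch of the filtering idea: instead of forwarding every security event, keep only object-access events (Windows event ID 4663, which SACL auditing generates) whose target is on a short watchlist. The event shape and the watchlist entries are illustrative examples of the categories the takeaways name, not paths from the article:

```python
# Objects worth a SACL, chosen from the adversary's goals: credential
# material, persistence locations, scheduled tasks. Entries are examples.
WATCHLIST = (
    r"C:\Windows\System32\config\SAM",  # local credential store
    r"\REGISTRY\MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run",  # persistence
    r"C:\Windows\System32\Tasks",  # scheduled tasks
)

def is_tripwire(event: dict) -> bool:
    """True only for SACL object-access events (4663) touching a watched object.

    Everything else is dropped, which is the point: a handful of alerts
    that usually mean something, not a firehose.
    """
    if event.get("EventID") != 4663:
        return False
    obj = event.get("ObjectName", "")
    return any(obj.startswith(watched) for watched in WATCHLIST)
```

Per the last takeaway, the comment explaining why each watchlist entry exists belongs next to the entry itself, so the on-call engineer a year from now knows whether a hit is a real tripwire or a leftover experiment.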

Credit to Dane Stuckey. Read the article →