What Cybersecurity Is Actually Like
We talked to three cybersecurity professionals. One triages 60 alerts per shift at a health system in Ohio and can tell you within four seconds whether a login anomaly is a radiologist using a personal iPad or something actually wrong. One breaks into companies for a living and spends more time writing reports about it than doing it. One presents risk dashboards to a board of directors who still think "the cloud" is a metaphor. Same industry. Very different Tuesdays.
These characters are composites, built from dozens of real accounts, interviews, and community threads. The people aren't real. The experiences are.
What you'll learn
- What cybersecurity professionals actually do day to day across SOC operations, penetration testing, and security leadership
- How much of cybersecurity is detective work and documentation versus the "hacking" most people imagine
- The real differences between defensive security, offensive security, and the management layer above both
- Whether the certifications, the on-call rotations, and the constant learning curve are worth it, from people who made different choices
What It's Like Being a SOC Analyst at a Health System
Nate
When you tell people you work in cybersecurity, what do they picture?
They picture me in a hoodie in a dark room with green text scrolling on a screen. Every time. My uncle asked me at Thanksgiving if I "hack into things." My neighbor asked if I could "check if his email was compromised." My girlfriend Phoebe's parents think I work for the CIA or something. I work at a hospital system. Three hospitals, about 40 clinics, 8,000 employees. I sit in an office park in Dublin, Ohio, next to a Chipotle. The office has fluorescent lights and a vending machine that sells Belvita crackers for $1.75. It is the least cinematic work environment you can imagine.
What I actually do is watch alerts. Our SIEM, which is Splunk, ingests logs from basically everything: firewalls, endpoint protection, email gateway, Active Directory, the VPN, the badge access system, the medical device network. All of those systems generate events. Most of the events are normal. Someone logs in, someone sends an email, a firewall blocks a port scan from Russia. Normal. But the SIEM has correlation rules that flag combinations of events that might indicate something bad. A user logging in from two geographically distant locations within 10 minutes. An account that hasn't been used in 90 days suddenly accessing a file share at 2 AM. Outbound traffic to a known command-and-control IP. Those generate alerts. And I investigate them.
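If you've never seen a correlation rule, a minimal sketch helps. The one below implements Nate's dormant-account example in Python; the event fields and the off-hours window are illustrative stand-ins, since a real SIEM like Splunk expresses this as a query over indexed logs rather than application code.

```python
from datetime import datetime, timedelta

DORMANT_DAYS = 90          # "hasn't been used in 90 days," per Nate
OFF_HOURS = range(0, 6)    # midnight through 5 AM, a made-up window

def is_suspicious(event: dict, last_seen: dict) -> bool:
    """Flag a long-dormant account accessing a file share off-hours."""
    if event["type"] != "file_share_access":
        return False
    ts = event["timestamp"]
    previous = last_seen.get(event["user"])
    dormant = previous is None or (ts - previous) > timedelta(days=DORMANT_DAYS)
    return dormant and ts.hour in OFF_HOURS

# An account last seen in January touching a share at 2 AM in May: alert.
last_seen = {"jsmith": datetime(2024, 1, 2, 9, 0)}
event = {"type": "file_share_access", "user": "jsmith",
         "timestamp": datetime(2024, 5, 1, 2, 13)}
print(is_suspicious(event, last_seen))  # True
```

Every rule like this is a trade-off: widen the window and you catch more, but you also hand the analyst more false positives to triage.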
How many alerts are we talking about?
On a typical day shift, I'll see between 50 and 70 alerts in my queue. My shift is 7 AM to 7 PM, three days on, four days off, then four on, three off. Rotating 12s. Of those 50 to 70 alerts, the vast majority are false positives or known benign activity. The radiologist who logs into the PACS system from his home and then from the hospital within eight minutes? That fires an "impossible travel" alert every single time, because the system sees a login from a residential IP in Westerville and then a login from the hospital IP in Columbus and calculates that the user would have needed to travel 14 miles in eight minutes. But the radiologist isn't in two places. He was logged into the VPN from home and then walked into the hospital and logged in locally and the VPN session hadn't timed out yet. That exact scenario generates about four alerts per week from just that one radiologist. His name is Dr. Ansari. I know his login patterns better than his secretary does.
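The arithmetic behind that alert is simple enough to sketch. Assuming the SIEM has already geolocated both IPs and produced the 14-mile distance, the rule just checks the implied speed against a tuned threshold; the 60 mph cutoff below is a hypothetical ground-travel value, not any product's default.

```python
def required_speed_mph(distance_miles: float, minutes_apart: float) -> float:
    """Speed the user would have needed to cover the distance in the gap."""
    if minutes_apart <= 0:
        return float("inf")
    return distance_miles / (minutes_apart / 60)

def impossible_travel(distance_miles: float, minutes_apart: float,
                      max_plausible_mph: float = 60) -> bool:
    # Tuning this threshold is exactly the kind of work that separates
    # noisy correlation rules from useful ones.
    return required_speed_mph(distance_miles, minutes_apart) > max_plausible_mph

print(required_speed_mph(14, 8))   # 105.0 mph
print(impossible_travel(14, 8))    # True -- the alert fires, every time
```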
So the first thing I do with every alert is context. Who is this user? What's their normal behavior? Does this activity match something I've seen before? Deshawn, the other day-shift SOC analyst, he and I have this shorthand. He'll pull up an alert and say "Ansari?" and I'll glance at the source IP and say "Ansari." Close it. Move on. That exchange takes four seconds. Four seconds times four alerts per week times 52 weeks is, what, about 14 minutes of my year spent on one radiologist's VPN habits. It's fine. It's the job. But when people ask me what I do, I don't usually lead with "I verify that the same doctor is not, in fact, in two places at once, multiple times per week."
What happens when an alert is real?
So about three weeks ago, I got an alert at 9:14 AM. Outbound DNS query to a domain that our threat intelligence feed had flagged as associated with a known info-stealer malware family. The query came from a workstation in one of the outpatient clinics. The workstation was assigned to a medical receptionist, a woman named Darlene, who I've never met but whose hostname I now know by heart because I spent the next six hours on her machine.
First thing I did was check the endpoint detection agent on that workstation. CrowdStrike Falcon. The agent showed a process execution at 9:11 AM: a PowerShell script that had been launched from a macro in a Word document. The Word document was attached to an email. The email was a fake invoice from a company that looked like one of our medical supply vendors. Darlene had opened it because she processes invoices. She does this every day. She opens Word documents from vendors. That's her job. And the phishing email was good enough that the email gateway didn't catch it. It wasn't obviously misspelled. The sender domain was one letter off from the real vendor. The attachment was a .docx, not an .exe, so the gateway let it through.
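One concrete check behind "one letter off": compare the sender's domain against the domains you actually do business with by edit distance. A hedged sketch with a hypothetical vendor list; real mail gateways use broader typosquatting heuristics than this.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

KNOWN_VENDORS = {"medsupplyco.com"}  # hypothetical vendor domain

def lookalike(sender_domain: str) -> bool:
    """True if the domain nearly matches, but isn't, a known vendor."""
    return any(0 < edit_distance(sender_domain, v) <= 1 for v in KNOWN_VENDORS)

print(lookalike("medsuppIyco.com"))  # True: capital I swapped in for l
print(lookalike("medsupplyco.com"))  # False: the real vendor
```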
I escalated to our incident response lead, Monica, within about 12 minutes of the initial alert. She pulled the workstation off the network remotely using CrowdStrike's containment feature. That kills the machine's network connectivity but keeps it running so we can still pull forensic data. Then I started the timeline. When did the email arrive? 8:47 AM. When was the attachment opened? 9:08 AM. When did the PowerShell execute? 9:11 AM. When did the DNS query fire? 9:14 AM. Three minutes between execution and C2 callback attempt. In those three minutes, the malware had already enumerated the local filesystem and attempted to access stored browser credentials. It hadn't exfiltrated anything yet because we contained the machine at 9:26 AM, twelve minutes after the alert. Twelve minutes. That was the difference between "we caught it" and "patient data potentially exposed."
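That timeline is the backbone of the eventual report, and it takes very little machinery to compute the deltas that matter. The date below is invented; the clock times are the ones from Nate's account.

```python
from datetime import datetime

timeline = {
    "email_delivered":   datetime(2024, 4, 9, 8, 47),
    "attachment_opened": datetime(2024, 4, 9, 9, 8),
    "powershell_exec":   datetime(2024, 4, 9, 9, 11),
    "c2_dns_query":      datetime(2024, 4, 9, 9, 14),
    "host_contained":    datetime(2024, 4, 9, 9, 26),
}

events = sorted(timeline.items(), key=lambda kv: kv[1])
for (name, t), (next_name, next_t) in zip(events, events[1:]):
    gap = int((next_t - t).total_seconds() // 60)
    print(f"{name} -> {next_name}: {gap} min")
# powershell_exec -> c2_dns_query: 3 min. c2_dns_query -> host_contained:
# 12 min -- the twelve minutes Nate is talking about.
```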
The rest of that day was documentation. I wrote the incident report: timeline, indicators of compromise, affected systems, containment actions, remediation steps. Monica reviewed it. Our compliance officer, a guy named Harris, needed a copy because any potential exposure of patient data triggers a HIPAA assessment. Darlene's workstation was reimaged. The phishing email was retroactively purged from all mailboxes in the organization. We submitted the malicious domain and file hash to our threat intel sharing group. Total time from alert to case closed: about three weeks, because the HIPAA assessment has its own timeline. Total time I spent actively working the incident: about 14 hours across multiple days. That one real alert consumed more time than the previous 200 false positives combined.
That sounds like it could be any Tuesday for you. Is it always that specific?
That's the thing about this job. It's either incredibly boring or incredibly focused, and the switch happens with no warning. I'll go three weeks where every shift is false positives and tuning correlation rules and updating documentation. Then one alert is real and suddenly I'm in the middle of something that matters, like actually matters, and I'm racing a clock that I can't see because I don't know how far the adversary has already gotten. The adrenaline is real. The boredom between the adrenaline is also real. My friend Terrell works in an ER as a paramedic and I described this pattern to him and he said "yeah, that's basically my job too." Which is funny because our work looks nothing alike and feels exactly alike in that one specific way.
How did you get into this?
I was on the help desk at this same health system for three years. Password resets, printer jams, setting up new laptops for onboarding. I did the Security+ certification on my own because I thought it might get me a raise. It didn't get me a raise but it got me noticed by Monica, who was building out the SOC and needed a second analyst. She told me later that most of the SOC analyst applicants had degrees in cybersecurity but had never touched a real network. I didn't have the degree but I'd been inside our Active Directory for three years. I knew the infrastructure. She hired me because I could look at a username and tell you whether that person was a nurse, a billing specialist, or a physician based on the naming convention and the OU they were in. You can't teach that. You get that by resetting the same people's passwords for three years.
Every job has a part nobody warns you about. What's yours?
The loneliness of doing your job well is that nothing happens. When I do my job perfectly, nobody knows. No breach. No headlines. No incident report. Just another quiet week where the alerts were false positives and the real threats got caught before they became real problems. My manager, Vernon, once said in a team meeting that "the best SOC shift is the one nobody remembers." And he's right. But it's a weird thing to build your career around. Phoebe asks me how my day was and I say "fine, nothing happened," and she says "that's good, right?" And it is good. It's the whole point. But try telling someone you're great at your job and your evidence is that nothing happened. It's like a goalie who never faces a shot and has to convince people the defense was worth paying for.
The other thing is, the adversary only has to be right once. I have to be right every time. Every alert, every shift, every day. That's the asymmetry of this job and nobody in any cybersecurity marketing brochure will tell you what it feels like to live inside that equation for five years. You develop this, I don't know, a hypervigilance that doesn't fully turn off. Phoebe says I check my phone too much on my days off. She's right. I do. Because the attackers don't take my days off.
What It's Like Being a Penetration Tester
Abena
People think your job is the cool one. Is it?
Parts of it are genuinely cool. I won't pretend otherwise. Two Tuesdays ago, I gained domain admin on a client's Active Directory environment through a chain of three vulnerabilities that, individually, would each be rated medium severity. A misconfigured service account, an NTLM relay, and a Group Policy preference that still had a cached administrator password from 2019. None of those alone would have gotten me anything useful. But chained together, they gave me full control of every workstation and server in a 600-person manufacturing company in about four hours. That feeling, the moment the domain admin hash cracks or the ticket gets forged and you see the C$ share on the domain controller, that's a real rush. I understand why people who aren't in this field romanticize it.
What they don't romanticize is what happens after. The engagement was five days. The hacking, the actual breaking-in part, took about a day and a half. The other three and a half days were reporting. I write the report in a Word template that my firm, Bastion Security, has used for four years. The template is 38 pages before I add anything. It has sections for executive summary, scope and methodology, findings by severity, evidence screenshots, remediation recommendations, appendices. For each finding, I have to describe the vulnerability, show proof of exploitation with screenshots, rate the business impact, and provide a specific remediation recommendation. The domain admin chain alone was about 12 pages of documentation. Each screenshot needs to be annotated. Each command I ran needs to be listed. The remediation section needs to be specific enough that their IT team can actually act on it, not just "patch your systems." More like "disable NTLM authentication on the segment between VLAN 40 and VLAN 60, rotate the service account credential for svc_backupexec, and remove the cached Group Policy preference password by running these specific PowerShell commands."
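For the curious, the Group Policy preference finding is a well-documented class of issue: GPP stored credentials as a "cpassword" attribute in XML files on the domain's SYSVOL share, encrypted with an AES key that Microsoft itself documented publicly, which is why a 2019 leftover is effectively plaintext to an attacker. Here's a hedged sketch of the audit side, assuming a mounted SYSVOL path; the domain name is hypothetical.

```python
import re
from pathlib import Path

SYSVOL = Path(r"\\corp.example.com\SYSVOL")  # hypothetical domain
CPASSWORD = re.compile(r'cpassword="([^"]+)"')

def find_cached_gpp_passwords(root: Path):
    """Yield (file, encrypted blob) for files with a cpassword entry."""
    for xml in root.rglob("*.xml"):
        match = CPASSWORD.search(xml.read_text(errors="ignore"))
        if match:
            yield xml, match.group(1)

for path, blob in find_cached_gpp_passwords(SYSVOL):
    print(f"Cached GPP credential found in {path}")
```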
That's a lot of writing for someone whose title is basically "professional hacker."
Yeah. My colleague Jasper, he's been doing this for eight years. He told me when I started that the job is 30 percent hacking and 70 percent writing about hacking. I thought he was exaggerating. He was understating it. Some engagements, especially web application tests, are closer to 20/80. You spend two days poking at an app, find a handful of cross-site scripting vulnerabilities and maybe an insecure direct object reference, and then spend three days documenting each one with request/response pairs, remediation guidance, OWASP references, and risk ratings. The client is paying $25,000 for this engagement. They want a deliverable they can hand to their developers and their auditors. Nobody is paying $25,000 for me to say "I found some XSS, here's a screenshot, good luck."
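What "documenting each one" means in practice is easier to see as a data structure. The fields below mirror the report sections Abena lists; the names are illustrative, not Bastion Security's actual template.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    title: str
    severity: str                 # critical / high / medium / low
    description: str              # what the vulnerability is
    evidence: list[str] = field(default_factory=list)   # annotated screenshots
    requests_run: list[str] = field(default_factory=list)
    business_impact: str = ""
    remediation: str = ""         # specific enough for IT to act on

xss = Finding(
    title="Reflected XSS in search parameter",
    severity="medium",
    description="User input in ?q= is echoed into the page unencoded.",
    evidence=["screenshot_req_resp_01.png"],
    requests_run=["GET /search?q=<script>alert(1)</script>"],
    business_impact="Session hijacking of any user who clicks a crafted link.",
    remediation="Contextually encode output; add a CSP as defense in depth.",
)
```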
The thing is, the report is actually the most important part. I know that intellectually. The client doesn't care about my dopamine hit when I pop a shell. They care about whether their application is secure and what they need to fix. The report is how I translate what I found into something their engineering team can prioritize and act on. My manager, Linh, reviews every report before it goes to the client. She catches things I miss. Not technical things. Communication things. She'll say "this finding makes it sound like the client is incompetent" or "you're burying the business impact at the bottom of the paragraph, move it up." She's right every time. The best pentesters I know are the ones who write the best reports, not the ones who find the most vulnerabilities.
How do you stay sharp technically when most of your time is writing?
This is the constant tension. The threat landscape changes fast. New attack techniques get published, new vulnerabilities drop, new tools come out. If I spend all my time on client work and report writing, my skills get stale. So I spend, I'd say, 5 to 8 hours a week outside of work practicing. I do Hack The Box challenges. I read vulnerability disclosures. I still compete in CTFs with my college team, although "college team" is generous because half of us have graduated and we just kept the name. My brother Kwesi thinks this is insane. He works in supply chain management. When he leaves work, he's done. He doesn't go home and practice managing supply chains. But in my field, the people I'm simulating, the real attackers, they don't stop learning. And if my skills are two years behind theirs, I'm just a report writer who uses Nmap.
The OSCP certification, which is the gold standard for pentesters, I did that last year. It's a 24-hour practical exam where you have to compromise multiple machines in a simulated network. No multiple choice. You either break in or you don't. I took the exam on a Saturday morning and finished at 4 AM Sunday. My roommate thought someone was robbing us because I was pacing the apartment at 2 AM muttering about a machine that was resisting my privilege escalation. I passed. It cost $1,599. The firm reimbursed $1,200 of it. The remaining $399 and the 300 hours of study time were on me.
What's a client interaction like?
The kickoff call is always interesting because you can hear the anxiety. We're about to simulate an attack on their network. They know we're going to find things. The IT director is thinking "please don't make me look incompetent in front of my VP." The compliance person is thinking "I need this report for our SOC 2 audit." And the CISO, if they have one, is thinking "I already know we have problems, I just need you to document them so I can get budget to fix them." That last one is the dynamic people don't realize. A lot of pentests exist not because the company doesn't know they have issues, but because the security team needs external validation to unlock funding. Jasper calls it "paying someone to tell your boss what you've been telling your boss."
The report delivery meeting is the other key moment. Linh and I present findings to the client's leadership team. I've watched a VP's face go from "this is a formality" to "wait, you accessed our customer database?" in about three seconds. That moment, when the technical risk becomes a business reality in someone's mind, that's when the report matters. One client, a regional bank, had a finding where I demonstrated that an external attacker could access their wire transfer system through a chain that started with a phishing email. The CTO was quiet for about 30 seconds after I showed the screenshot. Then he said "how fast can you come back and retest after we fix this?" They signed a retest engagement that afternoon. That doesn't happen because of my hacking skills. It happens because the report made the risk real enough that someone felt it in their stomach.
Every job has a part nobody warns you about. What's yours?
How often you feel like a fraud. Not impostor syndrome in the generic sense. Something more specific. I break into networks for a living, and about once a month I hit a target that I can't crack. Everything I try bounces off. The perimeter is locked down, the internal network is properly segmented, the service accounts are configured correctly. And after three days of running into walls, I have to write a report that basically says "we were unable to gain significant access during this engagement." Which is a great outcome for the client. It means their security posture is strong. But it feels terrible. It feels like I failed, even though intellectually I know that my inability to break in IS the finding.
Jasper says the day you stop feeling like you should have found more is the day you stop being good at this. I think he's right but it's a hard way to live. You're only as good as your last engagement, and on your last engagement, maybe you missed something. You don't know. That's the part that follows you home. Not "did I find the vulnerability?" but "did I miss one that a real attacker wouldn't?"
What It's Like Being a CISO at a Fintech Startup
Travis
You started as a sysadmin. How'd you end up as a CISO?
Gradually, and then suddenly. I was a Linux sysadmin at a hosting company in Boulder for about four years. This was 2011 to 2015. I got interested in security because we kept getting hit. DDoS attacks, brute force attempts, one actual compromise where an attacker got into a customer's server through an unpatched WordPress installation and used it to send phishing emails to 40,000 people. My manager at the time, a woman named Denise, asked me to "figure out the security thing" because we didn't have a security person. So I became the security person. Not because I was qualified, but because I was interested and nobody else wanted to do it.
From there it was a series of moves. Security engineer at a mid-size SaaS company. Senior security engineer at a larger one. Security architect. Manager of a small security team. And then three years ago, the CEO of this fintech, a guy named Raj Chakravarti, recruited me to be their first CISO. They'd just closed their Series C, $85 million. They had 200 employees and zero dedicated security staff. They had a firewall. They had antivirus. They had a prayer. That was their security program. Raj hired me because a potential enterprise client asked about their security posture during a sales call and nobody could answer the question.
What does a CISO actually do? Because it doesn't sound like you're doing the technical work anymore.
I'm not. That's the hardest part of this transition and nobody prepares you for it. I spent 12 years building technical skills. I can read packet captures. I can write Splunk queries. I can configure a WAF. And now I spend maybe 5 percent of my week doing anything technical. The rest is meetings, strategy, budgets, hiring, vendor evaluations, compliance audits, board presentations, and arguing with people about whether security is worth the money.
A typical week for me: Monday morning I have a one-on-one with Arjun, my security engineering lead. He manages the three-person security engineering team. We review the vulnerability scan results from the weekend, which typically show 200 to 400 findings of varying severity, and we decide what the engineers will prioritize that week. Then I have a meeting with the VP of Engineering, a woman named Suki, to discuss security requirements for two new product features that are in development. Suki and I have a good relationship but it's fundamentally adversarial. She wants to ship features fast. I want to ship features securely. Those two things are not always the same thing, and the negotiation between "this needs a security review before it goes live" and "we promised the client it would ship by March 15th" is basically my entire professional existence.
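That Monday triage is essentially a ranking problem: raw scanner severity weighted by how much the affected asset matters. A toy version follows, with an invented scoring scheme; real teams tune these weights constantly.

```python
SEVERITY_WEIGHT = {"critical": 10, "high": 7, "medium": 4, "low": 1}
ASSET_CRITICALITY = {"payments-api": 3.0, "internal-wiki": 0.5}  # hypothetical

findings = [
    {"id": "CVE-2024-0001", "severity": "high", "asset": "payments-api"},
    {"id": "CVE-2024-0002", "severity": "critical", "asset": "internal-wiki"},
]

def priority(f: dict) -> float:
    """Severity weighted by asset criticality; unknown assets default to 1.0."""
    return SEVERITY_WEIGHT[f["severity"]] * ASSET_CRITICALITY.get(f["asset"], 1.0)

for f in sorted(findings, key=priority, reverse=True):
    print(f["id"], f["asset"], priority(f))
# A "high" on the payments API (21.0) outranks a "critical" on the wiki
# (5.0) -- the judgment call a scanner alone can't make.
```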
Tuesday is board prep. Every quarter I present to the board of directors. I have 15 minutes to explain our security posture, our risk, our spending, and our progress on our security roadmap. Fifteen minutes. For a topic that could fill a three-hour seminar. I've gotten good at distillation. The board doesn't want to know about NTLM relay attacks. They want to know: are we going to get breached, how much would it cost, and what are we doing to prevent it? I translate everything into dollars. A ransomware event would cost us an estimated $4.2 million in downtime, recovery, and reputational damage. Our security program costs $1.8 million per year. That's the math I present. Whether that math is convincing depends on whether anything bad has happened recently in the news. After the MOVEit breach last year, my budget request went through in one meeting. Before that, it had been stuck in review for three months.
You said your relationship with the VP of Engineering is "fundamentally adversarial." That's a strong word.
Maybe adversarial is too strong. Structurally tense. Suki is smart and she takes security seriously. But her team is measured on velocity. How many features shipped. How fast they shipped. My team is measured on things not happening. Breaches that didn't occur. Vulnerabilities that got fixed before they got exploited. Data that stayed where it was supposed to stay. You can see how those two measurement systems create friction. When Arjun tells Suki's team that they need to fix a critical vulnerability in the authentication module before it ships, and that fix will take the developer three days, Suki sees three days of missed sprint velocity. I see three days that prevented a potential authentication bypass that could expose 200,000 user accounts.
We've gotten better at this. I buy Suki and her team lunch once a quarter to do a "security retrospective" where we review what we caught and what we missed. It helps because she can see the value in concrete terms. Last quarter we caught a server-side request forgery vulnerability in a staging environment that, if it had made it to production, would have allowed an attacker to access our internal API gateway. That finding took a developer two hours to fix. The breach it could have caused would have been, I don't know, catastrophic is not too strong a word for a fintech that handles customer financial data. When I showed Suki the timeline of "we caught this here, if we hadn't it would have been here," I could see it click. That lunch cost me $340. The security review program costs about $120,000 a year in engineering time. The breach would have cost everything.
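The SSRF pattern Travis describes is common enough to sketch in miniature: the vulnerable version fetches whatever URL the user supplies, and the two-hour fix refuses internal addresses. Hypothetical code, not the company's, and real defenses also have to handle redirects and DNS rebinding.

```python
import ipaddress
import socket
import urllib.request
from urllib.parse import urlparse

def fetch_preview(url: str) -> bytes:
    # VULNERABLE: the server fetches any URL the user supplies, including
    # internal addresses like the API gateway it can reach and you can't.
    return urllib.request.urlopen(url).read()

def fetch_preview_safe(url: str) -> bytes:
    """Resolve the host and refuse private, loopback, or link-local ranges."""
    host = urlparse(url).hostname or ""
    addr = ipaddress.ip_address(socket.gethostbyname(host))
    if addr.is_private or addr.is_loopback or addr.is_link_local:
        raise ValueError("refusing to fetch internal address")
    return urllib.request.urlopen(url).read()
```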
What's the hardest part of the job?
Knowing things I can't prove yet. I know we have risks that haven't materialized. I know our AWS configuration has gaps because we grew too fast and security was an afterthought for the first four years of the company. I know that the third-party vendor that processes our customer payment data has a security posture I'm not fully comfortable with, based on their SOC 2 report and some questions they couldn't answer during our vendor assessment. I know all of this. But I can't point to a breach and say "see?" Because the breach hasn't happened. So I'm asking the CEO for $600,000 to fund a cloud security remediation project based on risks that are currently theoretical. And the CEO, Raj, he's a reasonable person. He asks "what's the probability of this actually happening?" And I have to say "I don't know, but the impact if it does is severe." That answer is not satisfying to a CEO who thinks in expected value calculations. He wants probability times impact. I can give him impact. Probability in cybersecurity is almost impossible to quantify honestly, and anyone who tells you otherwise is selling you something.
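Travis's dilemma has a clean formal shape: expected loss is probability times impact, and only one of those is knowable. The honest version of the math is a sensitivity table. The dollar figures below are the ones from his account, using the ransomware estimate as a stand-in impact; the probabilities are the unknowable part.

```python
IMPACT = 4_200_000       # estimated breach cost from the board deck
REMEDIATION = 600_000    # the cloud security project Travis is pitching

for p in (0.05, 0.15, 0.30, 0.50):   # annual probability: nobody knows
    print(f"p={p:.2f}: expected loss ${p * IMPACT:,.0f} "
          f"vs ${REMEDIATION:,} to remediate")

# Above roughly a 14% annual probability (600k / 4.2M), the project pays
# for itself in expected value alone -- the whole argument turns on a
# number no one can honestly supply.
```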
Every job has a part nobody warns you about. What's yours?
The personal liability. In cybersecurity leadership, there's been a shift in the last few years. The SEC now requires public companies to disclose material cybersecurity incidents within four business days. The SolarWinds CISO was personally charged by the SEC for misleading investors about the company's security posture. Personally charged. Not the company. The person. That case, more than anything else in my career, changed how I think about my job. I am not just managing a security program. I am, in a very real legal sense, the person who will be accountable if something goes wrong and someone decides we didn't do enough.
Claire asks me sometimes if it's worth it. The title, the salary, the seat at the table. The answer is yes, on most days. But there's a reason I keep meticulous records of every security recommendation I've made, every budget request I've submitted, every risk acceptance decision the business has made. I have a folder in my email called "receipts." It has every email where I flagged a risk and leadership chose to accept it. I hope I never need that folder. But it exists because this is a job where, if the worst happens, the first question people ask is "who knew?" And I always knew. That's literally the job description.
Would They Do It Again?
Nate
The help desk taught me the infrastructure, and the infrastructure is what makes me good at this. So I can't actually skip it. But the pay during those years was $38,000 and I was living with a roommate at 29. The investment paid off. I make $82,000 now. But the people who got CS degrees and started at $70,000 in SOC roles without the three-year detour through password resets, I think about them sometimes.
Abena
The hacking is real and it's the reason I do this. The rush of initial access, the puzzle of chaining vulnerabilities, the satisfaction of a clean exploitation path. That's all true. But if someone can't tolerate spending 70 percent of their time writing reports, this job will make them miserable. I've seen it. Two people on my team quit in the last year. Both of them loved the technical work and hated the deliverables. They went to bug bounty programs where the writing is optional. They also went to unpredictable income. There's always a trade-off.
Travis
I could make more money as an individual contributor at a bigger company. I'd sleep better. I'd carry less liability. But I got into this field because I care about whether systems are secure, and the place where that caring translates into actual change is at the leadership table. The person who decides whether to fund the remediation project or accept the risk, that's the person who determines whether the security team's work matters. I want to be that person. Even on the days it costs me sleep.
Frequently Asked Questions About Cybersecurity Careers
What does a cybersecurity analyst actually do all day?
It depends heavily on the role. SOC analysts spend most of their time triaging security alerts, investigating potential incidents, and documenting their findings. Penetration testers simulate attacks against client networks and applications, then write detailed reports. Security engineers build and maintain defensive tools and systems. CISOs and security managers focus on strategy, budgets, compliance, and translating technical risk into business language for leadership. The common thread is that communication and documentation are at least half the job across all roles.
Is cybersecurity hard to get into?
Entry-level cybersecurity is competitive. While there is a widely cited workforce gap, most unfilled positions are at the mid and senior levels. For entry-level SOC analyst roles, employers typically want a Security+ certification, networking fundamentals, and either a relevant degree or 1 to 3 years of IT experience. The most common path into cybersecurity is through adjacent IT roles like help desk or system administration.
Do you need a degree for cybersecurity?
Not strictly, but about 60 percent of job postings list a bachelor's degree as a requirement. Many employers accept relevant certifications and experience in place of a degree. Key certifications include CompTIA Security+ for entry-level, OSCP for penetration testing, and CISSP for senior and management roles.
What certifications do you need for cybersecurity?
It depends on the role. Security+ is the entry-level standard. CySA+ is valued for analyst positions. OSCP is the gold standard for penetration testing. CISSP is expected for senior roles and management. Cloud-specific certifications like AWS Security Specialty are increasingly important. Most professionals accumulate 3 to 5 certifications over their career, and continuing education credits are required to maintain most of them.