Day in the Life of a Cybersecurity Analyst: Three Real Days
Three cybersecurity professionals wrote down everything they did during one ordinary workday. A SOC analyst working the night shift on a Thursday in Phoenix. A vulnerability management analyst on a Tuesday in Seattle. A threat intelligence analyst on a Wednesday in Washington DC. None of these days involved a breach. All of them were full.
These characters are composites, built from dozens of real accounts, interviews, and community threads. The people aren't real. The experiences are.
Ronda's Thursday Night
Alarm goes off. Blackout curtains are doing their job. My apartment is dark enough that I genuinely don't know what time it is for about four seconds, which is a strange way to live. My cat, Mango, is sitting on my chest. I feed him, make coffee, eat a bowl of cereal standing over the sink because sitting down for breakfast when it's dinnertime for the rest of the world feels absurd. The coffee is strong. It needs to be.
Shower, uniform (jeans, credit union polo that I've never seen anyone iron, running shoes because the SOC floor is concrete and my back knows it by 4 AM). Pack the backpack: laptop, charger, water bottle, gummy bears, a paperback I've been reading for three months and am 90 pages into. The drive to the operations center is 17 minutes. I listen to a cybersecurity podcast about a zero-day in Ivanti VPN appliances. This is how I learn about things that might show up in my queue tonight. Other people listen to podcasts about true crime. I listen to podcasts about software vulnerabilities. Both are about people finding things they weren't supposed to find.
Arrive at the operations center. It's a windowless room in a nondescript office building in Tempe. Three rows of desks, six monitors across the front wall showing dashboards from our SIEM (Elastic Security), our endpoint detection platform (SentinelOne), network flow analysis, and a world map with live threat intelligence feeds that honestly looks impressive but that I rarely reference because the useful data is in the query results, not the map. The map is for visitors. When the board tours the SOC, they look at the map and feel protected. When I work the SOC, I look at the Elastic query results and feel responsible.
Shift handoff with Lamar, the day shift Tier 2. He's been here since 6:30 this morning. His eyes look how mine will look at 6:30 tomorrow morning. He walks me through the day: 42 alerts processed, three escalated to him from the Tier 1 analysts, all three closed as false positives after investigation. One notable: a brute force attempt against a member-facing portal, approximately 1,200 failed login attempts from a block of IPs in Indonesia over four hours. The IPs were blocked by the WAF automatically, no member accounts were compromised, and Lamar submitted the IP block to our threat intelligence feed. He's thorough. I like working the opposite shift from Lamar because his handoff notes are complete and his documentation is clean. The Tier 1 analyst on my shift tonight is Xavier, who is 24, sharp, and asks a lot of questions. I like Xavier because his questions are good and because he'll be better than me in about two years, which is motivating and slightly terrifying.
First gummy bear. Settle into the station. Pull up the queue. Four alerts carried over from the day shift, all low severity. First one: a user account attempting to access a file share they don't have permission for. This happens about 15 times a day. Usually it's someone who was transferred to a different department and their access wasn't updated. I check the user: sure enough, Maria Delgado in member services was moved to compliance three weeks ago and her access permissions still reflect her old role. I close the alert and send a ticket to IT to update her access. The alert was not a security incident. It was an HR process gap that showed up as a security event. About 30 percent of my alerts are like this. Security tools can't distinguish between a terminated employee trying to access files they shouldn't and someone whose manager forgot to submit a transfer form.
Xavier pulls up an alert and flags it for me. Outbound traffic from an ATM controller to an external IP that isn't in our whitelist. ATM controllers should only communicate with a very specific set of IPs: the processor, the monitoring service, and our internal management network. Any traffic to an IP outside that list is automatically flagged as high severity. My heart rate goes up a little. ATM compromises are real and they're expensive. A credit union in Ohio got hit two years ago and lost $3.4 million before they caught it. I pull the packet capture. The traffic is HTTPS, port 443, going to an AWS IP range. I check the certificate: it's from a legitimate software vendor that makes ATM management software. Then I check with our ATM operations team lead, a guy named Hank, who confirms that they pushed a software update to the ATM controllers yesterday and the update phones home to the vendor's cloud service for license validation. Nobody told us. Nobody updated the whitelist. The high-severity alert that made my pulse jump was a software update that someone forgot to communicate to the security team. I add the IP to the whitelist, close the alert, and send a polite but firm email to Hank about the importance of notifying the SOC before making network changes. This email will be ignored. I will send it again in three months when the same thing happens.
Quiet period. The alert queue is empty. Xavier is working through a backlog of detection rule tuning tickets. I review the Ivanti advisory from the podcast and check whether any of our VPN appliances are affected. We use Palo Alto GlobalProtect, not Ivanti, so we're not directly impacted. But I check the indicators of compromise from the advisory against our logs anyway, because the TTPs (tactics, techniques, and procedures) used in the Ivanti exploitation could theoretically be adapted to target other VPN platforms. I find nothing. This is 40 minutes of work that results in no finding, no alert, no report. Just confidence that we're not affected. Confidence is the invisible product of the SOC. Nobody pays for confidence explicitly, but it's what keeps the credit union's 180,000 members from waking up to empty accounts.
Write my weekly threat intelligence summary. This goes to the CISO, the IT director, and the compliance team every Friday morning. I write it Thursday night because Friday mornings after a night shift I'm incoherent. The summary covers notable threat activity relevant to financial institutions: the Ivanti VPN advisory, a new phishing campaign targeting credit union members with fake balance alerts (we've seen two attempts this week, both caught by the email gateway), and a reminder that our PCI DSS assessment is in six weeks and we need to complete the pre-assessment checklist. The summary is about 600 words. It takes me an hour because every sentence needs to be accurate, concise, and accessible to a compliance officer who doesn't know what an NTLM relay is and a CISO who very much does. Writing for both audiences simultaneously is a skill that took me two years to develop and is not taught in any certification.
Second gummy bear. Midnight is when the building goes from "quiet" to "empty." The lights in the hallway outside the SOC are on motion sensors and they've stopped triggering. The janitorial staff finished at 11. It's me, Xavier, and the hum of server fans through the wall. Xavier tells me he's been studying for the GCIH certification. I tell him to focus on the incident handling phases and the tool-specific questions, because the exam weights those heavily. He asks if I think the cert is worth it. I say yes, but only if he negotiates a raise before he tells his manager he passed. Once your manager knows you have the cert, the leverage disappears. He writes this down. I appreciate that he writes things down.
Alert. SentinelOne flags a suspicious process execution on a workstation in the mortgage department. The process is PowerShell executing an encoded command. Encoded PowerShell is not always malicious, but it's suspicious enough that SentinelOne flags it as a threat and our SIEM correlates it with the user's activity. I decode the command. It's a script that queries Active Directory for all user accounts and exports them to a CSV file. That's a reconnaissance activity. In a real intrusion, this is what an attacker does after gaining initial access: they enumerate the environment to find targets. My adrenaline is up.
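The decoding step Ronda describes is mechanical: PowerShell's -EncodedCommand argument is Base64 over a UTF-16LE string, so reversing it is a two-step transform. A minimal sketch of that triage step in Python; the sample command here is invented for illustration, not the actual script from the incident:

```python
import base64

def decode_encoded_command(b64: str) -> str:
    """Decode a PowerShell -EncodedCommand argument.

    PowerShell Base64-encodes the UTF-16LE bytes of the command,
    so decoding is base64 -> bytes -> UTF-16LE string."""
    return base64.b64decode(b64).decode("utf-16-le")

# Hypothetical sample: an AD-enumeration one-liner of the kind
# described above, round-tripped through the same encoding.
sample = "Get-ADUser -Filter * | Export-Csv users.csv"
encoded = base64.b64encode(sample.encode("utf-16-le")).decode("ascii")

print(decode_encoded_command(encoded))
```

In practice an analyst would run the decode in a sandboxed environment and treat the decoded script itself as potentially hostile input, never executing it.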
I check the user account. It belongs to Brett Nakamura in IT. Brett is a system administrator. I check the time: 2:40 AM. I check Brett's VPN session: he's connected from his home IP. I call Brett. He answers on the fifth ring and sounds exactly like someone who was woken up. He says he didn't run any scripts tonight. His laptop has been off since 9 PM. We have a problem.
I contain the workstation via SentinelOne, cutting its network access. I escalate to the on-call incident response lead, who tonight is Phuong. She picks up immediately, which is one of the reasons I respect Phuong. We begin the incident response process: isolate, investigate, document. Brett's account password is changed immediately. VPN session is terminated. I start pulling logs. This will take the rest of my shift and probably several days of follow-up. At 2:40 AM on a quiet Thursday, the job changed. That's how this works. Always.
Handoff to Lamar. I walk him through the incident. He asks good questions. My notes are complete enough that he can continue the investigation without me. I eat the third gummy bear. My back hurts from the concrete floor. My eyes feel like they've been staring at a monitor for twelve hours, because they have. I drive home in the sunrise, which is the one beautiful part of working nights in Phoenix. The sky is pink and orange over the Superstition Mountains and I'm too tired to appreciate it. I get home at 6:50 AM. Mango is on the couch. I text my mom "good shift" because she worries about me working nights and the text means I'm home safe. I brush my teeth. I close the blackout curtains. I check my phone one more time, instinctively, to see if Phuong has texted about the incident. She hasn't. I set my alarm for 5:10 PM and I go to sleep.
Floyd's Tuesday
Drive to the office in Bellevue. The commute from my apartment in Capitol Hill is 35 minutes if I leave before 7:45 and an hour if I leave after. I left at 7:15. Coffee is from the Starbucks drive-through because the office coffee is bad and I've stopped pretending otherwise. My wife, Paloma, is a software developer at a different company. She works from home on Tuesdays and Thursdays. I go into the office every day because my manager, Stuart, believes that "security teams should be physically present." Stuart also works from home on Fridays. The irony is not lost on me.
Sit down, stand up (standing desk), open Tenable. Tenable is our vulnerability scanning platform. It scans our entire infrastructure, about 3,200 assets, every week and produces a report of every known vulnerability on every system. This week's scan completed at 2 AM last night. I open the results: 11,847 total findings. That number sounds apocalyptic and it is completely normal. Most of those findings are informational or low severity. The ones I care about are the criticals and highs. Today: 47 critical, 312 high. Those are my week.
Start with the criticals. Sort by CVSS score. The top finding is a critical vulnerability in Apache Struts on three application servers. CVSS 9.8. Remote code execution. This is the kind of vulnerability that ransomware groups exploit within days of a patch being available. The patch has been available for four days. Our servers are not patched. I check the change management ticket: the infrastructure team submitted a patch request on Friday, but the change advisory board meets on Thursdays, so the earliest the patch can be approved is two days from now. If approved, deployment would happen over the weekend. That means these servers will be vulnerable for a minimum of eight days after the patch was released. Eight days during which anyone scanning the internet can find these servers and exploit a known, documented, 9.8-severity vulnerability.
I escalate to Stuart. He escalates to the infrastructure director, a guy named Mel. Mel says the change advisory board process exists for a reason and he can't bypass it for every critical vulnerability. Stuart asks Mel how many critical vulnerabilities we've had this year. The answer is 14. Mel says "exactly, I can't emergency-patch 14 times." I want to say that yes, actually, you can and should emergency-patch critical remote code execution vulnerabilities, but that's Stuart's conversation to have, not mine. My job is to identify the risk. Their job is to accept it or mitigate it. They're accepting it until Thursday. I document the risk acceptance in an email and CC the CISO. The CISO replies within 20 minutes and says "I'll handle it." I don't know what "handle it" means, but the CISO outranks Mel. The Apache Struts patches are deployed by 3 PM. Change advisory board was bypassed. Four people and six hours to accomplish something a single infrastructure engineer could have done in 30 minutes.
Team standup. Stuart, me, Gianna (the other vulnerability management analyst), and Reggie (our junior analyst who started two months ago). We review the scan results, divide the findings. I take the criticals and the top 100 highs. Gianna takes the remaining highs. Reggie takes the medium-severity findings that have been open for more than 90 days, which is our "aging vulnerabilities" backlog. The backlog has 847 items. Some of them have been open for over a year. They're on systems that the infrastructure team calls "legacy" and I call "liability." Nobody patches them because nobody owns them. The original system owners left the company or moved to other roles. The systems still run because something depends on them. Nobody knows exactly what. Patching them might break something. Not patching them might get us breached. This is the actual texture of vulnerability management that certification courses never prepare you for.
Lunch at my desk. A sandwich I packed this morning: turkey, provolone, mustard. Paloma texts me a photo of our dog, a corgi named Falafel, asleep on her laptop keyboard. I show it to Gianna, who shows me a photo of her cat asleep on her laptop keyboard. We briefly discuss whether pets actively seek out keyboards or if keyboards just happen to be warm flat surfaces in the proximity of people they want attention from. This conversation is the most relaxing part of my day.
Spend two hours reviewing the high-severity findings and determining remediation priorities. This is not technical work. This is political work. Every vulnerability I flag for remediation creates work for another team. The database team has to patch their PostgreSQL instances. The DevOps team has to update their container base images. The infrastructure team has to schedule maintenance windows. None of these teams report to me. I have no authority over them. I have a spreadsheet, a CVSS score, and the ability to write emails that sound urgent without sounding like I'm telling them how to do their jobs. This is the actual skill of vulnerability management. Not finding vulnerabilities. Tenable finds them. Getting people to fix them. That's the job.
Meeting with the compliance team about our upcoming HITRUST assessment. They need evidence that we have a vulnerability management program. I have the evidence. I have a 14-page policy document, scan schedules, remediation SLAs, exception forms, and quarterly metrics reports. What I don't have is a clean vulnerability management posture, because we have 847 aging medium-severity findings and three legacy systems that haven't been patched since 2023. The compliance team needs to present our program as "effective." I need to present our program as "honest." These are sometimes the same thing and sometimes not.
Leave the office. The drive home takes 52 minutes because it's Seattle and it's 5 PM. I call Paloma from the car and she asks how my day was. I say "I got three servers patched that should have been patched four days ago and wrote twelve emails about it." She says "that sounds productive?" and the question mark at the end is the whole thing. I can hear Falafel barking in the background. I get home at 6:07 PM. Falafel is excited. Paloma is making pasta. I put my laptop bag by the door and try not to think about the 847 aging vulnerabilities. I mostly succeed. Mostly.
Amira's Wednesday
Up before the alarm. Omar is still sleeping. I make Turkish coffee on the stove because instant coffee is an affront to the concept of mornings and I'll die on that hill. Check my phone while the coffee brews: three notifications from our threat intelligence platform (Recorded Future), two Slack messages from my team lead Sanjay, and a text from my sister Fatima asking if I'm coming to dinner Sunday. The Recorded Future alerts are about a new advisory from CISA regarding a Chinese state-sponsored threat group that's been targeting defense industrial base companies. Which is what we are. This is relevant. This is going to eat my day.
Drive to the office. 22 minutes from our apartment in Rosslyn. The office is in a building in Crystal City that looks like every other building in Crystal City: glass, concrete, badge access, and a lobby that could belong to a dentist's office or a defense contractor and you genuinely cannot tell. I badge in, drop my bag, and head to the SCIF. The SCIF (Sensitive Compartmented Information Facility) is where I do most of my analysis work when it involves classified intelligence. No phones, no personal electronics, no smart watches. I leave my phone in a locker outside. The transition from "connected" to "disconnected" is jarring every time. I go from getting notifications every three minutes to complete silence. My colleague Russell says the SCIF is the only place in America where you can't get a push notification, and he means it as a complaint, but I've come to appreciate the quiet.
Read the CISA advisory in full. The threat group, which the intel community tracks under a specific designator I can't share here, has been observed using spear-phishing emails with PDF attachments that exploit a vulnerability in a popular PDF rendering library. The targets are companies in the defense supply chain, specifically companies that manufacture components for naval weapons systems. We manufacture components for naval weapons systems. The advisory includes indicators of compromise: file hashes, sender email domains, command-and-control IP addresses, and MITRE ATT&CK technique mappings. My job for the next several hours is to determine whether any of these indicators are present in our environment and to produce an intelligence assessment for our security operations team and our program managers.
Search our email gateway logs for the sender domains listed in the advisory. Four domains. I search each one across the last 90 days of inbound email. Zero matches. Good. Search our DNS logs for the C2 IP addresses. Six IPs. Zero matches across 90 days. Good. Search our endpoint detection platform for the file hashes. Three hashes. Zero matches. The indicators are not present in our environment. This is the outcome I wanted and it took about 90 minutes of careful searching to confirm. Ninety minutes that produced a "no finding" result. The value of that "no" is enormous and completely invisible. Nobody congratulates you for not being compromised. But the work to establish that fact is real.
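At its core, the sweep Amira describes is set membership checked at scale: every indicator from the advisory against every relevant log record. A simplified sketch of that logic; the indicator values, field names, and log records below are all invented for illustration (real sweeps run as queries inside the SIEM, not as scripts over exported logs):

```python
# Hypothetical advisory indicators, grouped by type.
iocs = {
    "domains": {"invoice-update.example", "secure-docs.example"},
    "ips": {"203.0.113.10", "198.51.100.77"},
    "hashes": {"d41d8cd98f00b204e9800998ecf8427e"},
}

def sweep(records, iocs):
    """Return log records whose fields match any advisory indicator.

    Sets give O(1) membership tests, which is what makes checking
    thousands of records against dozens of indicators cheap."""
    hits = []
    for rec in records:
        if (rec.get("sender_domain") in iocs["domains"]
                or rec.get("dst_ip") in iocs["ips"]
                or rec.get("file_hash") in iocs["hashes"]):
            hits.append(rec)
    return hits

logs = [
    {"sender_domain": "partner.example", "dst_ip": "192.0.2.1"},
    {"sender_domain": "invoice-update.example", "dst_ip": "192.0.2.2"},
]
print(sweep(logs, iocs))  # one hit: the record with the matching sender domain
```

The "zero matches" result Amira reports is exactly this function returning an empty list across 90 days of records, repeated once per log source.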
Team meeting with Sanjay, Russell, and our junior analyst Priscilla. We review the CISA advisory as a group. Sanjay assigns follow-up tasks: I'll write the intelligence assessment and update our detection rules. Russell will coordinate with our email security team to add the sender domains to the block list proactively. Priscilla will monitor our threat intelligence feeds for any additional indicators related to this group over the next two weeks. The meeting takes 35 minutes. We are efficient. Sanjay runs meetings the way good producers run film sets: on time, on task, no tangents.
Write the intelligence assessment. This is a structured analytic product, not a memo. It follows our template: executive summary, threat actor profile, technical analysis, indicators of compromise, recommended actions, confidence assessment. The confidence assessment is the part that took me two years to learn how to write. It requires me to state how confident I am in my analysis and what would change my assessment. "High confidence that these indicators are not present in our environment based on 90 days of log analysis. Moderate confidence that the threat actor has not previously targeted our specific segment of the defense supply chain, based on publicly available reporting. Low confidence in assessing the threat actor's future targeting intentions." Each level of confidence has to be justified. This document will be read by our CISO, our VP of programs, and potentially our government customer. Every word matters. The assessment takes me about three hours to write. It's 2,400 words. That's about 800 words per hour, which sounds slow until you realize that each sentence has to be defensible, sourced, and calibrated for an audience that includes both technical and non-technical readers.
Lunch. Late. The cafeteria downstairs serves passable chicken shawarma on Wednesdays, which is the only day I eat in the cafeteria. I sit with Russell, who is complaining about a vendor that sold us a threat intelligence feed that produces 400 alerts per day, of which approximately 12 are actionable. He says "we're paying $85,000 a year for a firehose of noise." I say "write a memo." He says "I've written three memos." We eat our shawarma and contemplate the memos.
Back to the SCIF. Update the detection rules in our SIEM to look for the behavioral patterns described in the CISA advisory, not just the specific indicators. The indicators, like IP addresses and file hashes, change frequently. The behaviors, like the initial access technique and the lateral movement pattern, are harder for the adversary to change. I write three new correlation rules that look for PDF attachments from external senders followed by process execution patterns consistent with the described exploitation chain. This is the part of my job that I find genuinely satisfying: taking intelligence and turning it into detections that will fire if the adversary tries this technique against us, even if they change their infrastructure. The rule writing takes about two hours, including testing. I test each rule against synthetic data to make sure it fires correctly and doesn't produce false positives against legitimate PDF-related activity, which happens a lot when your correlation rule is "someone opened a PDF and then PowerShell ran."
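The behavioral correlation Amira describes boils down to sequencing: event A (PDF from an external sender) followed by event B (a suspicious process) on the same host within a time window. A sketch of that pattern in Python, assuming an invented event schema; real correlation rules would be written in the SIEM's own query language, and the five-minute window is an arbitrary illustrative choice:

```python
from datetime import datetime, timedelta

def correlate(events, window=timedelta(minutes=5)):
    """Flag hosts where a PDF open is followed by PowerShell execution
    within the window -- a behavioral rule, not an indicator match."""
    alerts = []
    last_pdf_open = {}  # host -> time of most recent PDF open
    for ev in sorted(events, key=lambda e: e["time"]):
        if ev["type"] == "pdf_open":
            last_pdf_open[ev["host"]] = ev["time"]
        elif ev["type"] == "process" and ev["name"] == "powershell.exe":
            opened = last_pdf_open.get(ev["host"])
            if opened is not None and ev["time"] - opened <= window:
                alerts.append(ev["host"])
    return alerts

# Synthetic test data, like the data used to validate the rule:
events = [
    {"host": "ws1", "type": "pdf_open", "time": datetime(2024, 1, 1, 9, 0)},
    {"host": "ws1", "type": "process", "name": "powershell.exe",
     "time": datetime(2024, 1, 1, 9, 2)},
    {"host": "ws2", "type": "process", "name": "powershell.exe",
     "time": datetime(2024, 1, 1, 9, 2)},
]
print(correlate(events))  # ws1 fires; ws2 ran PowerShell with no preceding PDF
```

The false-positive problem Amira mentions lives in this exact structure: "PDF then PowerShell" describes plenty of legitimate activity, which is why each rule gets tested against synthetic data before deployment.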
Leave the SCIF, retrieve my phone from the locker. Eight hours of notifications load simultaneously. My phone buzzes for about 45 seconds straight. There are 23 Slack messages, 14 emails, a text from Omar asking what I want for dinner, and a text from Fatima confirming dinner on Sunday. I respond to Omar ("anything, I'm tired") and Fatima ("yes, bringing dessert") and skim the Slack messages for anything urgent. Nothing is urgent. The urgent thing happened inside the SCIF, where nobody could text me about it, which is either poetic or just how classified work functions.
Home. Omar is making chicken. The apartment smells good. He asks how my day was. I say "good, did a lot of writing." He asks what I wrote about. I say "can't really talk about it." This exchange happens maybe three times a week. He's used to it. When we were dating, it bothered him. Now it's just the shape of our life. He tells me about his day at the architecture firm. He designed a lobby for a medical office building. I can hear every detail about his day. He can hear approximately 20 percent of mine. The asymmetry is the price of a clearance, and nobody mentions it in the job description.
Frequently Asked Questions About Cybersecurity Careers
What does a typical day in cybersecurity look like?
It varies significantly by role. SOC analysts work in shifts monitoring security alerts and investigating potential incidents. Vulnerability management analysts review scan results and coordinate patching with IT teams. Threat intelligence analysts research emerging threats and produce analytical reports. All roles involve substantial documentation and communication, and the ratio of routine work to genuine incidents is heavily weighted toward routine.
Do cybersecurity analysts work at night?
Many do, particularly those in SOC roles. Cyberattacks occur around the clock, so organizations with 24/7 SOC coverage staff analysts on rotating shifts, typically in 8- or 12-hour blocks. Night shift work is common at managed security services providers and large enterprises. Roles in vulnerability management, threat intelligence, and governance typically follow standard business hours.
How much of cybersecurity is sitting at a computer?
The vast majority. Most cybersecurity roles are desk-based and involve extended screen time across monitoring dashboards, log analysis, report writing, and email. Physical security assessments and on-site audits are the exception. Meetings and documentation consume a significant portion of every role.