Career Dish · Real jobs, real talk

Day in the Life of a Web Developer: Three Real Days

~21 min read · 3 voices

Three web developers wrote down everything they did on one ordinary workday. One is a fullstack developer at a SaaS company in Denver. One is a frontend developer at an e-commerce startup in Los Angeles. One is a senior backend engineer at a fintech company in Chicago. No dramatic days. Just the work.

These characters are composites, built from dozens of real accounts, interviews, and community threads. The people aren't real. The experiences are.

Sanjay's Tuesday

Sanjay · 30 · Tuesday
Fullstack developer at a mid-size SaaS company in Denver, Colorado
5 years in web dev · Remote

7:10 AM

Alarm goes off. I hit snooze once, which I tell myself is a reasonable amount. Preeti is already up. I can hear her on the other side of the room talking to someone on a call. We both work remote, and our desks are in the same room, which sounds like a recipe for disaster but honestly works fine. We bought noise-canceling headphones on the same day during our first week of this arrangement. That was two years ago and they've been the best $300 either of us has spent.

7:25 AM

Coffee. I do pour-over every morning. Weigh the beans on a small kitchen scale, grind them, heat the water to 205 degrees, bloom for 30 seconds, then a slow spiral pour over the next 3 minutes and 30 seconds. Total brew time: exactly 4 minutes. Preeti makes fun of me for timing it with a phone timer. She drinks instant coffee and claims she can't taste the difference. I know she's lying, but I also know this isn't an argument worth having. I eat a bowl of oatmeal at my desk while Slack loads.

8:00 AM

Slack is already busy. 14 unread messages in the engineering channel. Most of it is from yesterday afternoon, people in different time zones finishing their day after I logged off. I scan through. Nothing urgent. Lyra, a frontend developer who started three weeks ago, posted a question at 11 PM last night asking about our CSS naming conventions. She does this a lot. Asks questions at odd hours, long and detailed, with screenshots. I like that about her. She's thorough. I type a quick answer and link her to the internal style guide that nobody reads but that I wrote two years ago and am quietly proud of.

8:30 AM

Morning standup on Zoom. There are 9 of us on the call. Our company makes scheduling software for dental practices, about 250 employees total, with the engineering team at around 40. My team handles the patient communication features. Orin, our PM, runs the standup. He's efficient about it. Everyone gives their update in under a minute. My update: still working on the SMS appointment reminder feature, Twilio integration is 90% done, stuck on a timezone edge case. Bev, our tech lead, asks which edge case. I say Arizona. She says "oh no." She knows. Arizona doesn't observe daylight saving time, except for the Navajo Nation, which does. So a dental office in Tempe and a dental office in Window Rock, both in Arizona, can be in different time zones for half the year and the same time zone for the other half. Our timezone library handles this, but our database stores office locations by state, not by whether they're on the Navajo Nation. Bev says to use the IANA timezone database and let the office pick their timezone explicitly during onboarding. That's the right answer. It means I need to add a timezone picker to the office settings page, which is about half a day of work I wasn't planning on.

9:00 AM

I open VS Code. My branch is called feature/sms-reminders and I've been on it for about a week and a half. The backend is Node and Express, the frontend is React, and the database is PostgreSQL. The Twilio integration itself was straightforward. You call their API with a phone number, a message body, and your account credentials, and they send the text. The hard part is everything around it. When to send the reminder (24 hours before? 2 hours before? configurable?), what to say in the message, how to handle patients who opt out, and now, what timezone the office is actually in. I pull up the office settings component and start adding a timezone dropdown. There are 419 entries in the IANA timezone database. Nobody needs all 419. I filter it to US timezones, which gets me down to 29. That's better.
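
The dropdown logic itself is small. Sketched from memory, with an illustrative allowlist rather than our actual one:

```ts
// Every zone the runtime knows (ES2022's Intl.supportedValuesOf),
// narrowed to a curated US allowlist so the dropdown stays readable.
const US_ZONES = new Set([
  'America/New_York',
  'America/Chicago',
  'America/Denver',
  'America/Phoenix', // MST year-round: no daylight saving time
  'America/Los_Angeles',
  'America/Anchorage',
  'Pacific/Honolulu',
  // ...plus the rest of the curated list, 29 in total
]);

const timezoneOptions = Intl.supportedValuesOf('timeZone').filter((zone) =>
  US_ZONES.has(zone),
);
```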

10:15 AM

Timezone picker is done. I wired it to the backend so when an office selects their timezone, it saves to the offices table in PostgreSQL. Then the SMS reminder scheduler reads that timezone when calculating the send time. I write a test for the Arizona case specifically. The test creates a fake office in Tempe with America/Phoenix as the timezone, schedules a reminder for 9 AM local time, and asserts that the UTC conversion is correct even when the rest of the country is on daylight saving time. The test passes. I feel a small, private sense of victory that nobody else will ever appreciate. Timezone bugs are like that. They're invisible when they work and catastrophic when they don't.
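
The assertion hinges on America/Phoenix staying at UTC-7 all year while its neighbors shift. A jest-style sketch of it, using Luxon for the conversion (the real test setup would differ):

```ts
import { DateTime } from 'luxon';

// 9 AM local in Tempe on a July morning, while the rest of the
// Mountain zone is on daylight saving time.
const tempe = DateTime.fromObject(
  { year: 2025, month: 7, day: 1, hour: 9 },
  { zone: 'America/Phoenix' },
);

// Phoenix is UTC-7 all year, so 9 AM local is 16:00 UTC...
expect(tempe.toUTC().hour).toBe(16);

// ...while 9 AM in Denver (on DST, UTC-6) is 15:00 UTC.
const denver = tempe.setZone('America/Denver', { keepLocalTime: true });
expect(denver.toUTC().hour).toBe(15);
```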

10:45 AM

Lyra pings me on Slack. She's working on a different part of the app and ran into a React state issue where a component re-renders six times on a single dropdown change. She shares her screen. I can see the problem in about 30 seconds. She's setting state inside a useEffect that depends on the state she's setting, which creates a loop. I've made this exact mistake before. Everyone who writes React long enough makes this mistake. I explain the fix and she says "oh, that makes sense, I feel dumb." I tell her that if she's not making that mistake at least once in her first month, she's not writing enough React. She laughs. I mean it.
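
For anyone who hasn't hit it yet, the bug and the fix look roughly like this. The component details are invented, but the shape is the one Lyra had:

```tsx
import { useEffect, useMemo, useState } from 'react';

// Stand-in for whatever derives the dropdown's option list.
declare function buildOptions(selected: string): string[];

function Buggy({ selected }: { selected: string }) {
  const [options, setOptions] = useState<string[]>([]);
  // The effect sets state it also depends on: every setOptions call
  // produces a new array, which re-triggers the effect. Render loop.
  useEffect(() => {
    setOptions(buildOptions(selected));
  }, [selected, options]);
  return <ul>{options.map((o) => <li key={o}>{o}</li>)}</ul>;
}

function Fixed({ selected }: { selected: string }) {
  // Derived data doesn't need state at all; compute it during render.
  const options = useMemo(() => buildOptions(selected), [selected]);
  return <ul>{options.map((o) => <li key={o}>{o}</li>)}</ul>;
}
```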

11:30 AM

Preeti and I eat lunch together at the kitchen table, which is six feet from our desks. She's telling me about a campaign she's working on. I'm telling her about the Arizona timezone thing. She says "wait, Arizona doesn't do daylight saving time?" I say "no, and it gets worse." She holds up her hand and says "I actually don't need to know how it gets worse." Fair. I eat a sandwich and we talk about what to make for dinner. These midday breaks are my favorite part of working remote. Forty-five minutes of not thinking about code.

12:15 PM

Back at my desk. I have a code review from Bev on an earlier pull request. She left 4 comments. Two are style nits, one is a suggestion to rename a variable from "data" to "appointmentSlots" for clarity, and one is a real issue: I forgot to handle the case where the Twilio API returns a 429 (rate limited). She's right. If a dental office has 200 appointments in a day and we try to send all the reminders at once, Twilio will throttle us. I need a queue with rate limiting. I add a note to my Jira ticket and start sketching out a simple job queue using Bull, which is a Redis-based queue library for Node. This is a bigger piece of work than I expected. Probably two days.

1:30 PM

I'm deep in the queue implementation when Orin pings me. "Hey, did you see the staging deploy?" I did not. I pull up the staging environment in Chrome. The billing page looks wrong. The payment summary table has lost its borders and the button spacing is off. Something in the CSS broke. I check the deploy log. A PR from another team merged this morning that updated a shared stylesheet. They changed the base table styles and it cascaded into the billing page. I didn't write the billing page and I didn't write the CSS change, but I'm the one who noticed it in staging. In web development, noticing things is a skill that nobody puts on a resume but everyone relies on.

2:00 PM

I message the developer who merged the CSS change. His name is Pranav. He's on a different team. I send him a screenshot of the billing page before and after and say "I think your table styles PR affected billing." He responds in about two minutes with "oh no, let me look." He finds the issue. He'd removed a border-collapse rule from the shared stylesheet that the billing table depended on. He pushes a fix and the staging deploy kicks off again. Total time from noticing the bug to the fix being deployed: about 35 minutes. That's a good turnaround. Some CSS regressions live in staging for weeks because nobody clicks through every page after every deploy. We probably should have visual regression tests, but we don't. It's on the backlog. The backlog is where good ideas go to age gracefully.

Timezone bugs are like that. They're invisible when they work and catastrophic when they don't.
Sanjay
2:45 PM

I get back to the queue work. Bull needs a Redis connection, which we already have in our infrastructure for session management. I configure the queue to process SMS sends at a rate of 10 per second, which is well under Twilio's rate limit. Each job in the queue contains the patient phone number, the message body, and the send time. Failed jobs retry 3 times with exponential backoff. I write this in about an hour. Most of it is boilerplate. The actual logic is maybe 40 lines of code. The config, the error handling, the logging, and the test setup are another 200 lines. That ratio is pretty normal. The thing that does the work is always smaller than the thing that makes sure the work gets done safely.
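
The whole thing boils down to a few queue options. Something like this, with the connection details and Twilio credentials as stand-ins:

```ts
import Queue from 'bull';
import twilio from 'twilio';

const twilioClient = twilio(process.env.TWILIO_SID, process.env.TWILIO_TOKEN);

// Rate-limited queue: at most 10 jobs per 1000ms, comfortably under
// Twilio's throttle, with 3 retries on exponential backoff.
const smsQueue = new Queue('sms-reminders', {
  redis: { host: process.env.REDIS_HOST, port: 6379 },
  limiter: { max: 10, duration: 1000 },
  defaultJobOptions: {
    attempts: 3,
    backoff: { type: 'exponential', delay: 60_000 },
    removeOnComplete: true,
  },
});

smsQueue.process(async (job) => {
  const { to, body } = job.data;
  await twilioClient.messages.create({
    to,
    from: process.env.TWILIO_FROM,
    body,
  });
});
```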

4:00 PM

I push my branch and open a draft PR. It's 847 lines changed across 12 files. I add Bev as a reviewer. She won't get to it until tomorrow, which is fine. I write a PR description that explains the timezone picker, the queue system, and the Arizona edge case. I include a screenshot of the timezone dropdown. Good PR descriptions save everyone time. Bad ones start a thread of clarifying questions that takes longer than the review itself.

4:30 PM

I check Slack one more time. Lyra posted a screenshot of her component working, the one with the re-render bug. She tagged me and said "fixed, thanks Sanjay." It looks good. Orin posted the sprint metrics: we're on track, 14 of 18 story points delivered with 3 days left. I close my laptop.

5:15 PM

Preeti and I go for a walk around the neighborhood. It's April and Denver is doing that thing where it's 62 degrees and sunny but there's still a patch of old snow in the shade next to the garage. She asks how my day was. I say "fine, I fixed a timezone bug and set up a message queue." She says "cool." And it is cool, in a way that's hard to explain to someone who didn't spend the morning learning that Arizona exists in two time zones simultaneously. But Preeti doesn't need the explanation. She just needs to know the day was fine. I start thinking about dinner.


Liz's Wednesday

Liz · 26 · Wednesday
Frontend developer at an e-commerce startup in Los Angeles, California
2 years in web dev · Bootcamp grad · First dev job

8:40 AM

I wake up to a Slack notification from Grady, our founder. Sent at 11:47 PM last night. It says "thinking about the pet customization flow, can we add a live preview? like they see the portrait style updating in real time as they choose options?" I read it twice. I screenshot it and send it to June, my manager and the only other frontend developer. She replies with a single emoji: the skull. That means she saw it too and we'll discuss it at standup and politely explain that real-time image rendering in the browser is a different project entirely. Grady is a good founder. He's also a person who has ideas at midnight and sends them immediately with no filter. You learn to sort his Slack messages into "yes," "maybe in Q3," and "that's a different company."

9:00 AM

I make coffee in my apartment. My roommate Toni is already gone. She works in film production and leaves before 7 most mornings. I won't see her until 9 or 10 PM. Our apartment is a two-bedroom in Silver Lake that costs more than I want to think about. I eat yogurt and granola at my desk, which is a standing desk that I never stand at. It's been at sitting height for about 14 months. The top section, where I'd rest my hands if I ever raised it, has a stack of library books on it. Three are overdue. I keep meaning to return them. The desk judges me silently every morning.

9:30 AM

Standup. Our startup sells custom pet portraits. You upload a photo of your dog or cat, pick a style (Renaissance, pop art, watercolor, a few others), choose a frame, and they paint a real portrait and ship it. We're 30 people and somehow doing $3 million a year in revenue. It's one of those businesses that sounds like a joke until you see the numbers. I'm building the new product customization page, which is the page where customers upload their pet photo and pick their options. June runs standup. I give my update: image upload component is working on Chrome and Firefox but failing on Safari mobile. June asks what the error is. I say it's related to HEIC. She nods. She's been here longer than me and she already knows this is going to be a bad day.

10:00 AM

The Safari bug. Here's the problem. When iPhone users take a photo, the default format is HEIC, not JPEG. Our image upload component accepts the file, sends it to the server, and the server processes it. But iPhones hand us the HEIC file as-is, and the file input reports the MIME type as "image/heif" on some iOS versions and "image/heic" on others. Our validation checks for "image/jpeg" and "image/png" and rejects anything else. So iPhone users take a photo, try to upload it, and get an error that says "unsupported file format." About 38% of our customers are on iPhones. This is not a small problem.

10:30 AM

I start reading. Stack Overflow threads, Apple developer docs, GitHub issues on the heic-convert library. The fix has two parts. First, I need to update the file validation to accept HEIC and HEIF MIME types. Second, I need to convert the HEIC file to JPEG on the client side before uploading, because our image processing pipeline on the server doesn't handle HEIC. There's a JavaScript library called heic2any that does the conversion in the browser. I install it with npm and start wiring it into the upload component. The library is 200KB, which is bigger than I'd like, but the alternative is converting server-side, and that would mean sending a 3 to 5 MB HEIC file over the network just to convert it.
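
Both halves of the fix fit in one function: widen the accepted MIME types, then convert HEIC and HEIF before upload. A sketch, with the error copy and quality setting as placeholders:

```ts
import heic2any from 'heic2any';

const ACCEPTED = ['image/jpeg', 'image/png', 'image/heic', 'image/heif'];

async function normalizeUpload(file: File): Promise<Blob> {
  if (!ACCEPTED.includes(file.type)) {
    throw new Error('Unsupported file format');
  }
  if (file.type === 'image/heic' || file.type === 'image/heif') {
    // heic2any returns a Blob, or a Blob[] for multi-image containers.
    const converted = await heic2any({
      blob: file,
      toType: 'image/jpeg',
      quality: 0.8,
    });
    return Array.isArray(converted) ? converted[0] : converted;
  }
  return file;
}
```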

11:15 AM

The conversion works in Chrome. It works in Firefox. I open Safari on my iPhone to test. The upload starts, the progress bar moves, and then nothing. The page freezes for about 8 seconds, then the conversion completes and the preview loads. Eight seconds of a frozen screen. That's not a bug, that's a user experience where 100% of people will close the tab. I check the console. The heic2any library is doing the conversion on the main thread, which blocks the UI. I need to move it to a Web Worker. I've used Web Workers exactly once before, during my bootcamp final project. I pull up the MDN docs and start reading.

12:00 PM

Lunch. I walk to a taco truck on Sunset that I go to about three times a week. Two carnitas tacos and a horchata. I sit on a bench and look at my phone. June texted me: "How's the HEIC thing going?" I reply: "Web Workers." She replies: "Fun." She's being sarcastic but also genuinely empathetic, which is a tone that only works over text when you've worked with someone for two years. I eat my tacos and try not to think about MIME types for 20 minutes. I partially succeed.

12:30 PM

Back at my desk. I set up the Web Worker. The worker runs in a separate thread, takes the HEIC file as input, does the conversion, and posts the JPEG back to the main thread. While the worker runs, I show a loading spinner and a message that says "Converting your photo..." I test it on my iPhone. The upload starts, the spinner appears, and the conversion takes about 3 seconds. The UI stays responsive. The preview loads. It works. I take a screenshot and post it in the engineering Slack channel with "HEIC upload working on Safari mobile." June reacts with three fire emojis. Grady reacts with "this is huge." It's not huge. It's a file format conversion in a Web Worker. But it unblocks 38% of our customers from using the product page, so in revenue terms, he's actually right.
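
The worker handoff is what keeps the UI responsive. Stripped down, it looks like this; `convertToJpeg` and the UI helpers are stand-ins, and whether a given HEIC library actually runs inside a worker is something you have to verify before committing to this:

```ts
// worker.ts -- the conversion runs off the main thread.
declare function convertToJpeg(file: File): Promise<Blob>; // stand-in

self.onmessage = async (event: MessageEvent<File>) => {
  const jpeg = await convertToJpeg(event.data);
  self.postMessage(jpeg);
};

// main.ts -- hand the file off, keep the spinner rendering.
declare function hideSpinner(): void; // stand-in UI helpers
declare function showPreview(blob: Blob): void;
declare const heicFile: File;

const worker = new Worker(new URL('./worker.ts', import.meta.url), {
  type: 'module',
});
worker.onmessage = (event: MessageEvent<Blob>) => {
  hideSpinner();
  showPreview(event.data);
};
worker.postMessage(heicFile); // File objects survive the structured clone
```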

1:30 PM

I start working on the frame selector. This is the other main piece of the customization page. Customers pick from 8 frame options: black, white, natural wood, walnut, gold, silver, floating, and frameless. Each frame has a different width and profile. Avi, our designer, sent the Figma mockup two weeks ago. It's beautiful. Each frame is rendered in 3D perspective with a subtle shadow. On desktop, the frames are arranged in a 4x2 grid with hover animations. On mobile, it's a horizontal scrolling carousel. I've been working from this mockup and the desktop version is about 80% done. The hover animations look close to the Figma. The mobile carousel scrolls smoothly. I'm fairly happy with it.

3:07 PM

Avi sends a Figma link in Slack with the message "small tweak to the frame selector!" I open it. He's completely redesigned the mobile layout. Instead of a horizontal carousel, the frames are now in a vertical masonry grid with different sizes based on popularity. The most popular frames are larger. There's also a new "compare" mode where you can tap two frames and see them side by side on your pet photo. I stare at it for about 30 seconds. This is not a small tweak. This is a full redesign of the mobile experience. The compare feature alone is probably two days of work. I check the sprint deadline. It's Friday. I have two days left and this "small tweak" would take two days if I dropped everything else. I message Avi: "This looks great, but the scope here is bigger than the current sprint. Can we ship the carousel version this week and do the masonry + compare in the next sprint?" He says "yeah totally, just thought I'd share the vision." Avi is a talented designer. He also lives in a world where changing a layout in Figma takes 20 minutes, which makes it hard for him to estimate that the same change in code takes 20 hours.

About 38% of our customers are on iPhones. This is not a small problem.
Liz
3:45 PM

I go back to the frame selector. I'm working on the CSS for the selected state. When you click a frame, it gets a teal border and a checkmark in the corner. I test it across Chrome, Firefox, and Safari. It works everywhere. Small victory. I add a price badge to each frame option. Three of the frames are the same price, the gold and silver are $15 more, and the floating frame is $25 more. The price displays as a small tag in the bottom-right corner of each frame thumbnail. June reviews my PR mid-afternoon and leaves two comments. One is about an alt text description I missed on the frame images, the other is about a CSS transition that's 300ms and she thinks 200ms would feel snappier. She's right on both counts.

4:30 PM

I push the fixes from June's review and the PR is approved. I merge it into the development branch. The HEIC fix and the frame selector will go out in Friday's deploy. I write a short summary in our product Slack channel explaining what's changing and tag Grady. He replies "love it" and then immediately follows up with "what about live preview tho." June and I both ignore this. We'll handle it at next week's planning meeting. Some ideas need to marinate. Some need to be gently forgotten. The skill is knowing which is which.

5:00 PM

I close my laptop and go for a run in the neighborhood. Silver Lake has these steep side streets that are good for interval training and bad for everything else. I run for 25 minutes and try to clear my head. It mostly works, except for the three minutes where I think about whether the Web Worker properly handles the case where a user cancels the upload mid-conversion. I don't think it does. I make a mental note. I'll fix it tomorrow.

7:30 PM

Toni gets home. She looks exhausted. She was on set for 13 hours. She asks what I did today and I say "I spent two hours making iPhones upload photos correctly." She says "that doesn't sound like it should take two hours." I say "it shouldn't." We order Thai food and watch a movie. I fall asleep on the couch before it ends, which is becoming a pattern I should probably address but won't.


Rex's Thursday

Rex · 36 · Thursday
Senior backend developer at a fintech company in Chicago, Illinois
10 years in web dev · Hybrid (3 days office, 2 remote)

6:30 AM

Up. I shower, get dressed, and pack lunch. It's the same lunch I bring every day: chicken, rice, and hot sauce in a glass container. My coworkers have commented on this. Harris, the junior dev I mentor, once asked me if I ever get bored of it. I said "do you get bored of brushing your teeth?" He didn't have a response to that. The truth is I figured out a meal that's cheap, healthy, and takes 10 minutes to prep the night before. Optimizing it further would mean thinking about lunch, and the whole point is that I don't think about lunch.

7:15 AM

I take the Brown Line to the office. It's a 25-minute ride. I read on my phone. Right now I'm reading a blog post about distributed systems that I bookmarked three weeks ago and finally opened. The post is about exactly the problem I'm solving at work this week, which is webhook delivery guarantees. You send a webhook to a customer's server. The server might be down. It might be slow. It might return a 500. You need to retry. But how many times? How long between retries? What if the customer's server is down for three days? This is the kind of problem that sounds simple and is actually a small nightmare.

8:00 AM

Office. We're a fintech company, about 150 people. We build a payment processing platform for SaaS companies. When one of our customers' users pays a subscription fee, that transaction flows through our system. We handle the money movement, the invoicing, the tax calculation, and the webhooks that notify the customer's backend that a payment happened. I'm on the integrations team. My tech stack is Python and Django for the backend, PostgreSQL for the primary database, Redis for caching and job queues, and everything runs on Kubernetes on Google Cloud. I sit at my desk, which is in a corner of the third floor near the window. The view is a parking garage. It's not inspiring, but the natural light is decent.

8:15 AM

Standup with my team. There are 6 of us: me, Harris, two other backend engineers, Yvonne who's our engineering manager, and Fern who's the product director. Fern gives a quick product update. One of our biggest customers is integrating our webhooks into their billing system and they've asked for guaranteed delivery with configurable retry policies. That's exactly what I'm building this week. I give my update: exponential backoff logic is about 60% done, planning to have a working prototype by end of day Friday. Yvonne asks if I need anything. I say I need 20 minutes with Harris after standup to unblock him on a migration issue. She nods.

8:45 AM

I sit down with Harris at his desk. He's 24, about a year and a half into his career. Smart, eager, asks good questions. He's trying to run a database migration on the staging environment and it keeps timing out after 30 seconds. He shows me the migration file. It's adding an index to the webhook_events table. I ask him how many rows are in that table on staging. He checks. 47 million rows. That's the problem. A plain CREATE INDEX on a table with 47 million rows locks the table against writes while the index builds, and on staging the timeout is 30 seconds; 47 million rows takes a lot longer than 30 seconds to index. I show him the solution: CREATE INDEX CONCURRENTLY. It takes longer, but it builds the index without blocking reads or writes. I show him how to mark the Django migration as non-atomic and run the statement through RunSQL, since CONCURRENTLY can't run inside a transaction. He types it up, runs it, and the migration starts. It'll take about 8 minutes. He says "that's it?" I say "that's it." He writes it down. I watch him write it in a notebook, which is something I used to do when I was starting out. I don't tell him that. I just like that he does it.

9:30 AM

I open my branch and get into the webhook retry logic. The idea is simple. When we send a webhook and it fails, we retry. The retry intervals increase exponentially: 1 minute, 5 minutes, 30 minutes, 2 hours, 8 hours, 24 hours. After 6 failed attempts over roughly 34 hours, we stop and mark the webhook as failed. The customer can see the failure in their dashboard and manually replay it. I'm writing this in Python using Celery for the task queue. Each webhook attempt is a Celery task. When it fails, it schedules the next attempt with the appropriate delay. The logic itself is about 80 lines of Python. I spend most of the morning on the edge cases. What if the customer's server returns a 301 redirect? Follow it, but only once. What if it returns a 200 but the response body says "error"? Treat it as a success, because we can only judge by HTTP status codes. What if the connection times out after 15 seconds? Count it as a failure and retry.
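
The real implementation is Python and Celery, but the schedule logic is small enough to sketch in a few lines (TypeScript here purely for illustration):

```ts
// Delay before each retry: 1m, 5m, 30m, 2h, 8h, 24h (~34.6h in total).
const RETRY_DELAYS_MIN = [1, 5, 30, 120, 480, 1440];

// Delay in ms before the next attempt, or null once the schedule is
// exhausted and the webhook should be marked as failed.
function nextRetryMs(failedAttempts: number): number | null {
  const minutes = RETRY_DELAYS_MIN[failedAttempts - 1];
  return minutes === undefined ? null : minutes * 60_000;
}

// Only the status code decides success; a 200 with "error" in the
// body still counts, because the body is the customer's business.
const isSuccess = (status: number) => status >= 200 && status < 300;
```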

11:00 AM

Coffee break. I go to the kitchen on our floor and pour a cup from the drip machine. It's not great coffee but it's free and hot. Bo, our SRE, is in the kitchen microwaving something that smells like fish. Bo runs the on-call rotation and has a gift for appearing calm during incidents that would make most people sweat. I ask him how the on-call week is going. He says "quiet, which makes me nervous." This is an SRE joke. Quiet weeks mean the next incident is building up pressure somewhere. We talk about the Kubernetes cluster upgrade that's scheduled for next week. He says it should be straightforward. We both know he's saying that to convince himself as much as me.

11:45 AM

My sister Maggie calls. She wants to plan our mom's birthday dinner for next Saturday. I step into the hallway to talk. Maggie wants to do a restaurant. I say fine. She asks if I have any suggestions. I say anywhere that takes reservations. She says "you're so helpful." She's not wrong that I'm being unhelpful, but I'm also in the middle of debugging a Celery task that's not rescheduling correctly, and restaurant opinions feel like a lot to ask right now. We agree she'll pick the restaurant and I'll bring the cake. That's a fair trade. I'm better at cake than opinions.

12:00 PM

Lunch at my desk. Chicken, rice, hot sauce. I eat it in about 8 minutes while reading Harris's pull request. He's adding a new column to the webhook_events table to store the HTTP response code from each delivery attempt. The migration looks right. The model change looks right. The API serializer includes the new field. I leave one comment: he should add a database-level default value so existing rows don't need to be backfilled. He responds in about 5 minutes with a fix. I approve the PR.

1:00 PM

Afternoon. I'm writing tests for the retry logic. I use pytest and mock the HTTP calls to the customer's server. One test simulates a server that returns 500 five times in a row and then 200 on the sixth attempt. The retry system should keep going until it succeeds. Another test simulates a server that's permanently down. The system should retry 6 times and then give up. A third test simulates a server that returns 200 on the first try. No retries needed. I write 11 tests in total. They all pass. Writing tests for async code with Celery is tedious because you have to mock the task scheduling and manually advance the clock. But the alternative is deploying untested retry logic to a payment processing system, which is the kind of thing that ends careers.
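
Jest-style sketches of those three scenarios, reusing the helpers above (the real suite is pytest, with the Celery scheduling mocked and the clock advanced manually):

```ts
// Walk the retry schedule against a scripted sequence of status codes.
function deliver(statuses: number[]): 'delivered' | 'failed' {
  for (let attempt = 1; ; attempt++) {
    if (isSuccess(statuses[attempt - 1])) return 'delivered';
    if (nextRetryMs(attempt) === null) return 'failed';
  }
}

test('keeps retrying until the server recovers', () => {
  expect(deliver([500, 500, 500, 500, 500, 200])).toBe('delivered');
});

test('gives up on a permanently down server', () => {
  expect(deliver(new Array(10).fill(500))).toBe('failed');
});

test('no retries on a first-try success', () => {
  expect(deliver([200])).toBe('delivered');
});
```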

2:30 PM

I do a round of manual testing in our staging environment. I set up a test webhook endpoint that returns 500 for the first 3 attempts and 200 after that. I trigger a webhook and watch the retry system work. First attempt: fails, retry scheduled in 1 minute. One minute later: fails, retry in 5 minutes. Five minutes later: fails, retry in 30 minutes. Thirty minutes later: succeeds. The dashboard shows the full delivery history with timestamps and response codes. It works. I take a screenshot for the PR description.

This is the kind of problem that sounds simple and is actually a small nightmare.
Rex
3:15 PM

I'm about to open my PR when my laptop buzzes. PagerDuty alert. The Redis cache on production is at 92% memory capacity. The threshold for alerting is 90%. Bo messages me on Slack almost immediately: "You seeing this?" I am. We both pull up the Redis monitoring dashboard in Grafana. Memory usage has been climbing steadily for about 4 hours. It was at 71% at 11 AM and it's been rising in a straight line. Something is writing data faster than the TTL is expiring it. Bo runs a Redis command to find the largest keys. There's one key that's 180 megabytes. 180 megabytes in a single Redis key. For context, most of our cache keys are under 1 kilobyte. This key is called "report_cache:quarterly_summary" and it's storing a serialized JSON blob that looks like an aggregated financial report.

3:30 PM

Bo and I trace the key back to a report generation job that was added about 6 months ago. It pre-computes a quarterly financial summary and caches it in Redis so the dashboard loads fast. The problem is that nobody set a TTL on the key, and the report runs every hour and appends to the existing data instead of replacing it. So every hour, the key gets bigger. It's been growing for 6 months. 180 megabytes of accumulated quarterly summaries, stacked on top of each other like a geological record of our company's revenue. Bo says "who wrote this?" I search the git history. The commit was from a developer who left 4 months ago. No code review comments. No tests. No TTL. A single cached key, growing forever, written by someone who's no longer here to explain it. This is the kind of thing you find in every codebase that's more than a year old. Not malice, just a reasonable decision that nobody revisited.

4:00 PM

I delete the key from Redis. Memory drops from 92% to 74% immediately. Then I open the code for the report generation job and add a TTL of 24 hours. I also fix the logic so it replaces the cache entry instead of appending. The fix is 3 lines of code. I add a comment above the cache write that says "TTL required, see incident 2026-04-07." Bo closes the PagerDuty alert. The whole thing took about 45 minutes. Not a crisis, but another 8 hours and we'd have hit 100% and Redis would start evicting random keys, which on a payment processing platform would have been a very bad afternoon.
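
The job is Python, but the shape of the fix is the same in any Redis client: write with SET so the old value is replaced, and attach an expiry. In ioredis terms, with the key name from the incident and everything else illustrative:

```ts
import Redis from 'ioredis';

const redis = new Redis(process.env.REDIS_URL ?? 'redis://localhost:6379');
const ONE_DAY_SECONDS = 24 * 60 * 60;

async function cacheQuarterlySummary(report: unknown): Promise<void> {
  // TTL required, see incident 2026-04-07. SET overwrites the previous
  // value instead of appending, and EX expires the key after 24 hours.
  await redis.set(
    'report_cache:quarterly_summary',
    JSON.stringify(report),
    'EX',
    ONE_DAY_SECONDS,
  );
}
```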

4:30 PM

I push my webhook retry PR and my Redis fix PR. Two pull requests in one day, one planned and one unexpected. That's a pretty normal ratio. I'd guess about 30% of my workweeks are spent on things I didn't expect to be working on Monday morning. You plan the work and then the system tells you what actually needs doing. Yvonne messages me and says "thanks for jumping on the Redis thing." I say "Bo found it, I just knew where to look." She says "that's what senior means." I'm not sure she's right about that, but it's a nice thing to say.

5:00 PM

I pack up. Chicken container goes back in the bag. Laptop stays at the office. I take the Brown Line home. The train is crowded at this hour. I stand near the doors and stare out the window at the buildings going past. A woman next to me is watching a cooking video on her phone with the volume on. I put in my earbuds and listen to nothing. Sometimes the best thing about the commute home is 25 minutes of not solving problems.

6:30 PM

Home. I heat up leftover pasta and eat it standing in the kitchen, which is a bachelor habit I should break but probably won't. Maggie texts: "Mom's dinner is at Osteria Langhe, Saturday at 7. You're on cake duty." I reply "got it." I sit on the couch and think about the 180-megabyte Redis key. Six months of a cache growing silently, one hourly append at a time. Nobody noticed because it worked. The dashboard loaded fast. The reports were accurate. Everything was fine until the memory hit 92%. That's the thing about backend work. Most of what you build is invisible. It runs in a data center somewhere, processing transactions at 3 AM, retrying failed webhooks, expiring cache keys. Nobody sees it when it works. Everyone sees it when it doesn't. I turn on the TV and stop thinking about Redis.


Frequently Asked Questions

What does a web developer do on a typical day?

Most web developers spend their day writing code, reviewing pull requests, debugging issues, and attending meetings like daily standups and sprint planning. Frontend developers focus on building user interfaces, fixing cross-browser bugs, and translating design mockups into working pages. Backend developers write server logic, design database queries, build APIs, and handle infrastructure concerns like caching and deployment. Fullstack developers do both. A significant part of the day also goes to reading existing code, searching documentation, and communicating with designers, product managers, and other engineers over Slack or in video calls.

How many hours do web developers work?

Most web developers work 7 to 9 hours per day. Remote developers often have flexible schedules and may start between 8:00 and 10:00 AM, while those in hybrid or office roles typically follow a 9:00 AM to 5:00 or 6:00 PM schedule. Crunch periods around product launches or urgent bug fixes can push hours higher, but sustained overtime is less common than in other tech roles like game development. On-call rotations for production incidents exist at some companies, especially for backend and infrastructure engineers, and those weeks can mean interrupted evenings or weekends.