What Technical Writing Is Actually Like
We talked to three technical writers. One is the first TW hire at a Kubernetes tooling startup in Portland who found out a feature shipped by seeing it in the deploy channel. One has been at the same enterprise ERP company in Austin for eleven years and owns a printed list of every banned phrase he has caught in other teams' documents since 2019. One works at a 14-person health tech startup in Chicago and has 94 screenshots in a folder called "Hall of Shame." Same job title. Very different days.
These characters are composites, built from dozens of real accounts, interviews, and community threads. The people aren't real. The experiences are.
What you'll learn
- What technical writers actually do across startup, enterprise, and health tech settings, beyond "they write the manual"
- How much of the job is writing versus investigating, negotiating, and chasing engineers for information
- The real differences between being the only TW and being one of dozens, beyond team size
- What prior careers do and don't transfer into technical writing, from someone who came from engineering and someone who came from support
What It's Like Being the Only Technical Writer at a Developer Tooling Startup
Faye
What does your day actually look like?
It depends on where we are in the release cycle. When we're two weeks out from a feature drop, I'm basically a detective. I'm in the GitHub repo, reading through issues and commit messages, trying to piece together what the feature does before I can write a word about it. We ship a Kubernetes management tool, so the audience is developers and DevOps engineers. They will notice immediately if I document a flag that works differently in namespace isolation mode than I said it would. So I have to actually understand it before I write about it. Which sounds obvious, but the part nobody prepares you for is how hard it is to get that information in the first place.
Last week, we shipped a new CLI flag called --cluster-scope. It changes how namespaced resources propagate across a cluster. I'd been watching the GitHub issue for about ten days. Dmitri, the senior backend engineer who owns the feature, had written a three-sentence description in the issue. Three sentences. For something that has different behavior depending on whether you're in standard mode, namespace isolation mode, or federated mode. That's three different docs, essentially, and he wrote three sentences. I Slacked him twice to set up a walkthrough. He's always busy and he's not unresponsive, he just works on a different calendar than I do. I was still waiting for him to confirm the third mode's behavior when I saw the deployment message hit the engineering channel at 10:14 AM.
The feature had shipped. I found out from Slack the same way everyone else did.
That's a rough way to learn about a release.
I've been at this company three years and it still happens. Not every time. But often enough that I've built a version of my workflow around it. I have a draft in progress for almost every active GitHub issue labeled "doc needed." So when something ships faster than expected, I have at least a skeleton I can build from. With the Dmitri situation, I had two of the three modes documented. I shipped those with a note in the third section that said the behavior in federated mode was being confirmed and would be updated within 48 hours. Which, you know. Not ideal. But "48 hours to fill one section" is better than "entire doc missing," which is the alternative.
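Faye's skeleton habit is easy to picture as a small script. The sketch below is purely illustrative, not her actual tooling: the "doc needed" label comes from her account, but the issue shape, function names, and skeleton sections are all assumptions.

```python
# Hypothetical sketch of a "skeleton draft per doc-needed issue" workflow.
# The issue dicts mimic what a GitHub API client might return; the label
# name is from Faye's account, everything else is assumed for illustration.

def skeleton_for_issue(number: int, title: str) -> str:
    """Return a Markdown draft skeleton for one feature's documentation."""
    slug = "".join(c if c.isalnum() else "-" for c in title.lower()).strip("-")
    return (
        f"# {title}\n\n"
        f"<!-- draft skeleton for issue #{number} ({slug}) -->\n\n"
        "## What it does\n\nTBD - confirm with the feature owner.\n\n"
        "## Behavior by mode\n\nTBD - verify each mode in staging.\n\n"
        "## Examples\n\nTBD - capture from a staging run.\n"
    )

def skeletons(issues: list) -> dict:
    """Build skeletons for every issue carrying the 'doc needed' label."""
    return {
        issue["number"]: skeleton_for_issue(issue["number"], issue["title"])
        for issue in issues
        if "doc needed" in issue.get("labels", [])
    }
```

The point of the sketch is the shape of the habit, not the code: when a feature ships early, the writer starts from a structured outline with the unknowns already marked "TBD," which is exactly what let her ship two of three modes on release day.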
The PM on this feature is Soo-Jin. She and I have a standing tension around timing that I don't think she fully registers as tension. She gives release dates. I get 48-hour notice. Once she told me on a Tuesday morning that a new feature was going out Thursday and she needed the release notes by Wednesday at noon. The release notes, plus the updated API reference, plus the CLI man page. I told her that was two days of work minimum. She said "you've been watching the ticket, though, right?" Like watching the ticket is the same as writing the docs. The docs aren't the ticket, Soo-Jin. The docs are what I write after I understand the thing the ticket describes, which takes time that is separate from watching the ticket.
How do you actually learn what a feature does?
Multiple sources, layered together. The GitHub issue is the starting point. Then the PR description, which is sometimes more detailed than the issue and sometimes less. Then I go into the staging environment and actually use the feature. For something like --cluster-scope, that means spinning up a test cluster, applying the flag, and observing what happens across different resource types. That takes setup time. Our staging environment has gotten better since we brought on a dedicated DevOps person, but for a long time I was setting up my own test clusters in a personal GKE account, which I was technically paying for, which is a whole other thing.
After I've run it myself, I'll Slack the engineer with specific questions. Not "can you explain this feature," because that gets a three-sentence response. More like "I'm seeing that ServiceAccounts in namespace-A aren't propagating when I run --cluster-scope with the ns-isolation flag. Is that expected?" That kind of question gets a real answer, because it tells the engineer I've already done the work and I just need confirmation, not a tutorial. Dmitri responds faster to those. He's not trying to be unhelpful. He just has no framework for how long documentation takes because he's never written it.
You were a support specialist before this. How much does that background actually matter?
It's the most useful thing I brought to this job. Support specialists know exactly where users break. Not where engineers think users will break. Where they actually break. Those are almost never the same place. When Dmitri writes documentation, he starts with the architecture. He explains how the system is designed. Which is interesting if you're another engineer. If you're a DevOps person who inherited a cluster setup you didn't build and you're trying to figure out why namespaced resources aren't propagating, you don't need the architecture. You need: here are the three things that cause this, check them in this order.
My brother Elliot is a school librarian in Eugene. We've had this conversation where he explains that good reference documentation and good instructional documentation are completely different things. He's right. Reference docs are for people who know what they're looking for and need the exact syntax. Instructional docs are for people who know what they want to do and need a path to get there. Most of our documentation is one or the other and it's important to know which one you're writing. I came from support knowing which questions users actually asked, which turns out to matter a lot for knowing which type of doc they needed.
Every technical writer we talked to named a part of the job nobody prepares you for. What's yours?
You almost never know if your work helped anyone. I write docs, they go live, and... that's it. Occasionally someone mentions in Slack that the getting-started guide is solid, or a developer in a community forum links to our API reference and says it's actually good. Those moments are genuinely nice. But mostly it's invisible. If the docs are good, nobody notices. They just use them and move on. If the docs are bad, I find out from a support ticket or a frustrated forum post or Soo-Jin forwarding me an email from a customer saying the CLI reference was confusing.
My friend Becca works in customer success at a different company. She has a metric: customer health score. It goes up or down, she can see it, she has weekly check-ins where she can measure her own impact. I write something and it disappears into the product. I can look at page views in the documentation portal. Dmitri's flag docs have gotten about 340 views in the last 30 days. Is that good? Is that the right 340 people? Did they find what they needed? I have no idea. In support, every closed ticket is a visible win. In TW, you're writing for a person you'll never meet, solving a problem you'll never see resolved. That gap, between the work and any evidence the work mattered, is harder than I expected.
What It's Like Being a Senior Technical Writer at an Enterprise Software Company
Colin
You were a mechanical engineer. That's not the obvious path into technical writing.
No. The thing is, I spent eight years designing HVAC systems for commercial buildings. Ductwork, air handlers, chiller plants, all of it. And probably 40% of my actual time was writing, not designing. Specifications, procedure manuals, installation sequences, commissioning documents. Every system I designed had to be documented so that the installers, the building operators, the maintenance people could actually use it. The documentation was part of the deliverable. It wasn't separate.
When I decided to leave engineering, I looked at what I'd been doing with most of my days and I realized I'd been writing technical documentation for eight years. I just hadn't been calling it that. Took a technical communication course at Austin Community College, mostly to get the vocabulary. Applied for an internal role at Caldwell, which at the time was an ERP company I'd actually used in my engineering job. They hired me because I could read a spec sheet without needing it explained. That turned out to matter a lot.
What does the enterprise version of this job look like?
It's process-heavy in a way that startup writers sometimes can't imagine. I'm on the inventory management module, which is one piece of our ERP software that mid-size manufacturers use to run their operations. My primary contact on the engineering side is Vikram. He's been here nine years. He's good at what he does. He is not a natural communicator.
Last Tuesday, Caldwell shipped v14.2 of the inventory variance reporting module. Twenty-two UI screens changed. Vikram sent me a Jira ticket at 8:47 in the morning that said "docs need to reflect new UI." That's it. No diff. No list of what changed. No screenshots. The release notes said "improved reporting interface and enhanced filter options." Enhanced. That's one of the words on my list. I have 312 instances of "enhanced" logged. But I digress.
The spec for the release is 400 pages. Not all of it touches the UI. Figuring out which parts changed required me to open the live demo environment, pull up the old version of the docs, and go screen by screen comparing what I documented to what I was now looking at. I spent about two hours doing that. Found 14 screens with meaningful changes. Three of those had new filter fields that weren't in the spec at all, which means they were added during development after the spec was locked, which is a thing that happens more often than Vikram would admit if you asked him directly.
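The two-hour comparison Colin describes is essentially a set difference per screen: documented fields versus fields visible in the live demo. A minimal sketch, with every screen and field name invented for illustration rather than taken from Caldwell's product:

```python
# Hypothetical sketch of a screen-by-screen docs-vs-live comparison.
# Inputs map screen name -> set of field names; all names are illustrative.

def changed_screens(documented: dict, live: dict) -> dict:
    """Report, per screen, which fields were added or removed in the live UI."""
    report = {}
    for screen in sorted(set(documented) | set(live)):
        old = documented.get(screen, set())
        new = live.get(screen, set())
        added, removed = new - old, old - new
        if added or removed:
            # Only screens with a meaningful change land in the report.
            report[screen] = {"added": added, "removed": removed}
    return report
```

In practice the hard part is producing the inputs, which is why it took two hours of clicking through the demo environment; but once you have both lists, the screens needing doc updates fall out mechanically, including fields that were added after the spec was locked.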
How do you handle the gaps, when the product and the spec don't match?
You document what the product does, not what the spec says it was supposed to do. That seems obvious but it creates a very specific tension. The engineer wrote the spec. The engineer is now telling you the product is correct and the spec will be updated "in the next cycle." My documentation manager Lupe has been here 15 years and she told me, years ago, that the spec is a historical document by the time it's finalized. You document the software as it ships, not as it was planned.
Lupe still prints draft copies of documentation and marks them up in purple ink. Every writer on our team gets a purple-ink review before anything goes to Vikram. She's caught things I missed. She once found a procedure I'd written where step 7 would cause a cascading reprocess of all open purchase orders if you ran it before completing step 6. Not a note in the spec. Just behavior I'd missed because I'd been testing in isolation. She circled it and wrote "DATA LOSS RISK" in purple caps. I fixed it. If I'd been an engineer on that feature, I might have missed it because I'd know not to run step 7 first. When you write documentation, you have to think like someone who doesn't know what you know.
You said you think like an engineer who worries about what fails. What does that mean in practice for documentation?
When I write a warning, I mean it the way I used to mean "this bolt torque is not optional." I designed HVAC systems for commercial buildings. If I spec'd the wrong torque on a structural mount and someone installed it wrong and the wall failed, that was on my documentation. Not on the installer. My doc. I've never lost that. When I write "do not run this report while a variance audit is active or you may overwrite pending adjustments," that warning is there because I traced the exact sequence of events that leads to data corruption and I know what it costs the customer when it happens. That specificity comes from having once cared about physical consequences.
Brandon, the junior TW I'm mentoring, he came straight out of college with an English degree. He writes beautifully. His prose is better than mine. But his first draft of a warning read "exercise caution when running reports during active audit cycles." Exercise caution. That tells the user nothing. It doesn't tell them what to watch for, what the failure mode is, or how to avoid it. I asked him: what actually happens if they run the report? He said he wasn't sure. That's the gap. You can't write a useful warning until you've run the bad sequence yourself and watched what breaks.
After eleven years, how do you stay engaged with work that covers the same software you've been documenting for over a decade?
The product keeps changing, which helps. ERP software at this level is genuinely complex. The inventory module alone has 400 pages of spec per release because there's that much in it. There's always something new to learn, even if the surrounding context is familiar. But I won't pretend there aren't weeks where I'm updating the same section of the receiving workflow for the third time because we added another field to the UI and I'm very tired of looking at receiving workflows.
My wife Darlene teaches third grade. She comes home from work tired in a completely different way than I do. Her tiredness is physical and social. Mine is mental. We made a rule a few years back that we don't talk about work at dinner unless one of us is genuinely upset about something. It's helped. There's a version of this job where you take the tedium home and it follows you around, and there's a version where you close the laptop and leave it. I've mostly learned to leave it. The inventory variance reporting module does not need me at 8 PM. It will still be broken in the same ways tomorrow morning.
Every technical writer we talked to named a part of the job nobody prepares you for. What's yours?
How much of the job is translation, not writing. Not translation between languages. Translation between how an engineer thinks about a system and how a user experiences it. Those are different cognitive maps. Completely different. An engineer builds a mental model from the architecture outward. A user builds a model from the task inward. "I want to run a variance report" does not start from the data model. It starts from a business need. Getting from that business need to the correct sequence of UI interactions requires a translation layer that nobody explicitly designed. That translation layer is my job.
The hard part is that to do the translation, you have to hold both maps at once. You have to understand the architecture well enough to know why the UI works the way it does, and you have to understand the user's task well enough to know what path through the UI gets them there. Those are different skills and they sometimes pull in opposite directions. An engineer who's writing the spec will explain why the report runs against a point-in-time snapshot of inventory values. A user who needs the report doesn't need to understand the snapshot architecture. They need to know: run this before noon on the first business day of the month or the numbers reflect the prior cycle. Same information, completely different presentation. That's the job. Most people think the job is writing. It's not. The writing is the last part.
What It's Like Being the Sole Technical Writer at a Health Tech Startup
Harriet
Health tech startup documentation, what is that actually like?
It's like writing instructions for a cockpit while the plane is being assembled and someone keeps adding new instruments. Clarion makes scheduling and billing software for outpatient PT clinics. The people using it are clinic managers, front desk staff, and the PTs themselves. Not technical people. People who have 11 patients scheduled for Tuesday and need the billing integration to work before the clearinghouse submission deadline at 3 PM.
Our CEO Jasper is a former PT. He thinks about documentation from a clinic workflow perspective, which is genuinely useful. He'll look at a draft and say "a front desk person would never encounter this screen in this sequence," and he's right, because he spent years at a clinic and he knows how those days actually run. The engineering side is led by Adaeze. She's brilliant, she writes tickets in full sentences, and she draws a hard line at user documentation. Her view is that a well-designed product doesn't need a lot of explanation. She's not entirely wrong. But "well-designed" and "shipped in two weeks" are sometimes in tension, and when they're in tension, the documentation is what bridges it.
What happened last Monday?
OK so. We have a new billing integration. Medicare clearinghouse. It's a significant feature, PT clinics do a lot of Medicare billing, it's complicated. Jasper told me two weeks ago I'd have two weeks to write the onboarding guide. I had it blocked in my calendar. I was going to spend the first week learning the integration inside out, testing every edge case, and the second week writing.
That plan lasted five days. Sales demoed the feature to seven clinic owners over the weekend. This happens at startups. Sales moves fast, that's the point, and I genuinely don't begrudge them that. But three of those clinic owners asked the sales rep questions about the onboarding process, and the rep, who is very good at selling and less versed in the onboarding flow, made some commitments about how smooth the setup would be. Jasper called me at 8:30 Monday morning. Not to apologize, to coordinate. He's not a bad person. He just has a different clock than I do.
I had 36 hours to produce a complete onboarding guide for a Medicare clearinghouse integration that I had not finished testing. Adaeze was heads-down on a deadline. Our customer success person Cora, who would normally be my QA partner on onboarding docs, was handling a clinic that had a server migration issue. I spent Monday running the integration myself in the staging environment, taking screenshots at every step, noting the three places where the UI does something non-obvious, and writing as fast as I could without going so fast that I wrote something wrong.
Did you get it done?
A version of it, yes. I finished at 11:40 PM Monday. Sent it to Jasper with a note that said it covered the standard setup path and that edge cases for PECOS enrollment discrepancies and claims with secondary insurance were flagged as "coming in v1.1 of this guide." Which was my way of saying: this is not the whole guide, here's exactly what's missing, don't let anyone call that complete documentation. He sent back a thumbs-up emoji. I went to bed.
The thing I kept thinking about was entry 72 in my Hall of Shame folder. That's a screenshot from a dental software company I did freelance work for. A billing module. I was given three days to document a claims scrubbing feature. I did my best, but I didn't fully understand the sequencing around pre-authorization requirements in certain states. The doc went live. Three weeks later, a dental office in Mississippi submitted 47 claims that got rejected because of a sequencing error that my documentation had described incorrectly. The office had to resubmit everything. That took them two days and cost them real money in staff time. Nobody got fired. But I knew. That's in the folder.
You came from medical device sales. That's an unusual path to TW.
Sales taught me how to explain a product to someone who has absolutely no patience for the wrong level of detail. A clinic manager or a department head who's deciding whether to buy your device has about four minutes of real attention. You learn fast what level to start at and when to go deeper. That skill maps directly to onboarding documentation. You don't start at the data model. You start at: what does this person need to be able to do in the next 20 minutes. Then you back up and give them what they need to get there.
The other thing sales gave me was a tolerance for ambiguity about the product. When you're selling, you're representing the product as it is and also as it's about to be. You learn to work with incomplete information. Technical writers at startups need that. The product changes constantly. You write the docs for the thing that shipped, knowing that in six weeks Adaeze is going to come to you with a Jira ticket that says the entire claims submission flow has been redesigned and your guide is now about 40% wrong. You can't take that personally. You just update it and keep moving.
What's the hardest part of being the only TW?
No one to hand things off to and no one to catch your mistakes except you. Cora in customer success will catch errors when she uses the docs to walk through an onboarding call, which is valuable but comes after the docs have already been live for a week. Jasper occasionally reads docs with user eyes, which is also helpful. But there's no one whose entire job is reading my drafts with a critical eye before they go out. At a larger company with a TW team, there's peer review. You write something, a colleague reads it, and they catch the thing you assumed was obvious because you'd been staring at the feature for four days. I don't have that. My Hall of Shame folder is partly the result of being the only set of eyes on my own work.
My boyfriend Marco gets home from the restaurant at midnight. Some nights I'm still at my laptop when he walks in. He's a sous chef, so he understands the concept of "service never stops because you're tired." He doesn't really understand what I'm working on. When I try to explain that I'm documenting a Medicare clearinghouse integration, he asks what that means and I say it's basically the system that checks whether a claim is going to get paid before you submit it, and he says "so like a credit card preauth?" and I say yes, basically. He said the other night that every explanation I give him ends up being a food metaphor. That might be true. It might be a sales background thing. Either way, the Medicare guide has 12 screenshots and I'm not sure any of them are going to be accurate in six months.
Every technical writer we talked to named a part of the job nobody prepares you for. What's yours?
How much of the job is convincing people that documentation is a discipline and not a task. At startups especially, there's a mental model where "someone writes the docs" is a thing that happens, like "someone orders lunch." It's a task. Someone does it. It gets done. And then you move on. What I am actually doing is building a set of structured explanations that have to be accurate, consistent with each other, organized around how users think rather than how engineers think, and maintained as the product changes. That's not a task. That's a system. It requires planning, review, version control, and time. None of which is obvious when you hand a writer a feature ticket and ask when the docs will be ready.
Jasper gets it more than most. He told me once that he thinks of me as the person who tests whether the product makes sense. If I can't explain it, a clinic manager can't use it, and if they can't use it, the software has failed regardless of whether it works technically. That framing is correct and I appreciate it. But I'd like it to also mean I get two weeks when I'm promised two weeks. The Monday thing is going in the folder. Entry 95. Just as a reminder to myself, not as an indictment of anyone. I've been keeping the folder since 2019. It's the only performance review that tells me the truth.