I spent a Saturday on HackTheBox. I'd been reading security write-ups for months - the kind where someone walks through compromising a machine step by step, explaining the reasoning at each stage. The write-ups made it look methodical. Find the open ports, identify the services, look for known vulnerabilities or misconfigurations, get a shell, escalate privileges, capture the flag. A process.
I picked a beginner-rated machine and assumed "beginner" meant I'd be able to make progress without too much struggle. I was wrong about what "beginner" means in this context.
What enumeration actually involves
The first phase is enumeration - mapping what's running on the target. I knew this conceptually. Run a port scan, see what services are listening, figure out what versions they're running. In practice, I spent the first hour not knowing what to do with the output I was getting.
The port scan came back with a few open ports: 80 (HTTP), 22 (SSH), and a couple of others I didn't immediately recognize. I knew HTTP and SSH. The others I had to look up. One of them turned out to be a service with a known vulnerability in a specific version - but I didn't know to check the version, and I didn't know that checking the version was the next step. I kept looking at the HTTP service because that's what I knew.
What I was missing wasn't technical knowledge exactly. I knew what a port was. I knew what a service was. I knew what a CVE was. What I was missing was the mental model of how these things connect - the instinct that says "this port, this service, check the version, cross-reference against known vulnerabilities." That instinct comes from repetition. You can't read your way to it.
The person who wrote the write-up I eventually looked at saw the same output I saw and immediately knew the attack path. Not because they knew more facts than I did, but because they'd seen that pattern dozens of times. The open port was a signal to them. To me it was just a number.
The privilege escalation part
I eventually got a low-privilege shell on the machine - with significant help from the write-up. The next step was privilege escalation: finding a way to go from a limited user account to root. This is where I found the conceptual gap most interesting.
The technique that worked involved a misconfigured SUID binary - a program that runs with elevated permissions regardless of who executes it. The specific binary had a known way to spawn a shell, and because it ran as root, the shell it spawned was a root shell. I understood why this worked once I read the explanation. The binary runs as root, it can be made to execute arbitrary commands, therefore you can execute arbitrary commands as root. The logic is clear.
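The mechanism behind this can be made concrete. A process has both a real UID (who invoked it) and an effective UID (whose permissions it acts with); a root-owned SUID binary keeps your real UID but gets an effective UID of 0, which is why a shell spawned from inside it is a root shell. A small sketch of inspecting both:

```python
import os

def describe_uids() -> tuple[int, int]:
    """Return (real_uid, effective_uid) for the current process.

    Under a normal invocation the two are equal. Inside a root-owned
    SUID program, real_uid stays the invoking user's ID while
    effective_uid becomes 0 - so anything the program can be tricked
    into executing runs with root's permissions.
    """
    return os.getuid(), os.geteuid()

real, effective = describe_uids()
print(f"real uid={real}, effective uid={effective}")
```

Run as an ordinary script this prints two identical numbers; the exploit class exists precisely for the cases where they differ.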
What I couldn't have found on my own was the binary. There are dozens of SUID binaries on a typical Linux system. Most of them are there for legitimate reasons and aren't exploitable. Knowing which ones to look at, and knowing the specific techniques for each one, is a body of knowledge that takes time to build. I knew the concept of SUID. I didn't know the catalog of exploitable binaries and their techniques.
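The cataloging step itself - listing which SUID binaries exist before deciding which are worth a closer look - is mechanical, and a write-up would typically do it with `find / -perm -4000`. A rough Python equivalent, for illustration:

```python
import os
import stat

def find_suid(root: str = "/usr/bin") -> list[str]:
    """Walk a directory tree and list regular files with the SUID bit set."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue  # unreadable entries are common when scanning from /
            if stat.S_ISREG(mode) and mode & stat.S_ISUID:
                hits.append(path)
    return sorted(hits)

# Typical output includes passwd, sudo, and mount - all legitimately SUID.
# The list is the easy part; knowing which entry shouldn't be there, and
# what to do with it, is the catalog that takes time to build.
```

Resources like GTFOBins exist precisely to map binary names to their known shell-spawning techniques - that mapping is the body of knowledge I didn't have.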
What I actually took away
I finished the machine with the write-up open the whole time. It felt less like solving a puzzle and more like following a recipe I didn't fully understand. But I came away with something more concrete than I'd had before.
Reading security write-ups, I'd understood the concepts abstractly. After spending a Saturday actually doing it badly, I understood the texture of the work differently. Enumeration isn't just "scan the ports" - it's a systematic process of building a picture of the attack surface, and the skill is knowing what each piece of information implies about what to look for next. Privilege escalation isn't just "find a misconfiguration" - it's knowing the specific misconfigurations that are common and exploitable, which is a catalog you build through practice.
The gap between knowing the concepts and being able to execute is larger than I'd appreciated. This is true in most technical domains, but security is particularly unforgiving about it because the feedback loop is binary. Either you get the shell or you don't. There's no partial credit for understanding the theory.
Why this changed how I think about security requirements
I'm not going to become a security researcher. That's not the point. The point is that spending a Saturday genuinely struggling with a beginner machine gave me a more grounded intuition about what security work actually involves.
When I write specs now and I'm thinking about attack surface, I think about it differently. Not "we should make this secure" but "what does an attacker see when they enumerate this service? What version information are we exposing? What does the error message reveal about the underlying technology?" Those are questions I can ask because I spent a few hours trying to answer them from the attacker's side, badly.
It also gave me more respect for the security engineers I work with. The pattern recognition they have - the ability to look at a system and immediately see the likely attack paths - is a skill that took years to build. When they flag something as a security concern, I take it seriously in a way I might not have before I understood how much expertise goes into identifying those concerns in the first place.