How I Learned to Stop Critical Thinking and Love Security Defaults

by Washi

Most people get into hacking through curiosity.  I grew up in the early 2000s, in the infancy of the modern Internet but after phreaking's heyday.  Websites were set up by self-taught hobbyists and professionals alike, and security was never a design principle.

Default passwords, open directories, and unprotected networks were everywhere.  The barrier to entry was low and the results were immediate, even for a young kid.  Armed with a few common default passwords, anyone had a good chance of gaining unauthorized access without writing a single line of code or running any software.

One of the first significant moments I remember was at elementary school.  The vulnerability was roughly as follows: on a Windows 2000 lock screen, selecting Help and right-clicking on a button gave the option to print the help dialog pop-up.  After the print window appeared, pressing F1 launched the printer's dedicated help application.  From there, clicking File -> Open and browsing to My Computer dropped you onto the logged-in account's desktop, bypassing any authentication.

In less than a few minutes, I had access to our school's server.  I could see, delete, or install anything I wanted.  At that moment I felt like a king.  If this was so easy to do, how come everyone wasn't doing it?  If this flaw was there, what else could be found?  I realized that most hacking wasn't the work of geniuses; it was just prodding things outside of their expected behavior.

I used to own a PlayStation, and stores sold cheat CDs with codes that went far beyond anything built into the games themselves.  While cheating my characters to max level in Final Fantasy VII or taking no damage in Spyro was cool, it would be even cooler if I could learn how to do it myself.

Then I came across a tool that could do that for PC games: Cheat Engine.

For those who don't know, Cheat Engine is a memory scanner focused on computer games.  It has a simple interface: you input a value such as your health or level number, select the scan and value types, and click Scan.  The scan would usually return hundreds of addresses holding that value in memory.  It was then simply a case of changing that number legitimately in the game, such as by taking damage, entering the new value in Cheat Engine, and scanning again.  Values such as your X and Y coordinates weren't visible on the screen, so for those you would instead search repeatedly for increased or decreased values.  Eventually, you would be down to a single address - usually within a few minutes at most.  Now you could set up your memory breakpoints and manipulate the game to your heart's content.
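The narrowing scan described above can be sketched in a few lines.  This is only a toy simulation - the addresses, values, and the dictionary standing in for process memory are all made up for illustration; the real tool of course reads another process's address space, not a Python dict:

```python
# Toy simulation of a narrowing memory scan.
# 'memory' stands in for a process's address space.

def first_scan(memory, value):
    """Return every address whose current contents match 'value'."""
    return [addr for addr, v in memory.items() if v == value]

def next_scan(memory, candidates, value):
    """Keep only the candidates that still hold the (changed) value."""
    return [addr for addr in candidates if memory[addr] == value]

# Fake address space: player health (100) lives at 0x5000,
# but several other addresses coincidentally hold 100 too.
memory = {0x1000: 100, 0x2000: 7, 0x3000: 100, 0x4000: 100, 0x5000: 100}

hits = first_scan(memory, 100)      # four candidate addresses

# Take damage in-game: health drops to 83.  Only the real
# health address changes; the coincidental 100s stay put.
memory[0x5000] = 83

hits = next_scan(memory, hits, 83)  # narrowed to the one real address
print(hex(hits[0]))                 # prints 0x5000
```

Each legitimate in-game change halves the haystack or better, which is why a handful of rescans is usually enough to isolate one address.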

While it didn't seem like a huge skill back then, it fueled the beginning of a road to learning.  You could solder a mod chip onto a PlayStation board with just two wires to play backups from CDs you burned with Nero on your computer.  It was so easy.  You could pop one of the pins inside a Nintendo Entertainment System with a screwdriver to bypass its region lock.  There were so many things to absorb and learn.  I asked for a specific laptop for one of my birthdays with a compatible Wi-Fi adapter so I could run a BackTrack Live CD and go wardriving on my bike.  There was never any malicious intent; I just wanted to prove I could do it myself.

In a similar vein: when always-on Internet access wasn't yet an expectation for computer software, serial numbers were rarely verified online at the point of installation, if at all (FCKGW, stand up).  Instead, they relied on an internal algorithm to check validity.  These could be reverse engineered much like the PC games above, though you would need a more substantial toolset like OllyDbg or IDA Pro (the latter being a rite of passage to crack into a full copy before the trial was up) to step through the code as needed.

Armed with one of these tools, you could go at it in one of two common ways.  The first was to reverse engineer the algorithm that creates a valid serial number.  (For example, a well-known shareware IRC client of the early 2000s used to take the name of the user, ignore the first three letters, and map each remaining letter to its position in the alphabet combined with a hard-coded offset value.)  The other was to create a binary patch that skipped over the protection code entirely.
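A keygen for the name-based scheme in that example might look something like this.  The actual client, its offset, and how the result is formatted aren't specified, so the OFFSET constant and the sum-based serial below are hypothetical stand-ins for that style of check:

```python
# Hypothetical keygen in the style described above.  The real
# client's algorithm isn't given, so OFFSET and the serial format
# here are invented for illustration.

OFFSET = 7  # hypothetical hard-coded offset

def make_serial(name: str) -> int:
    """Skip the first three letters, sum alphabet positions plus offset."""
    total = 0
    for ch in name.lower()[3:]:
        if ch.isalpha():
            total += (ord(ch) - ord('a') + 1) + OFFSET
    return total

def check_serial(name: str, serial: int) -> bool:
    # What the protected binary effectively does: recompute and compare.
    # A keygen just runs the same recovered logic in reverse order of use.
    return make_serial(name) == serial

print(make_serial("washi"))  # prints 31: 'h' (8+7) + 'i' (9+7)
```

Once you've stepped through the check in a debugger and recovered logic like this, generating valid serials for any name is trivial - which is exactly why the second approach, patching the comparison out, was often even less work.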

While this is still somewhat applicable to modern software cracking, most license verification is now handled server-side, as Internet connectivity isn't the obstacle it once was.  Developers now do away with serial keys entirely in favor of online accounts with a limited number of installs, plus anti-debugging measures - generating unique hardware keys that dynamically decrypt code, running code through an isolated virtual environment, and obfuscating algorithms - all to make the code harder to trace.  Despite all of these controls, in the case of Denuvo DRM, a Bulgarian hacker known as Voksi found you could use a demo copy of a game to generate a legitimate Denuvo hardware key, which could be pulled out of memory and applied to a pirated copy.  (After the crack was made public, it worked for only a few days before it was patched.)

Skills are learned through the necessity of application.  Simply by understanding how protocols work and how systems interact with each other, you gain a far better grasp of how to secure your entire stack than any best-practice book or tutorial will give you.  Modern systems are so sandboxed that it's harder to accidentally break something and learn why it broke.  You don't need to set the IRQ number on hardware, edit config files to get games running, monitor your swap file, set jumpers on hard drives, or even download drivers anymore.  It's all handled for you automatically.  There's less opportunity for learning and tinkering, potentially gatekeeping a whole new generation of curious hackers from the fundamental skills they need to think for themselves.

Security by design is inherently a good thing.  However, the tradeoff is that people have lost the appreciation for why and how things work.  Corporations will deploy a Platform as a Service (PaaS) and configure it in some convoluted way because they're blindly following an article from someone's blog, guessing (or worse, asking AI), without ever questioning why they're doing things in a particular way.  The worst part is that built-in security defaults mean poor architecture decisions still work and are "secure" on the perimeter - enough to get a tick from a big pen-testing firm or to stop the box showing up in Shodan.  So these people think they've done a good job.  Software developers aren't immune either - while code scanners can find secret strings, they don't stop code from being implemented in a bad way.  I've seen many corporations with CyberArk set up so badly that it may as well not be there at all.  It's all security theater.

Flat networks still exist.  A VLAN that allows 0.0.0.0/0 inbound may as well not be segmented at all.  PHP that parses client-side input with no validation may as well hand over full directory traversal.  Default accounts may be disabled, but do you really expect the default passwords to have been changed first?  A corporation may have a SOC, but realistically, how well tuned are the alerts?
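The input-validation point is easy to make concrete.  Here's a minimal sketch in Python rather than PHP (the base directory and filenames are invented for illustration); the same pattern applies in any server-side language:

```python
# Minimal sketch of the traversal problem described above.
# BASE and the filenames are made up for illustration.
import os

BASE = "/var/www/uploads"

def resolve_naive(filename: str) -> str:
    # Vulnerable: trusts client input, so "../../etc/passwd"
    # walks straight out of the upload directory.
    return os.path.join(BASE, filename)

def resolve_checked(filename: str) -> str:
    # Resolve the full path, then refuse anything outside BASE.
    base = os.path.realpath(BASE)
    path = os.path.realpath(os.path.join(BASE, filename))
    if not path.startswith(base + os.sep):
        raise ValueError("path traversal attempt: " + filename)
    return path

print(resolve_naive("../../etc/passwd"))
# prints /var/www/uploads/../../etc/passwd - i.e., /etc/passwd
```

Note the check normalizes both sides with realpath before comparing - a bare string prefix test on the raw input is exactly the kind of "looks secure" half-measure the scanners wave through.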

Don't fool yourself into believing a system is secure - especially when the IT team doesn't understand what good looks like.
