There's much to weigh when a security professional considers a career as a professional hacker. There are some amazing benefits: working on projects that interest you, setting your own hours, and making the world a better place by breaking things.
The job pays well if you're talented, and you can build a full career out of a few novel findings. It isn't uncommon for reverse engineers or cryptographers to earn comfortable six-figure salaries. And, much like the medical field, specialization pays more.
But along with the wonderful benefits of a career as an elite hacker come a lot of headaches.
The Field is Very Active and Requires Constant Research
To be an effective breaker of things, you have to constantly read colleagues' published work and learn new techniques. Some findings are simple to describe, like the Intel Management Engine flaw where entering nothing into the username and password fields gives you full access to a PC; others, like Rowhammer, are very complicated. Often, effective hacking is an art form: locating several different kinds of problems and chaining them together into a full vulnerability that requires a patch. There's a lot of reading, learning, adapting, and trying new things.
Managers See Security as a Line-Item Expense
If you're working on a security team at a large company, security often takes a backseat to nearly everything else. It is seen as an expensive form of risk management and proactive public relations. In the eyes of (irresponsible) managers, security teams are costly and slow down the development of new products. Neglected budgets often lead to layoffs, which is why talented people often end up with a long list of companies on their resume/CV.
You'll find yourself arguing about how serious a problem is and whether development time should be spent addressing it. Conflicts with managers create unnecessary stress, and the differences can grow until you're hunting for a new job.
Vulnerability Disclosure and Response to Hostile Entities
Vulnerability disclosure to third parties is the biggest wild card for researchers. Depending on the organization you're dealing with, responses range from positive (interested engagement and a proactive fix), to dismissive and uninterested, to actively hostile and ready to pursue legal action.
A majority of white-hat researchers practice responsible disclosure: you research a piece of software, find a problem, and notify the vendor in private. The developers who maintain the software are given a time window to respond and fix the problem; once patching is complete, the vulnerability is publicized and the world is told to update. Everyone wins.
This arrangement works smoothly if the organization that you’re dealing with is not hostile and is operating ethically. Often, the reality is that companies respond extremely slowly to reports, leaving their customers vulnerable to attack. Sometimes, the company opts to do nothing and encourages the researcher to never disclose the problem.
This is where the “runway” for a vulnerability is crucial. The agreement is cordial until the time window elapses, and then the researcher publishes the vulnerability, patched or not. This forces the company to act, because the vulnerability is now in the wild and can be actively exploited. It may seem irresponsible on the surface, but the runway creates a sense of urgency for developers to actually fix reported problems rather than stall or ignore serious security issues facing their users.
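The runway logic above can be sketched as a tiny scheduling rule. This is a hypothetical illustration: the 90-day default, the function names, and the dates are assumptions for demonstration, not part of any vendor's or researcher's actual policy.

```python
from datetime import date, timedelta

# Assumed default: many disclosure programs use a 90-day window,
# but the exact length is negotiated per report.
DEFAULT_RUNWAY_DAYS = 90

def disclosure_deadline(reported_on: date,
                        runway_days: int = DEFAULT_RUNWAY_DAYS) -> date:
    """Date after which the researcher publishes, patched or not."""
    return reported_on + timedelta(days=runway_days)

def may_publish(today: date, reported_on: date, patched: bool,
                runway_days: int = DEFAULT_RUNWAY_DAYS) -> bool:
    """Publication is fair game once the vendor patches
    or the runway elapses, whichever comes first."""
    return patched or today >= disclosure_deadline(reported_on, runway_days)

# Example dates (arbitrary):
reported = date(2019, 1, 15)
print(disclosure_deadline(reported))                           # 2019-04-15
print(may_publish(date(2019, 3, 1), reported, patched=False))  # False: still inside the window
print(may_publish(date(2019, 4, 15), reported, patched=False)) # True: runway elapsed
```

The key design point is that the deadline is fixed at report time, so stalling by the vendor cannot extend it.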
An example of responsible disclosure stretched to its limits is Broadcom’s lethargic response to QuarksLab’s findings that impact billions of cell phones and other network devices.
My project involved understanding the Linux kernel drivers, analyzing Broadcom firmware, reproducing publicly known vulnerabilities, working on an emulator to run portions of firmware, fuzzing and finding 5 vulnerabilities (CVE-2019-8564, CVE-2019-9500, CVE-2019-9501, CVE-2019-9502, CVE-2019-9503). Two of those vulnerabilities are present both in the Linux kernel and firmware of affected Broadcom chips. The most common exploitation scenario leads to a remote denial of service. Although it is technically challenging to achieve, exploitation for remote code execution should not be discarded as the worst case scenario.
It took Broadcom and Apple a total of eight months to respond to these findings and issue patches.
The worst of these actors use threats of invoking the CFAA and DMCA 1201 to silence researchers altogether, so the first time you discover that you've been trusting a defective product is when it is so widely exploited by criminals and grifters that it's impossible to keep the problem from becoming widely known.
Companies are hostile to security research with alarming regularity. One example: the drone company DJI, which has (had?) a bug-bounty program, threatened a researcher with Computer Fraud and Abuse Act charges for presenting the company with a vulnerability and a PoC. Another hostile company, River City Media, sued a researcher for tracing a large spam operation back to their servers. Oracle's Chief Security Officer also famously wrote a hostile letter declaring that reverse engineering Oracle software violates its license and that the company will sue you for reporting bugs. The letter is still on the Internet Archive here.
Take the Good with the Bad
People often make career choices without considering the full scope of what the job entails or the daily stresses of the workflow. When people imagine being a hacker for hire, they rarely picture the endless documentation, education, paperwork, dialogue, and even legal issues that can overshadow the actual research.
You can mitigate this somewhat by only working with companies that have a good reputation for dealing with ethical hackers, which sidesteps obvious issues like legal threats. Platforms like HackerOne also let ethical hackers and companies interact with less uncertainty about one another.
About Derek Zimmer
Derek is a cryptographer, security expert and privacy activist. He has twelve years of security experience and six years of experience designing and implementing privacy systems. He founded the Open Source Technology Improvement Fund (OSTIF) which focuses on creating and improving open-source security solutions through auditing, bug bounties, and resource gathering and management.