So, in this section of my notes on heap exploitation, I’ve collected some miscellaneous but useful techniques.

1: Freeing chunks without actually calling free

This technique requires a heap overflow bug. Whenever malloc receives a request that is too large to be serviced by an arena’s top chunk or bins, non-main arenas handle it by changing permissions on their pre-mapped heap memory, while the main arena invokes the brk system call to request more memory from the kernel. malloc then uses the top chunk’s SIZE field to compute where the heap currently ends and checks whether the newly obtained memory is contiguous with it. If it is, malloc simply extends the top chunk.

If the SIZE field of the top chunk is overwritten with a small value and a large request is then made, malloc concludes that the new memory doesn’t border the end of the heap, because the fake size field makes the computed heap end wrong. malloc therefore starts a new heap right there: it frees the previous top chunk and moves the top chunk pointer to the newly obtained memory. Since the freed old top is normally too big for the fastbins (and, on modern glibc, for the tcache), it lands in the unsorted bin, which is what lets us generate libc leaks. There’s a check involved when you call a large malloc after overwriting the top chunk SIZE field with a small value:

static void *
sysmalloc (INTERNAL_SIZE_T nb, mstate av)
{
  ...
  assert ((old_top == initial_top (av) && old_size == 0) ||
          ((unsigned long) (old_size) >= MINSIZE &&
           prev_inuse (old_top) &&
           ((unsigned long) old_end & (pagesize - 1)) == 0));
  ...

The source code indicates that PREV_INUSE must be set in the fake size and that the top chunk must end on a page boundary. To satisfy that, overwrite the top chunk size field with the size of a page minus the currently allocated size, with the PREV_INUSE bit set, i.e. 0x1000 - allocated_size + 1. Suppose you’ve allocated a single chunk of size 0x20; then you need to write 0x1000 - 0x20 + 1 = 0xfe1. When the new heap area is handed out, glibc shaves 0x20 bytes off the old top and writes two chunks of size 0x10, known as fencepost chunks, at its end. Their purpose is to ensure that forward consolidation attempts on the freed old top don’t read out of bounds past the end of the heap.