In the last blog post, we discussed how fuzzers determine the uniqueness of a crash. In this blog post, we discuss how we can manually triage a crash and determine its root cause. As an example, we use a heap-based buffer overflow I found in GNU readline 8.1 rc2, which has been fixed in the newest release. We use GDB and rr for time-travel debugging to determine the root cause of the bug.

We can download the source code of GNU readline as a tar.gz file from here. I adapted one of the examples to be even more straightforward. This setup (with the respective instrumentation) was used for fuzzing and is now used to triage one of the crashes we found.

After some time of fuzzing, honggfuzz reported the first crashes. As mentioned in the previous blog post, some information is already embedded in the filenames of the crashes. Here are some of the files that honggfuzz created. We can see from the first part of the filename that the signal that was sent is "abort". Furthermore, we see that the program counter of all crashes is the same (0x7ffff7c03615). Both observations point to a heap-related issue (SIGABRT) and potentially to the same type of bug, such as a heap-based buffer overflow or a double free (same PC).

To gather some initial details about the crash, we use Valgrind to determine what happened.

We now place a breakpoint at realloc and then resume execution in reverse order with the reverse-continue command. Let us recall realloc and its arguments:

```c
void *realloc(void *ptr, size_t size);
```

Next, we check how the parameters are passed by taking a look at the calling conventions: ptr will be in RDI, and the size parameter is passed in RSI. This is one of the many reasons I like to use a gdbinit script like GEF.

However, we can also investigate the chunk by hand. A nice diagram showing the layout of a heap chunk can be found here. The prev_size field is located before the actual data, which is also where our pointer points. This looks a lot like the information about prev_size we received from GEF. Since the value written is 0x3737373737373737, or "77777777", this indicates a heap-based buffer overflow of the previous chunk.

Now we want to find out where this data was written. We place a watchpoint at prev_size and reverse-continue until we find out where the data was written:

```
Id 1, stopped 0x7f0e82fd1933 in __strncpy_avx2 (), reason: BREAKPOINT
0x5584b824ef2d → readline_internal_charloop()
```

This indeed looks like the data gets written step by step, or in our case, more like unwritten: the same function is called repeatedly, and more and more of the data gets written byte by byte.

Now we have sufficient information to take a look into the source code to figure out what happened. The function rl_insert_text can be found in text.c:85. The following snippet shows the part that is interesting for us:

```c
if (rl_end + l >= rl_line_buffer_len)
  /* ... */
strncpy (rl_line_buffer + rl_point, string, l);
```
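To make this reverse-execution workflow easier to reproduce, here is a minimal sketch of the rr/GDB session up to the realloc call. The binary and input names (./harness, crash-input) are hypothetical placeholders for the instrumented target and the crashing file honggfuzz produced:

```
$ rr record ./harness crash-input    # record one crashing execution
$ rr replay                          # replay it deterministically under GDB

(rr) continue                        # run forward until the process aborts
(rr) break realloc                   # now place the breakpoint on realloc
(rr) reverse-continue                # travel backwards to the last realloc call
(rr) info registers rdi rsi         # System V AMD64 ABI: ptr in RDI, size in RSI
```

Because rr replays a fixed recording, addresses stay stable across replays, which is what makes the watchpoint trick below reliable.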
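Inspecting the chunk by hand then boils down to reading the two header words that glibc's malloc keeps directly in front of the user data. A sketch, assuming the user pointer passed to realloc is still available in $rdi; the offsets are for 64-bit glibc:

```
# Chunk layout: [prev_size][size][user data ...]; the user pointer
# points at the data, so the header starts 0x10 bytes earlier.
(rr) x/2gx $rdi - 0x10               # first word: prev_size, second word: size
```

A prev_size of 0x3737373737373737 here is exactly the "77777777" pattern that overflowed out of the previous chunk.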
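And the watchpoint step itself, sketched with the same assumed $rdi and the prev_size location from above; the concrete addresses will differ in every recording:

```
# Watch the 8 bytes of prev_size and run backwards to the write that set them.
(rr) watch -l *(unsigned long *)($rdi - 0x10)
(rr) reverse-continue                # stops where prev_size was last written
(rr) backtrace                       # the strncpy, reached via readline_internal_charloop
(rr) reverse-continue                # repeat to watch the byte-by-byte writes "unhappen"
```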