Fuzz the Linux kernel bpf verifier

Overview

INTRODUCTION

The idea comes from Scannell's blog post, Fuzzing for eBPF JIT bugs in the Linux kernel.

The project consists of three parts:

  • qemu fuzzlib
  • ebpf sample generator
  • exception handler in the linux kernel

QEMU FUZZLIB

This module is mainly used to test the Linux kernel. It uses a modified syzkaller script to generate a Debian buster image file and all other necessary files. The modified script adds a new normal user, test, without a password.

NOTE: use this create-image.sh to create the buster image: ./create-image.sh --distribution buster

This module provides an interface, qemu_fuzzlib_env_setup(), for the caller to initialize the fuzzing environment. The prototype of the function is:

extern struct qemu_fuzzlib_env *
qemu_fuzzlib_env_setup(char *user_name, u64 user_id, char *qemu_exec_path,
			char *bzImage_file, char *osimage_file,
			char *host_id_rsa, char *listen_ip, u32 inst_max,
			u32 idle_sec, u32 inst_memsz, u32 inst_core,
			char *env_workdir, char *guest_workdir,
			char *guest_user, char *script_file, char *c_file,
			char *sample_fname, char *fuzz_db,
			int (*db_init)(struct qemu_fuzzlib_env *),
			int (*mutate)(struct qemu_fuzzlib_env *, char *));
  • user_name: the fuzzer's name, in this project, it is ebpf_fuzzer.
  • user_id: default 0.
  • qemu_exec_path: the binary absolute path to qemu, e.g. /usr/bin/qemu-system-x86_64.
  • bzImage_file: the absolute path to the bzImage file.
  • osimage_file: the absolute path to the OS image file (e.g. buster.img).
  • host_id_rsa: the id_rsa file generated by the modified script.
  • listen_ip: 10.0.2.10 recommended.
  • inst_max: how many qemu instances will be launched.
  • idle_sec: how many seconds to wait until a qemu instance is ready for a new sample.
  • inst_memsz: the memory size for each qemu instance.
  • inst_core: the core number for each qemu instance.
  • env_workdir: the work directory of the fuzzing process.
  • guest_workdir: the work directory of the guest, normally /tmp.
  • guest_user: the user used to log in to the guest; can be test or root. We need a normal user to trigger different code paths in the kernel.
  • script_file: the script file that will be uploaded to the guest and executed there. Default: default_guest.sh.
  • c_file: the C source file that will be uploaded to the guest, then compiled and run there to execute the sample and catch any exception of the sample process. Default: default_guest.c.
  • sample_fname: the sample filename.
  • fuzz_db: the fuzzing database, not used for now.
  • db_init: the callback used to initialize the database.
  • mutate: the callback used to generate a new sample.

After the fuzzing environment is set up, the caller should call qemu_fuzzlib_env_run() to start the fuzzer.

The qemu_fuzzlib_env_run() function generates new samples and dispatches each one to an available qemu instance for execution, until no more samples can be generated or no qemu instance becomes available within idle_sec seconds.

EBPF SAMPLE GENERATOR

We need to focus on just one thing: the mutate() callback. This function generates a new sample; in this eBPF fuzzer, that means generating a new eBPF sample.

Scannell's blog gives us perfect guidance for generating eBPF samples. I recommend reading the blog first.

In the current implementation, the sample's header and tail are fixed. We only need to generate the sample body, which is filled with eBPF instructions. The instructions do several things:

  • get the two bpf map pointers.
  • random instructions that manipulate INVALID_P_REG, implemented in insn_body().
  • an ALU operation on CORRUPT_REG.
  • read from CORRUPT_REG and write the value to STORAGE_REG.
  • exit

After all instructions are generated, we print them and write them to the sample file.

insn_body()

  • gen_body0(): set the SPECIAL_REG bounds.
  • gen_body1(): generate bpf instructions up to max_body_insn. Six types of instructions:
    • INSN_GENERATOR_JMP: BPF_JMP.
    • INSN_GENERATOR_ALU: BPF_ALU.
    • INSN_GENERATOR_MOV: BPF_MOV.
    • INSN_GENERATOR_LD: BPF_LD_IMM64().
    • INSN_GENERATOR_NON: BPF_REG_0 = 0.
    • INSN_GENERATOR_MAX: the last insn, INVALID_P_REG = SPECIAL_REG.

EXCEPTION HANDLER IN THE LINUX KERNEL

The first time I ran the fuzzer to trigger CVE-2020-8835, the guest froze: one of the kernel threads ran into an infinite loop. Check this commit: the verifier rewrites instructions it recognizes as dead code with 'goto PC-1'.

This is a good way to detect bugs in the bpf verifier.

What else?

How to run

After compiling clib and this project, run ./ebpf_fuzzer /path/to/config 0 to start the fuzzer.

For the bzImage file, make sure the following config options are enabled:

CONFIG_CONFIGFS_FS=y
CONFIG_SECURITYFS=y
CONFIG_E1000=y
CONFIG_BINFMT_MISC=y

When the bzImage and buster.img are ready, test the qemu first:

Launch qemu:

/usr/bin/qemu-system-x86_64 -m 2G -smp 2 -kernel /path/to/bzImage -append 'console=ttyS0 root=/dev/sda earlyprintk=serial net.ifnames=0' -drive file=/path/to/buster.img,format=raw -net user,host=10.0.2.10,hostfwd=tcp:127.0.0.1:10021-:22 -net nic,model=e1000 -enable-kvm -nographic

Communicate with the guest

ssh -q -i /path/to/buster.id_rsa -p 10021 -o 'StrictHostKeyChecking no' [email protected] id

An example of the config file:

[
	{
		"version": "general",
		"qemu_exec_path": "/path/to/qemu-system-x86_64",
		"bzImage_path": "/path/to/bzImage",
		"osImage_path": "/path/to/buster.img",
		"rsa_path": "/path/to/buster.id_rsa",
		"idle_sec": "1800",
		"host_ip": "10.0.2.10",
		"instance_nr": "8",
		"instance_memsz": "1",
		"instance_core": "2",
		"env_workdir": "/path/to/fuzzer_workdir",
		"guest_workdir": "/tmp/",
		"guest_user": "test",
		"sample_fname": "test.c",
		"body1_len": "24",
	}
]

The body1_len option is used by the mutate module: it is the number of instructions gen_body1() generates. The larger the value, the lower the valid-sample rate. The default value is 0x18 (24).

FAQ

Q: When running the fuzzer, the output stays at 'total: 0'.
A: Create the buster image with ./create-image.sh --distribution buster. See issue #1.
