mTCP: A Highly Scalable User-level TCP Stack for Multicore Systems

Overview

README

mTCP is a highly scalable user-level TCP stack for multicore systems. The mTCP source code is distributed under the Modified BSD License. For more detail, please refer to the LICENSE file. The license terms of the io_engine driver and the ported applications may differ from mTCP's.

Prerequisites

We require the following libraries to run mTCP.

  • libdpdk (Intel's DPDK package) or libps (PacketShader I/O engine library) or the netmap driver
  • libnuma
  • libpthread
  • librt
  • libgmp (for DPDK/ONVM driver)

Compiling the PSIO/DPDK/NETMAP/ONVM drivers requires kernel headers.

  • For Debian/Ubuntu, try apt-get install linux-headers-$(uname -r)

We have modified the dpdk package to export net_device stat data (for Intel-based Ethernet adapters only) to the OS. To achieve this, we have created a new LKM, dpdk-iface. We also modified the mk/rte.app.mk file to ease the compilation of mTCP applications. We recommend using our package for DPDK installation.

CCP support

You can optionally use CCP's congestion control implementations rather than mTCP's, which gives you a wider selection of congestion control algorithms. (This feature is currently experimental and under revision.)

Using CCP for congestion control (disabled by default) requires the CCP library. If you would like to enable CCP, simply run the configure script with the --enable-ccp option.

  1. Install Rust. Any installation method should be fine. We recommend using rustup:

    curl https://sh.rustup.rs -sSf | sh -s -- -y -v --default-toolchain nightly
  2. Install the CCP command line utility:

    cargo install portus --bin ccp
  3. Build the library (comes with Reno and Cubic by default, use ccp get to add others):

    ccp makelib
    
  4. You will also need to link your application against -lccp and -lstartccp, as demonstrated in apps/example/Makefile.in (a minimal sketch follows).
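
In practice, item 4 amounts to adding the two libraries to your application's link flags, roughly as below; the actual variable names and library search paths are the ones in apps/example/Makefile.in, so treat this as a hedged sketch.

    LDFLAGS += -lccp -lstartccp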

Included directories

mtcp: mtcp source code directory

  • mtcp/src: source code
  • mtcp/src/include: mTCP’s internal header files
  • mtcp/lib: library file
  • mtcp/include: header files that applications will use (see the sketch at the end of this section)

io_engine: event-driven packet I/O engine (io_engine)

  • io_engine/driver - driver source code
  • io_engine/lib - io_engine library
  • io_engine/include - io_engine header files
  • io_engine/samples - sample io_engine applications (not mTCP’s)

dpdk - Intel's Data Plane Development Kit

  • dpdk/...

apps: mTCP applications

  • apps/example - example applications (see README)
  • apps/lighttpd-1.4.32 - mTCP-ported lighttpd (see INSTALL)
  • apps/apache_benchmark - mTCP-ported apache benchmark (ab) (see README-mtcp)

util: useful source code for applications

config: sample mTCP configuration files (may not be necessary)
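
To give a rough feel for how an application uses the headers in mtcp/include, the following is a minimal, hedged sketch of a single-core server loop modeled on apps/example/epserver.c. It omits error handling and command-line parsing; verify the exact signatures against mtcp/include/mtcp_api.h and mtcp/include/mtcp_epoll.h.

    /* Hedged sketch of a single-core mTCP server loop, modeled on
     * apps/example/epserver.c. Error handling is omitted. */
    #include <mtcp_api.h>
    #include <mtcp_epoll.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <string.h>

    #define MAX_EVENTS 1024

    int main(void)
    {
        struct mtcp_epoll_event ev, events[MAX_EVENTS];
        struct sockaddr_in saddr;
        mctx_t mctx;
        int listener, ep, n, i;

        /* Load the per-process configuration (e.g. apps/example/epserver.conf). */
        if (mtcp_init("epserver.conf") < 0)
            return -1;

        /* One mTCP thread context per core; this sketch uses core 0 only. */
        mtcp_core_affinitize(0);
        mctx = mtcp_create_context(0);
        ep = mtcp_epoll_create(mctx, MAX_EVENTS);

        /* Listening socket: mtcp_* calls mirror the BSD socket API. */
        listener = mtcp_socket(mctx, AF_INET, SOCK_STREAM, 0);
        mtcp_setsock_nonblock(mctx, listener);
        memset(&saddr, 0, sizeof(saddr));
        saddr.sin_family = AF_INET;
        saddr.sin_addr.s_addr = INADDR_ANY;
        saddr.sin_port = htons(80);
        mtcp_bind(mctx, listener, (struct sockaddr *)&saddr, sizeof(saddr));
        mtcp_listen(mctx, listener, 4096);

        ev.events = MTCP_EPOLLIN;
        ev.data.sockid = listener;
        mtcp_epoll_ctl(mctx, ep, MTCP_EPOLL_CTL_ADD, listener, &ev);

        while (1) {
            n = mtcp_epoll_wait(mctx, ep, events, MAX_EVENTS, -1);
            for (i = 0; i < n; i++) {
                if (events[i].data.sockid == listener) {
                    /* New connection: register it for read events. */
                    int c = mtcp_accept(mctx, listener, NULL, NULL);
                    ev.events = MTCP_EPOLLIN;
                    ev.data.sockid = c;
                    mtcp_epoll_ctl(mctx, ep, MTCP_EPOLL_CTL_ADD, c, &ev);
                } else {
                    char buf[8192];
                    int r = mtcp_read(mctx, events[i].data.sockid, buf, sizeof(buf));
                    if (r <= 0)
                        mtcp_close(mctx, events[i].data.sockid);
                    /* ... application logic, mtcp_write(), etc. ... */
                }
            }
        }

        mtcp_destroy_context(mctx);
        mtcp_destroy();
        return 0;
    }

Multi-core applications create one mTCP context (and one epoll descriptor) per core, as the example applications do.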

Install guides

mTCP can be prepared in four ways.

DPDK VERSION

  1. Download DPDK submodule.

    git submodule init
    git submodule update
  2. Setup DPDK.

    ./setup_mtcp_dpdk_env.sh [<path to $RTE_SDK>]
    • Press [15] to compile x86_64-native-linuxapp-gcc version

    • Press [18] to install igb_uio driver for Intel NICs

    • Press [22] to setup 2048 2MB hugepages

    • Press [24] to register the Ethernet ports

    • Press [35] to quit the tool

    • Only devices listed at http://dpdk.org/doc/nics will work with the DPDK drivers. Please make sure that your NIC is compatible before moving on to the next step.

    • We use the dpdk/ submodule as our DPDK driver. You can pass a different DPDK source directory as a command-line argument.

  3. Bring the DPDK-compatible interfaces up, and then set the RTE_SDK and RTE_TARGET environment variables. If you are using Intel NICs, the interfaces will have a dpdk prefix.

    sudo ifconfig dpdk0 x.x.x.x netmask 255.255.255.0 up
    export RTE_SDK=`echo $PWD`/dpdk
    export RTE_TARGET=x86_64-native-linuxapp-gcc
  4. Setup mtcp library:

    ./configure --with-dpdk-lib=$RTE_SDK/$RTE_TARGET
    make
    • By default, mTCP assumes that there are 16 CPUs in your system. You can set the CPU limit, e.g. on a 32-core system, by using the following command:

      ./configure --with-dpdk-lib=$RTE_SDK/$RTE_TARGET CFLAGS="-DMAX_CPUS=32"

    Please note that your NIC should support RSS queues equal to the MAX_CPUS value (since mTCP expects a one-to-one RSS queue to CPU binding).

    • If the ./configure script prints an error, run the following command and then redo step 4 (configure again):

      autoreconf -ivf
    • checksum offloading in the NIC is now ENABLED (by default)!!!

      • this only works for dpdk at the moment
      • use ./configure --with-dpdk-lib=$RTE_SDK/$RTE_TARGET --disable-hwcsum to disable checksum offloading.
    • check libmtcp.a in mtcp/lib

    • check header files in mtcp/include

    • check example binary files in apps/example

  5. Check the configurations in apps/example

    • epserver.conf for server-side configuration
    • epwget.conf for client-side configuration
    • you may write your own configuration file for your application (a minimal sample is sketched after this list)
  6. Run the applications!

  7. You can revert all your changes by running the following script.

    ./setup_linux_env.sh [<path to $RTE_SDK>]
    • Press [29] to unbind the Ethernet ports
    • Press [30] to remove igb_uio.ko driver
    • Press [33] to remove hugepage mappings
    • Press [34] to quit the tool
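
As referenced in step 5, each example application reads a .conf file at startup. For orientation, a DPDK-mode configuration has roughly the following shape; this is a hedged sketch modeled on apps/example/epserver.conf and on the settings the example apps print at startup, so treat the values as placeholders and check the sample files for the authoritative syntax.

    io = dpdk
    num_mem_ch = 4
    port = dpdk0
    max_concurrency = 10000
    max_num_buffers = 10000
    rcvbuf = 8192
    sndbuf = 8192
    tcp_timeout = 30
    tcp_timewait = 0
    stat_print = dpdk0

Here io selects the packet I/O backend, num_mem_ch should match the number of memory channels, port names the interface(s) to use, max_concurrency and max_num_buffers are per-core limits, rcvbuf/sndbuf set the fixed buffer sizes discussed under Notes below, and stat_print lists the NICs whose statistics are printed.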

PSIO VERSION

  1. make in io_engine/driver:

    make
    • check ps_ixgbe.ko
    • please note that psio only runs on linux-2.6.x kernels (linux-2.6.32 ~ linux-2.6.38)
  2. install the driver:

    ./install.py <# cores> <# cores>
  3. Setup mtcp library:

    ./configure --with-psio-lib=<$path_to_ioengine>
    # e.g. ./configure --with-psio-lib=`echo $PWD`/io_engine
    make
    • By default, mTCP assumes that there are 16 CPUs in your system. You can set the CPU limit, e.g. on an 8-core system, by using the following command (a run-time alternative is sketched after this list):

      ./configure --with-psio-lib=`echo $PWD`/io_engine CFLAGS="-DMAX_CPUS=8"

    Please note that your NIC should support RSS queues equal to the MAX_CPUS value (since mTCP expects a one-to-one RSS queue to CPU binding).

    • If the ./configure script prints an error, run the following command and then redo step 3 (configure again):

      autoreconf -ivf
    • check libmtcp.a in mtcp/lib

    • check header files in mtcp/include

    • check example binary files in apps/example

  4. Check the configurations in apps/example

    • epserver.conf for server-side configuration
    • epwget.conf for client-side configuration
    • you may write your own configuration file for your application
  5. Run the applications!
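
MAX_CPUS above is only a compile-time cap; the number of cores an application actually uses can also be reduced at run time before mtcp_init() is called, which appears to be how the example applications implement their -N option (hence the "Configuration updated by mtcp_setconf()." line seen in their logs). A hedged sketch; the field and function names should be checked against struct mtcp_conf in mtcp/include/mtcp_api.h:

    #include <mtcp_api.h>

    /* Hedged sketch: cap the number of cores mTCP uses at run time.
     * Call before mtcp_init(); see struct mtcp_conf in mtcp/include/mtcp_api.h. */
    static int init_with_core_limit(const char *conf_file, int core_limit)
    {
        struct mtcp_conf mcfg;

        mtcp_getconf(&mcfg);
        mcfg.num_cores = core_limit;   /* e.g. the value passed with -N */
        mtcp_setconf(&mcfg);           /* logs "Configuration updated by mtcp_setconf()." */

        return mtcp_init(conf_file);
    }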

ONVM VERSION

NEW: You can now run mTCP applications (server + client) locally. A local setup is useful when only one machine is available for the experiment. ONVM configurations are placed as .conf files in the apps/example directory. ONVM basics are explained at https://github.com/sdnfv/openNetVM.

Before running the applications, make sure that onvm_mgr is running.
Also, the applications' cores must not overlap with onvm_mgr's cores.

  1. Install openNetVM following these instructions

  2. Set up the dpdk interfaces:

    ./setup_mtcp_onvm_env.sh
  3. Next, bring the dpdk-registered interfaces up. This can be set up using:

    sudo ifconfig dpdk0 x.x.x.x netmask 255.255.255.0 up
  4. Setup mtcp library

    ./configure --with-dpdk-lib=$<path_to_dpdk> --with-onvm-lib=$<path_to_onvm_lib>
    # e.g. ./configure --with-dpdk-lib=$RTE_SDK/$RTE_TARGET --with-onvm-lib=`echo $ONVM_HOME`/onvm
    make
    • By default, mTCP assumes that there are 16 CPUs in your system. You can set the CPU limit, e.g. on a 32-core system, by using the following command:

      ./configure --with-dpdk-lib=$RTE_SDK/$RTE_TARGET --with-onvm-lib=$<path_to_onvm_lib> CFLAGS="-DMAX_CPUS=32"

    Please note that your NIC should support RSS queues equal to the MAX_CPUS value (since mTCP expects a one-to-one RSS queue to CPU binding).

    • If the ./configure script prints an error, run the following command and then redo step 4 (configure again):

      autoreconf -ivf
    • checksum offloading in the NIC is now ENABLED (by default)!!!

      • this only works for dpdk at the moment
      • use ./configure --with-dpdk-lib=$RTE_SDK/$RTE_TARGET --with-onvm-lib=$<path_to_onvm_lib> --disable-hwcsum to disable checksum offloading.

    • check libmtcp.a in mtcp/lib

    • check header files in mtcp/include

    • check example binary files in apps/example

  5. Check the configurations in apps/example

    • epserver.conf for server-side configuration
    • epwget.conf for client-side configuration
    • you may write your own configuration file for your application
  6. Run the applications!

  7. You can revert all your changes by running the following script.

    ./setup_linux_env.sh
    • Press [29] to unbind the Ethernet ports
    • Press [30] to remove igb_uio.ko driver
    • Press [33] to remove hugepage mappings
    • Press [34] to quit the tool

Notes

Once you have started onvm_mgr, an mTCP application may sometimes fail to launch with an error resembling one of the following:

  • EAL: FATAL: Cannot init memory
  • Cannot mmap memory for rte_config at [0x7ffff7fb6000], got [0x7ffff7e74000] - please use '--base-virtaddr' option
  • EAL: Cannot mmap device resource file /sys/bus/pci/devices/0000:06:00.0/resource3 to address: 0x7ffff7ff1000

To prevent this, pass the base virtual address parameter when running the ONVM manager (the core-list argument 0xf8 isn't actually used by mTCP NFs but is required), e.g.:

cd openNetVM/onvm  
./go.sh 1,2,3 1 0xf8 -s stdout -a 0x7f000000000 

NETMAP VERSION

See README.netmap for details.

Tested environments

mTCP runs on Linux-based operating systems (2.6.x for PSIO) with generic x86_64 CPUs, but to help evaluation, we provide our tested environments as follows.

Intel Xeon E5-2690 octacore CPU @ 2.90 GHz 32 GB of RAM (4 memory channels)
10 GbE NIC with Intel 82599 chipset (specifically Intel X520-DA2)
Debian 6.0.7 (Linux 2.6.32-5-amd64)

Intel Core i7-3770 quadcore CPU @ 3.40 GHz 16 GB of RAM (2 memory channels)
10 GbE NIC with Intel 82599 chipset (specifically Intel X520-DA2)
Ubuntu 10.04 (Linux 2.6.32-47)

Event-driven PacketShader I/O engine (extended io_engine-0.2)

  • PSIO is currently only compatible with Linux-2.6.

We tested the DPDK version (polling driver) with Linux-3.13.0 kernel.

Notes

  1. mTCP currently runs with fixed memory pools. That means the sizes of the TCP receive and send buffers are fixed at startup and do not grow dynamically. This can limit performance for large, long-lived connections. Be sure to configure the buffer sizes appropriately for your workload.

  2. The client side of mTCP supports mtcp_init_rss() to create an address pool from which an available local address/port tuple can be fetched in O(1). To easily saturate the server side, this function should be called at application startup (see the sketch after this list).

  3. The supported socket options are limited right now. Please refer to mtcp/src/api.c for details.

  4. The counterpart (remote peer) of mTCP should have TCP timestamps enabled.

  5. mTCP has been tested with the following Ethernet adapters:

    1. Intel-82598 ixgbe (Max-queue-limit: 16)
    2. Intel-82599 ixgbe (Max-queue-limit: 16)
    3. Intel-I350 igb (Max-queue-limit: 08)
    4. Intel-X710 i40e (Max-queue-limit: ~)
    5. Intel-X722 i40e (Max-queue-limit: ~)
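
To illustrate note 2, here is a hedged sketch of the client-side address-pool setup, modeled on apps/example/epwget.c; the exact signature of mtcp_init_rss() is declared in mtcp/include/mtcp_api.h.

    #include <mtcp_api.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* Hedged sketch: build the pool of local (address, port) tuples once per
     * context so that later connects can pick an RSS-friendly tuple in O(1). */
    static void init_client_addr_pool(mctx_t mctx, const char *server_ip)
    {
        in_addr_t daddr = inet_addr(server_ip);
        in_addr_t dport = htons(80);      /* destination port, network byte order */
        in_addr_t saddr = INADDR_ANY;     /* let mTCP pick the source address */

        /* Call right after mtcp_create_context() and before opening sockets;
         * the third argument is the number of source addresses to pool. */
        mtcp_init_rss(mctx, saddr, 1, daddr, dport);
    }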

Frequently asked questions

  1. How can I quit the application?

    • Use ^C to gracefully shut down the application. Two consecutive ^C (separated by 1 second) will force quit.
  2. My application doesn't use the address specified via ifconfig.

    • On some Linux distros (e.g., Ubuntu), NetworkManager may re-assign a different IP address or delete the assigned one.

    • Disable NetworkManager temporarily if that's the case. NetworkManager will be re-enabled upon reboot.

    sudo service network-manager stop
  3. Can I statically set the routing or arp table?

    • Yes, mTCP allows static route and ARP configuration. Go to the config directory and see sample_route.conf and sample_arp.conf. Copy and adapt them to your environment, then link (ln -s) the config directory into the application directory. mTCP will look for config/route.conf and config/arp.conf for static configuration (a short example follows).
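
Both files use a simple count-then-entries format. A hedged example follows; the authoritative syntax is in config/sample_route.conf and config/sample_arp.conf, and the interface name must match one that mTCP detects (e.g. dpdk0).

    config/route.conf:
        ROUTES 1
        10.0.0.0/24 dpdk0

    config/arp.conf:
        ARP_ENTRY 1
        10.0.0.2/32 00:11:22:33:44:55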

Caution

  1. Do not remove the I/O driver (ps_ixgbe/igb_uio) while mTCP applications are running; the application will panic!

  2. Use the ps_ixgbe/dpdk driver contained in this package, not one from elsewhere (e.g., the io_engine GitHub repository).

Contacts

The GitHub issue board is the preferred way to report bugs and ask questions about mTCP.

CONTACTS FOR THE AUTHORS

User mailing list <mtcp-user at list.ndsl.kaist.edu>
EunYoung Jeong <notav at ndsl.kaist.edu>
M. Asim Jamshed <ajamshed at ndsl.kaist.edu>
Comments
  • mtcp&netmap error

    I have tried to install mTCP with the netmap module. I installed the netmap module successfully and compiled the mTCP source successfully too, but when I run the test programs in apps/example an error occurs. I run them like this: ./epserver -p /home/ -f ./epserver.conf -N 4 and ./epclient 127.0.0.1/example.txt 10000 -N 4 -c 80

    But there is always an error: it reports a segmentation fault when execution reaches the mtcp_init() function.

    When I run the dmesg command it reports: [ 2147.768239] epserver[6166]: segfault at 0 ip (null) sp 00007fff3f834eb8 error 14 in epserver[400000+21000] [ 2189.381506] epwget[6179]: segfault at 0 ip (null) sp 00007ffddff13548 error 14 in epwget[400000+22000]

    Did I do something wrong? Does someone have a document describing how to make mTCP work with netmap?

    opened by yangxiaozhenzxy 26
  • dpdk0 does not appear 82599ES NIC

    Hi, mtcp team,

    I can build DPDK and mTCP successfully, but after binding the Ethernet device (82599ES) to the IGB UIO module, dpdk0 does not appear. The NIC: 82599ES 10-Gigabit SFI/SFP+ Network Connection.

    Do you have any opinion about this issue? Am I missing some step?
    Thanks in advance.

    question 
    opened by kingsonstar 18
  • mTCP VMware vmxnet3  adapter support

    Hi,

    I am running Ubuntu 14.04 as a virtual guest on VMware ESXi; the guest is using the vmxnet3 adapter. I applied the following diff to mtcp/src/dpdk_module.c to make mTCP compile and run, but on the web server side, when I run tcpdump, I see no packets coming in to the server. I don't have access to the VMware ESXi hypervisor, so I am not sure whether the packets have left the hypervisor.

    diff --git a/mtcp/src/dpdk_module.c b/mtcp/src/dpdk_module.c
    index 33d349e..666dfd3 100644
    --- a/mtcp/src/dpdk_module.c
    +++ b/mtcp/src/dpdk_module.c
    @@ -57,8 +57,8 @@
     /*
      * Configurable number of RX/TX ring descriptors
      */
    -#define RTE_TEST_RX_DESC_DEFAULT 128
    -#define RTE_TEST_TX_DESC_DEFAULT 128
    +#define RTE_TEST_RX_DESC_DEFAULT 128
    +#define RTE_TEST_TX_DESC_DEFAULT 512
     static uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
     static uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;
    @@ -124,7 +124,7 @@ static const struct rte_eth_txconf tx_conf = {
      * As the example won't handle mult-segments and offload cases,
      * set the flag by default. */
    -        .txq_flags =                    0x0,
    +        .txq_flags =                    ETH_TXQ_FLAGS_NOOFFLOADS|ETH_TXQ_FLAGS_NOMULTSEGS,
     };
     struct mbuf_table {

    I noticed the dpdk0 interface has empty MAC address as below:

    4: dpdk0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff <==========EMPTY MAC ADDRESS inet 10.1.72.28/24 brd 10.1.72.255 scope global dpdk0 valid_lft forever preferred_lft forever inet6 fe80::200:ff:fe00:0/64 scope link valid_lft forever preferred_lft forever

    I reviewed dpdk-2.2.0/lib/librte_eal/linuxapp/igb_uio/igb_uio.h, noticed only IXGBE and IGB adapter were supported for mTCP to retrieve and attach MAC addresses for dpdk0 in Linux world.

    do you think the empty MAC address for dpdk0 is the reason I see no packet from the server side?

    if so, adapter vmxnet3 can be added to igb_uio.h like IGB adapter to resolve the issue?

    below is the output I think relevant:

    EAL: probe driver: 15ad:7b0 rte_vmxnet3_pmd EAL: PCI memory mapped at 0x7ff74ea00000 EAL: PCI memory mapped at 0x7ff74ea01000 EAL: PCI memory mapped at 0x7ff74ea02000 PMD: eth_vmxnet3_dev_init(): >> PMD: eth_vmxnet3_dev_init(): Hardware version : 1 PMD: eth_vmxnet3_dev_init(): UPT hardware version : 1 PMD: eth_vmxnet3_dev_init(): MAC Address : 00:50:56:86:10:76 Total number of attached devices: 1 Interface name: dpdk0 Configurations: Number of CPU cores available: 4 Number of CPU cores to use: 4 Maximum number of concurrency per core: 10000 Maximum number of preallocated buffers per core: 10000 Receive buffer size: 8192 Send buffer size: 8192 TCP timeout seconds: 30 TCP timewait seconds: 0

    NICs to print statistics: dpdk0

    Interfaces: name: dpdk0, ifindex: 0, hwaddr: 00:00:00:00:00:00, ipaddr: 10.1.72.28, netmask: 255.255.255.0

    Number of NIC queues: 4

    Loading routing configurations from : /etc/mtcp/config/route.conf Routes: Destination: 10.1.72.0/24, Mask: 255.255.255.0, Masked: 10.1.72.0, Route: ifdx-0

    Destination: 10.1.72.0/24, Mask: 255.255.255.0, Masked: 10.1.72.0, Route: ifdx-0

    Loading ARP table from : /etc/mtcp/config/arp.conf ARP Table:

    IP addr: 10.1.72.17, dst_hwaddr: 00:23:E9:64:C0:03

    Initializing port 0... PMD: vmxnet3_dev_configure(): >> PMD: vmxnet3_dev_rx_queue_setup(): >> PMD: vmxnet3_dev_rx_queue_setup(): >> PMD: vmxnet3_dev_rx_queue_setup(): >> PMD: vmxnet3_dev_rx_queue_setup(): >> PMD: vmxnet3_dev_tx_queue_setup(): >> PMD: vmxnet3_dev_tx_queue_setup(): >> PMD: vmxnet3_dev_tx_queue_setup(): >> PMD: vmxnet3_dev_tx_queue_setup(): >> PMD: vmxnet3_dev_start(): >> PMD: vmxnet3_rss_configure(): >> PMD: vmxnet3_setup_driver_shared(): Writing MAC Address : 00:50:56:86:10:76 PMD: vmxnet3_disable_intr(): >> PMD: vmxnet3_dev_rxtx_init(): >> rte_eth_dev_config_restore: port 0: MAC address array not supported <=====here done:

    Checking link statusdone Port 0 Link Up - speed 10000 Mbps - full-duplex Configuration updated by mtcp_setconf(). CPU 0: initialization finished. [mtcp_create_context:1173] CPU 0 is now the master thread. [CPU 0] dpdk0 flows: 0, RX: 10(pps) (err: 0), 0.00(Gbps), TX: 0(pps), 0.00(Gbps) [ ALL ] dpdk0 flows: 0, RX: 10(pps) (err: 0), 0.00(Gbps), TX: 0(pps), 0.00(Gbps) Thread 0 handles 1 flows. connecting to 10.1.72.17:80 [CPU 0] dpdk0 flows: 1, RX: 25(pps) (err: 0), 0.00(Gbps), TX: 2(pps), 0.00(Gbps) [ ALL ] dpdk0 flows: 1, RX: 25(pps) (err: 0), 0.00(Gbps), TX: 2(pps), 0.00(Gbps)

    opened by vincentmli 16
  • mTCP with mellanox driver

    Hi,

    I'm having some troubles running example apps with mellanox driver. I've compiled provided dpdk with CONFIG_RTE_LIBRTE_MLX5_PMD=y and testpmd works just fine:

    # ./testpmd -c 0xff00 -n 4 -w 0000:19:00.1 -- --rxq=2 --txq=2
    EAL: Detected 64 lcore(s)
    EAL: No free hugepages reported in hugepages-1048576kB
    EAL: Probing VFIO support...
    EAL: PCI device 0000:19:00.1 on NUMA socket 0
    EAL:   probe driver: 15b3:1015 net_mlx5
    PMD: net_mlx5: PCI information matches, using device "mlx5_1" (SR-IOV: false, MPS: true)
    PMD: net_mlx5: 1 port(s) detected
    PMD: net_mlx5: MPS is enabled
    PMD: net_mlx5: port 1 MAC address is ec:0d:9a:3a:3b:5b
    USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=203456, size=2176, socket=0
    USER1: create a new mbuf pool <mbuf_pool_socket_1>: n=203456, size=2176, socket=1
    Configuring Port 0 (socket 0)
    PMD: net_mlx5: 0xd171c0: TX queues number update: 0 -> 2
    PMD: net_mlx5: 0xd171c0: RX queues number update: 0 -> 2
    Port 0: EC:0D:9A:3A:3B:5B
    Checking link statuses...
    Done
    No commandline core given, start packet forwarding
    io packet forwarding - ports=1 - cores=1 - streams=2 - NUMA support enabled, MP over anonymous pages disabled
    Logical Core 9 (socket 1) forwards packets on 2 streams:
      RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
      RX P=0/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
    
      io packet forwarding - CRC stripping enabled - packets/burst=32
      nb forwarding cores=1 - nb forwarding ports=1
      RX queues=2 - RX desc=128 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0 wthresh=0
      TX queues=2 - TX desc=512 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0 wthresh=0
      TX RS bit threshold=0 - TXQ flags=0x0
    

    It binds to the interface and the server stops responding on it. The epserver app on the other hand initializes without any errors but doesn't bind to any of the interfaces:

    # ./epserver -p /var/www -f epserver.conf -N 1
    Configuration updated by mtcp_setconf().
    ---------------------------------------------------------------------------------
    Loading mtcp configuration from : epserver.conf
    Loading interface setting
    EAL: Detected 64 lcore(s)
    EAL: Auto-detected process type: PRIMARY
    EAL: No free hugepages reported in hugepages-1048576kB
    EAL: Probing VFIO support...
    EAL: PCI device 0000:19:00.0 on NUMA socket 0
    EAL:   probe driver: 15b3:1015 net_mlx5
    PMD: net_mlx5: PCI information matches, using device "mlx5_0" (SR-IOV: false, MPS: true)
    PMD: net_mlx5: 1 port(s) detected
    PMD: net_mlx5: MPS is enabled
    PMD: net_mlx5: port 1 MAC address is ec:0d:9a:3a:3b:5a
    EAL: PCI device 0000:19:00.1 on NUMA socket 0
    EAL:   probe driver: 15b3:1015 net_mlx5
    PMD: net_mlx5: PCI information matches, using device "mlx5_1" (SR-IOV: false, MPS: true)
    PMD: net_mlx5: 1 port(s) detected
    PMD: net_mlx5: MPS is enabled
    PMD: net_mlx5: port 1 MAC address is ec:0d:9a:3a:3b:5b
    Configurations:
    Number of CPU cores available: 1
    Number of CPU cores to use: 1
    Maximum number of concurrency per core: 10000
    Maximum number of preallocated buffers per core: 10000
    Receive buffer size: 8192
    Send buffer size: 8192
    TCP timeout seconds: 30
    TCP timewait seconds: 0
    NICs to print statistics:
    ---------------------------------------------------------------------------------
    Interfaces:
    Number of NIC queues: 1
    ---------------------------------------------------------------------------------
    Loading routing configurations from : config/route.conf
    Routes:
    Destination: 172.16.1.0/24, Mask: 255.255.255.0, Masked: 172.16.1.0, Route: ifdx--1
    ---------------------------------------------------------------------------------
    Loading ARP table from : config/arp.conf
    ARP Table:
    IP addr: 172.16.1.10, dst_hwaddr: EC:0D:9A:3A:4B:B3
    ---------------------------------------------------------------------------------
    
    Checking link statusdone
    [dpdk_init_handle: 263] Can't open /dev/dpdk-iface for context->cpu: 0! Are you using mlx4/mlx5 driver?
    CPU 0: initialization finished.
    [mtcp_create_context:1230] CPU 0 is now the master thread.
    

    Here's my epserver.conf

    # cat epserver.conf 
    io = dpdk
    num_mem_ch = 4
    port = mlx5_1
    max_concurrency = 10000
    max_num_buffers = 10000
    rcvbuf = 8192
    sndbuf = 8192
    tcp_timeout = 30
    tcp_timewait = 0
    

    No matter what I put as the port value, I always get the same behavior. The server responds to pings and trying to connect to port 80 gives connection refused. I was unable to find any examples of mTCP+Mellanox combo so I don't really know what to put the port value to.

    Did I miss something?

    Thank you,

    Hrvoje Zeba

    opened by jhzeba 14
  • /dev/dpdk-iface does not show up

    Hi,

    I'm trying to launch epserver but I get "Error opening dpdk-face".

    Indeed dpdk-iface.ko is loaded but there is no /dev/dpdk-iface entry. I use your dpdk version from git submodule.

    I have a mlx5 NIC. Kernel 4.15. Ubuntu 18.04.

    epserver.conf:

        io = dpdk
        num_mem_ch = 6
        port = enp115s0f0
        rcvbuf = 8192
        sndbuf = 8192
        tcp_timeout = 30
        tcp_timewait = 0
        stat_print = dpdk0

    Thanks, Tom

    opened by tbarbette 13
  • run mtcp in virtual machine?

    Hi, can I use only one physical machine and run two virtual machines on it, with one virtual machine running epwget and the other running epserver? Would that work?

    question 
    opened by luohaha 13
  • Questions for future enhancement

    Hi, thanks for putting together this great module. Userspace TCP is in high demand, and I have been looking for an open-source implementation for some time. I have a few questions:

    1. Can you please let me know what PPS you are able to achieve with this.
    2. Do you plan to make it work for multiple processes similar to the NGINX architecture.
    3. Do you plan to integrate this with netmap.

    Thanks..Santos

    opened by nginxsantos 13
  • netmap RSS support

    Hi,

    I built mTCP with the latest version of netmap, which uses the ixgbe-5.3.7 driver.

    But the test results show nearly zero throughput. I think the problem here is RSS.

    So how do I correctly set up RSS in the ixgbe-5.3.7 driver?

    I did not modify the seeds as suggested by README.netmap, since the function ixgbe_setup_mrqc() in the 5.3.7 driver may not be the proper place to do it. Instead, I write the seeds in the function ixgbe_init_rss_key().

    fixed 
    opened by wtao0221 12
  • SSL load test tool on mtcp ?

    Hi

    The ported ApacheBench does not support SSL load testing. I am wondering what client-side SSL load tool the mTCP project used to evaluate the SSLShader performance mentioned in the paper. I would like to know whether an existing one is available, or to get some idea of how to port one to mTCP.

    Thanks

    opened by vincentmli 12
  • errors when compile dpdk in a new compiled kernel

    We met an unsolvable problem. At first we had to run mtcp on a Redhat 6.2 machine whose kernel version was 2.6.32-220.el6.x86_64. We compiled a new kernel version 3.2.78. Then we met the problem.

    When we tried to use the former mtcp version with dpdk-2.1.0, we met these strange problems when compiling dpdk-2.1.0 in it:

    CC [M] /home/gj/mtcp/dpdk-2.1.0/x86_64-native-linuxapp-gcc/build/lib/librte_eal/linuxapp/igb_uio/igb_uio.o In file included from /home/gj/mtcp/dpdk-2.1.0/x86_64-native-linuxapp-gcc/build/lib/librte_eal/linuxapp/igb_uio/igb_uio.c:34: /home/gj/mtcp/dpdk-2.1.0/lib/librte_eal/linuxapp/igb_uio/igb_uio.h:378: error: unknown field ‘ndo_fdb_add’ specified in initializer In file included from /home/gj/mtcp/dpdk-2.1.0/x86_64-native-linuxapp-gcc/build/lib/librte_eal/linuxapp/igb_uio/igb_uio.c:39: /home/gj/mtcp/dpdk-2.1.0/lib/librte_eal/linuxapp/igb_uio/compat.h:53: error: redefinition of ‘pci_intx_mask_supported’ /home/gj/mtcp/dpdk-2.1.0/lib/librte_eal/linuxapp/igb_uio/compat.h:53: note: previous definition of ‘pci_intx_mask_supported’ was here /home/gj/mtcp/dpdk-2.1.0/lib/librte_eal/linuxapp/igb_uio/compat.h:76: error: redefinition of ‘pci_check_and_mask_intx’ /home/gj/mtcp/dpdk-2.1.0/lib/librte_eal/linuxapp/igb_uio/compat.h:76: note: previous definition of ‘pci_check_and_mask_intx’ was here make[10]: *** [/home/gj/mtcp/dpdk-2.1.0/x86_64-native-linuxapp-gcc/build/lib/librte_eal/linuxapp/igb_uio/igb_uio.o] Error 1 make[9]: *** [module/home/gj/mtcp/dpdk-2.1.0/x86_64-native-linuxapp-gcc/build/lib/librte_eal/linuxapp/igb_uio] Error 2 make[8]: *** [sub-make] Error 2 make[7]: *** [igb_uio.ko] Error 2 make[6]: *** [igb_uio] Error 2 make[5]: *** [linuxapp] Error 2 make[4]: *** [librte_eal] Error 2 make[3]: *** [lib] Error 2 make[2]: *** [all] Error 2 make[1]: *** [x86_64-native-linuxapp-gcc_install] Error 2 make: *** [install] Error 2

    When tried new version with dpdk-2.2.0, error gone with:

    CC [M] /home/mtcp/dpdk-2.2.0/x86_64-native-linuxapp-gcc/build/lib/librte_eal/linuxapp/igb_uio/igb_uio.o In file included from /home/mtcp/dpdk-2.2.0/x86_64-native-linuxapp-gcc/build/lib/librte_eal/linuxapp/igb_uio/igb_uio.c:35: /home/mtcp/dpdk-2.2.0/lib/librte_eal/linuxapp/igb_uio/igb_uio.h:218: error: expected declaration specifiers or ‘...’ before ‘netdev_features_t’ /home/mtcp/dpdk-2.2.0/lib/librte_eal/linuxapp/igb_uio/igb_uio.h: In function ‘netdev_set_features’: /home/mtcp/dpdk-2.2.0/lib/librte_eal/linuxapp/igb_uio/igb_uio.h:221: error: ‘features’ undeclared (first use in this function) /home/mtcp/dpdk-2.2.0/lib/librte_eal/linuxapp/igb_uio/igb_uio.h:221: error: (Each undeclared identifier is reported only once /home/mtcp/dpdk-2.2.0/lib/librte_eal/linuxapp/igb_uio/igb_uio.h:221: error: for each function it appears in.) /home/mtcp/dpdk-2.2.0/lib/librte_eal/linuxapp/igb_uio/igb_uio.h: At top level: /home/mtcp/dpdk-2.2.0/lib/librte_eal/linuxapp/igb_uio/igb_uio.h:229: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘attribute’ before ‘netdev_fix_features’ cc1: warnings being treated as errors /home/mtcp/dpdk-2.2.0/lib/librte_eal/linuxapp/igb_uio/igb_uio.h:283: error: initialization from incompatible pointer type /home/mtcp/dpdk-2.2.0/lib/librte_eal/linuxapp/igb_uio/igb_uio.h:284: error: ‘netdev_fix_features’ undeclared here (not in a function) /home/mtcp/dpdk-2.2.0/lib/librte_eal/linuxapp/igb_uio/igb_uio.h:285: error: unknown field ‘ndo_fdb_add’ specified in initializer In file included from /home/mtcp/dpdk-2.2.0/x86_64-native-linuxapp-gcc/build/lib/librte_eal/linuxapp/igb_uio/igb_uio.c:42: /home/mtcp/dpdk-2.2.0/lib/librte_eal/linuxapp/igb_uio/compat.h:66: error: redefinition of ‘pci_intx_mask_supported’ /home/mtcp/dpdk-2.2.0/lib/librte_eal/linuxapp/igb_uio/compat.h:66: note: previous definition of ‘pci_intx_mask_supported’ was here /home/mtcp/dpdk-2.2.0/lib/librte_eal/linuxapp/igb_uio/compat.h:89: error: redefinition of ‘pci_check_and_mask_intx’ /home/mtcp/dpdk-2.2.0/lib/librte_eal/linuxapp/igb_uio/compat.h:89: note: previous definition of ‘pci_check_and_mask_intx’ was here make[10]: *** [/home/mtcp/dpdk-2.2.0/x86_64-native-linuxapp-gcc/build/lib/librte_eal/linuxapp/igb_uio/igb_uio.o] Error 1 make[9]: *** [module/home/mtcp/dpdk-2.2.0/x86_64-native-linuxapp-gcc/build/lib/librte_eal/linuxapp/igb_uio] Error 2 make[8]: *** [sub-make] Error 2 make[7]: *** [igb_uio.ko] Error 2 make[6]: *** [igb_uio] Error 2 make[5]: *** [linuxapp] Error 2 make[4]: *** [librte_eal] Error 2 make[3]: *** [lib] Error 2 make[2]: *** [all] Error 2 make[1]: *** [pre_install] Error 2 make: *** [install] Error 2

    In /home/gj/mtcp/dpdk-2.1.0/lib/librte_eal/linuxapp/igb_uio/compat.h, I added

    #ifndef COMPAT_H
    #define COMPAT_H
    #endif

    and that solved the redefinition problem, but the other errors remain.

    However, when we turned to the original dpdk version, dpdk-2.1.0-rc4, it succeeded without any error. What's wrong? I thought error goes from the dpdk in mtcp. Who can help me? Thank you!

    opened by Fierralin 11
  • No route to 10.10.10.222 from epwget

    Hi,

    I am getting "No route" to the epserver in the epwget client. I am using the route.conf and arp.conf below. Please let me know if I have done anything wrong.

    route.conf

    ROUTES 1
    10.10.10.222/32 port0

    arp.conf

    ARP_ENTRY 1
    10.10.10.222/32 00:0c:29:74:12:9a

    This is the error in epwget client

    [mtcp_create_context:1352] CPU 0 is in charge of printing stats. [GetOutputInterface: 28] [WARNING] No route to 10.10.10.222 CPU 1: initialization finished. [GetOutputInterface: 28] [WARNING] No route to 10.10.10.222 Thread 1 handles 5000 flows. connecting to 10.10.10.222:80 [GetOutputInterface: 28] [WARNING] No route to 10.10.10.222

    Thanks, Mohan

    opened by kcmohanprasad 11
  • A bug by static variables in RunMainLoop()

    Dear contributors,

    RunMainLoop() in mtcp/src/core.c:761 is supposed to be executed per core, but it has static variables, "static uint16_t len" and "static uint8_t *pktbuf", at lines 787 and 788. It is therefore not thread-safe and can cause unexpected problems. I think just removing the keyword "static" will be okay. Please consider a change to fix the problem. Thank you.

    opened by taehyunkim1527 0
  • Macronix MX98715 packet driver returns always MAC address 00:00:00:00:00:00 ?

    I know problems with the packet driver are not primarily a problem of mTCP, but I need to set the MAC address manually, and I have no source for the packet driver (maybe I can disassemble it with "Sourcer", but then I would still have no clue about it). Is there a way to set the MAC address manually (e.g. in the mtcpcfg file)? These MX98715 PCI cards (3 of them) are integrated into the (Watchguard FB1000) mainboard, and I have no chance to replace them. The original Linux kernel (2.0.3.3) that ran on the Firebox does support the MX98715 cards, but I replaced the internal flash memory content with DOS. I am able to transfer files to and from the Watchguard with MSKERMIT and the help of the serial port, but I do not have enough free space to install a whole development environment, so if there is a chance to modify the source code of mtcp for the purpose described above, I have to do it elsewhere. I've also found issue entry #51 with a similar problem, but I am not sure whether it can be solved in exactly the same manner.

    opened by NoNameNoHonor 0
  • mtcp make issue

    Hi experts,

    I hit an issue at the last step of the mTCP build. When I run make for mTCP after configuring successfully, I get the error " /usr/include/x86_64-linux-gnu/bits/string_fortified.h:106:10: error: ‘__builtin_strncpy’ output may be truncated copying 1023 bytes from a string of length 1023 [-Werror=stringop-truncation] 106 | return __builtin___strncpy_chk (__dest, __src, __len, __bos (__dest)); "

    I tried updating my gcc to 10.3.0 and still get the same error. Has anyone seen the same issue before?

    My environment is: OS: Ubuntu 20.04.4 LTS, kernel: 5.15.0-43-generic, compiler: gcc 10.3.0, dpdk: default 18.05.

    opened by jimmylikes 2
  • DHCP lease time bug

    Expected behavior:

    1. Run dhcp.exe
    2. Get valid configuration from server
    3. Use any program from the mtcp package until the lease time runs out.
    4. Optionally get a warning that the lease time is running out, but continue to run until then.

    Current behavior:

    1. Run dhcp.exe
    2. Get valid configuration from server
    3. Use any program until the lease time is less than 1 hour; then mtcp refuses to run.

    The default lease time of some DHCP servers (e.g. dnsmasq) is one hour, which might cause some confusion.

    opened by harvald 0
  • How to send requests to epserver

    I am currently running mTCP with netmap. My epserver seems to start fine, but I am a bit confused about how to query it from a client. I am planning to use wrk as an HTTP request generator from another node on the same network, but I'm not sure what IP address or port I have to address it to (especially since my NIC loses its IP address once I insert the custom network driver). Any help would be much appreciated.

    Regards, Suhas

    opened by sulaimansuhas 0