Check

A unit testing framework for C
About

Check is a unit testing framework for C. It features a simple interface for defining unit tests, putting little in the way of the developer. Tests are run in a separate address space, so Check can catch both assertion failures and code errors that cause segmentation faults or other signals. The output from unit tests can be used within source code editors and IDEs.

See https://libcheck.github.io/check for more information, including a tutorial. The tutorial is also available locally by running info check.

Installing

Check has the following dependencies:

  • automake-1.9.6 (1.11.3 on OS X if you are using /usr/bin/ar)
  • autoconf-2.59
  • libtool-1.5.22
  • pkg-config-0.20
  • texinfo-4.7 (for documentation)
  • tetex-bin (or any texinfo-compatible TeX installation, for documentation)
  • POSIX sed

The versions specified may be higher than those actually needed.

autoconf

$ autoreconf --install
$ ./configure
$ make
$ make check
$ make install
$ sudo ldconfig

Run these commands in this directory to set everything up. autoreconf calls all of the necessary tools for you, like autoconf, automake, autoheader, etc. If you ever change something during development, run autoreconf again (without --install), and it will perform the minimum set of actions necessary. Check is installed to /usr/local/lib by default. ldconfig rebuilds the linker cache so that the newly installed library files are included in it.

cmake

$ mkdir build
$ cd build
$ cmake ../
$ make
$ CTEST_OUTPUT_ON_FAILURE=1 make test

Linking

Check uses variadic macros in check.h, and gcc's strict C90 options will complain about this. In gcc 4.0 and above you can turn the warning off explicitly with -Wno-variadic-macros. In a future API it would be nice to eliminate these macros.

Packaging

Check is available packaged for many operating systems.

Comments
  • RFC: Support arbitrary tagging and selection of testcases.


    Please consider these diffs, which add the ability to selectively run tests based on arbitrary sets of tags.

    The requirement is to provide a mechanism that allows us to exclude test cases that take a long time to run from our normal unit test runs, which we want to keep fast. We could of course use the existing mechanism for selecting by suite, but currently we use suites to subdivide tests by the functional area being tested, so that a user can choose to run just the suite for one area.

    In effect we want two orthogonal ways of selecting tests - by speed and by functional area.

    We could subdivide the suites into fast and slow suites for each functional area, but that would force the user (or some external test definition) to list all the suites according to the speed or the functional area they want to test. It's also possible to envisage further orthogonal criteria in the future, which would mean yet more lists to maintain.

    The alternative that we propose here is to allow an optional list of arbitrary tags (strings) to be provided when a test case is added to a suite, and then to allow the srunner to take include and exclude lists of tags with which to filter test cases. (We would then register some test cases with a "SLOW_RUNNING" tag and run all tests excluding this tag in our normal UT.)

    opened by cdentyou 29
  • ck_assert_str_*: fix segmentation fault while comparing to NULL


    Often a string needs to be compared to NULL in tests, and in some cases the compared string itself could be NULL, so this must not lead to a segmentation fault. Comparing NULL to NULL with the *_eq macro must be true; in any other situation a comparison to NULL must be false. The *_le and *_ge macros must return false because NULL can be neither less than nor greater than a string, so returning true from *_le or *_ge when both arguments are NULL makes no sense.

    opened by 9fcc 23
  • "No messaging setup" in START_TEST

    Continuing http://sourceforge.net/p/check/bugs/111/: Running make check in an autotools-based project causes all tests to fail, with check_msg.c:80: No messaging setup logged to test-suite.log. I found http://sourceforge.net/p/check/bugs/74/, which doesn't apply because I don't use Windows, and http://sourceforge.net/p/check/mailman/message/2750522/, which doesn't contain any replies.

    A minimal working example is provided at https://github.com/krichter722/check-no_messaging_setup-issue, based on check's check_money example. I'm also providing a Travis build result at https://travis-ci.org/krichter722/check-no_messaging_setup-issue/builds/107650108 which reproduces the issue!

    I didn't manage to upload the project as a zip because Travis didn't let me. I think it would be good to archive it here, and it helps me keep my repository list clean in the future.

    opened by krichter722 23
  • something is garbling the output in the log files


    EDIT: jump to reproduction steps using check_money example!

    Note: to save your time, ignore all other comments (of mine) before that one :)


    testing latest git master local/check 0.12.0.r36.gd6c1ffe-1

    $ cat do_cd_command.log 
    Running suite /src/filemanager
    do_c100%: Checks: 4, Failures: 0, Errors: 0
    ome:0: Passed
    do_cd_command.c:147:P:Core:test_empty_mean_home:1: Passed
    do_cd_command.c:147:P:Core:test_empty_mean_home:2: Passed
    do_cd_command.c:147:P:Core:test_empty_mean_home:3: Passed
    Results for all suites run:
    100%: Checks: 4, Failures: 0, Errors: 0
    PASS do_cd_command (exit status: 0)
    

    or

    $ cat compare_directories.log 
    Running suite /src/filemanager
    comp33%: Checks: 6, Failures: 4, Errors: 0
    compare_directories.c:183:F:Core:test2:0: Assertion 'right_panel->marked == (int) 1' failed: right_panel->marked == 0, (int) 1 == 1
    compare_directories.c:202:F:Core:test3:0: Assertion 'left_panel->marked == (int) 1' failed: left_panel->marked == 0, (int) 1 == 1
    compare_directories.c:291:F:Core:loopytest:1: FAILed for _i=1 as follows:
    left  panel:somefile.txt size:0 mtime:1557859191
    right panel:somefile.txt size:0 mtime:1557859190
    result  : left: unmarked, right: unmarked
    expected: left: unmarked, right: unmarked
    ------
    
    compare_directories.c:291:F:Core:loopytest:2: FAILed for _i=2 as follows:
    left  panel:somefile.txt size:1 mtime:1557859190
    right panel:somefile.txt size:0 mtime:1557859190
    result  : left:   marked, right:   marked
    expected: left: unmarked, right: unmarked
    ------
    
      marked
    expected: left: unmarked, right: unmarked
    ------
    
    Results for all suites run:
    33%: Checks: 6, Failures: 4, Errors: 0
    FAIL compare_directories (exit status: 1)
    

    so this part is from just a few lines before, repeated:

      marked
    expected: left: unmarked, right: unmarked
    ------
    

    or,

    $ cat examine_cd.log 
    Running suite /src/filemanager
    exam100%: Checks: 1, Failures: 0, Errors: 0
    assed
    Results for all suites run:
    100%: Checks: 1, Failures: 0, Errors: 0
    PASS examine_cd (exit status: 0)
    
    opened by ghost 17
  • Fix START_TEST to look like valid C code.


    Instead of exporting the defined name as a bare function, export a struct that has a pointer to the function, but also its name, file and line number where it is defined.

    Store that information into a new struct TTest.

    After this commit, START_TEST(testname) will create three definitions:

    • testname_fn: The actual function;
    • testname_ttest: A struct TTest with the information about it;
    • testname: A pointer to testname_ttest.

    Functions tcase_add_test() and friends are updated to take a TTest * argument rather than a TFun and separate name. The runners are updated to find that information inside the linked tc->ttest. The call to tcase_fn_start() is moved from the defined functions to the runners (both the "fork" and the "nofork" one) which call it just before invoking the test function.

    A nice side-effect is that END_TEST is now optional, though the empty #define is kept for backwards compatibility.

    Tested:

    • make check still passes.
    • Removing END_TEST from test cases still produces valid code that builds and passes tests.

    This PR should address this TODO item.

    NOTE: I'm not 100% sure I like the name TTest but I wonder what would be a better name for that item... I thought of TFunc but that's too similar to TFun. What would you call that? A test fixture? A test function? A test method? Clarifying the terminology would be helpful here...

    Cheers! Filipe

    opened by filbranden 16
  • make check fails tests on Alpine Linux


    Building on Alpine Linux (musl libc/busybox based distro) is successful but make check fails:

    http://tpaste.us/JMml

    P.S. I had to add diffutils, as busybox diff seems not to be supported.

    opened by clandmeter 15
  • a test that's supposed to fail now passes with latest git master - because no checks are run


    I was running a test using Arch Linux's version of check (built in 2017) which yielded this (correct) output:

    $ ./go
    make  do_cd_command examine_cd exec_get_export_variables_ext filegui_is_wildcarded get_random_hint compare_directories
    make[1]: Entering directory '/home/user/build/1packages/4used/mc-git/makepkg_pacman/mc/src/mc/tests/src/filemanager'
    make[1]: 'do_cd_command' is up to date.
    make[1]: 'examine_cd' is up to date.
    make[1]: 'exec_get_export_variables_ext' is up to date.
    make[1]: 'filegui_is_wildcarded' is up to date.
    make[1]: 'get_random_hint' is up to date.
      CCLD     compare_directories
    make[1]: Leaving directory '/home/user/build/1packages/4used/mc-git/makepkg_pacman/mc/src/mc/tests/src/filemanager'
    make  check-TESTS
    make[1]: Entering directory '/home/user/build/1packages/4used/mc-git/makepkg_pacman/mc/src/mc/tests/src/filemanager'
    make[2]: Entering directory '/home/user/build/1packages/4used/mc-git/makepkg_pacman/mc/src/mc/tests/src/filemanager'
    PASS: examine_cd
    PASS: exec_get_export_variables_ext
    PASS: do_cd_command
    FAIL: compare_directories
    PASS: filegui_is_wildcarded
    PASS: get_random_hint
    ============================================================================
    Testsuite summary for /src/filemanager
    ============================================================================
    # TOTAL: 6
    # PASS:  5
    # SKIP:  0
    # XFAIL: 0
    # FAIL:  1
    # XPASS: 0
    # ERROR: 0
    ============================================================================
    See tests/src/filemanager/test-suite.log
    ============================================================================
    make[2]: *** [Makefile:912: test-suite.log] Error 1
    make[2]: Leaving directory '/home/user/build/1packages/4used/mc-git/makepkg_pacman/mc/src/mc/tests/src/filemanager'
    make[1]: *** [Makefile:1020: check-TESTS] Error 2
    make[1]: Leaving directory '/home/user/build/1packages/4used/mc-git/makepkg_pacman/mc/src/mc/tests/src/filemanager'
    make: *** [Makefile:1129: check-am] Error 2
    Running suite /src/filemanager
    comp33%: Checks: 6, Failures: 4, Errors: 0
    compare_directories.c:183:F:Core:test2:0: Assertion 'right_panel->marked == (int) 1' failed: right_panel->marked == 0, (int) 1 == 1
    compare_directories.c:202:F:Core:test3:0: Assertion 'left_panel->marked == (int) 1' failed: left_panel->marked == 0, (int) 1 == 1
    compare_directories.c:291:F:Core:loopytest:1: FAILed for _i=1 as follows:
    left  panel:somefile.txt size:0 mtime:1557859191
    right panel:somefile.txt size:0 mtime:1557859190
    result  : left: unmarked, right: unmarked
    expected: left: unmarked, right: unmarked
    ------
    
    compare_directories.c:291:F:Core:loopytest:2: FAILed for _i=2 as follows:
    left  panel:somefile.txt size:1 mtime:1557859190
    right panel:somefile.txt size:0 mtime:1557859190
    result  : left:   marked, right:   marked
    expected: left: unmarked, right: unmarked
    ------
    
      marked
    expected: left: unmarked, right: unmarked
    ------
    
    Results for all suites run:
    33%: Checks: 6, Failures: 4, Errors: 0
    FAIL compare_directories (exit status: 1)
    
    

    (see that duplicated part? but that's another issue, ignore that for now:

      marked
    expected: left: unmarked, right: unmarked
    ------
    

    ) then I upgraded to the latest git built from the master branch (0.12.0.r36.gd6c1ffe-1), in hopes that the issue with duplicated parts of the output got fixed, and now the test passed (and it shouldn't have!):

    $ ./go
    make  do_cd_command examine_cd exec_get_export_variables_ext filegui_is_wildcarded get_random_hint compare_directories
    make[1]: Entering directory '/home/user/build/1packages/4used/mc-git/makepkg_pacman/mc/src/mc/tests/src/filemanager'
    make[1]: 'do_cd_command' is up to date.
    make[1]: 'examine_cd' is up to date.
    make[1]: 'exec_get_export_variables_ext' is up to date.
    make[1]: 'filegui_is_wildcarded' is up to date.
    make[1]: 'get_random_hint' is up to date.
      CCLD     compare_directories
    make[1]: Leaving directory '/home/user/build/1packages/4used/mc-git/makepkg_pacman/mc/src/mc/tests/src/filemanager'
    make  check-TESTS
    make[1]: Entering directory '/home/user/build/1packages/4used/mc-git/makepkg_pacman/mc/src/mc/tests/src/filemanager'
    make[2]: Entering directory '/home/user/build/1packages/4used/mc-git/makepkg_pacman/mc/src/mc/tests/src/filemanager'
    PASS: examine_cd
    PASS: do_cd_command
    PASS: exec_get_export_variables_ext
    PASS: filegui_is_wildcarded
    PASS: get_random_hint
    PASS: compare_directories
    ============================================================================
    Testsuite summary for /src/filemanager
    ============================================================================
    # TOTAL: 6
    # PASS:  6
    # SKIP:  0
    # XFAIL: 0
    # FAIL:  0
    # XPASS: 0
    # ERROR: 0
    ============================================================================
    make[2]: Leaving directory '/home/user/build/1packages/4used/mc-git/makepkg_pacman/mc/src/mc/tests/src/filemanager'
    make[1]: Leaving directory '/home/user/build/1packages/4used/mc-git/makepkg_pacman/mc/src/mc/tests/src/filemanager'
    
    

    ./go is:

    rm compare_directories{,.log} ; make check || cat compare_directories.log                                                                                        
    

    the test itself is a lil crazy and work in progress, I'm not sure I should paste it here hmm...

    to be applied on latest mc master:

    diff --git a/tests/src/filemanager/Makefile.am b/tests/src/filemanager/Makefile.am
    index 19a9ba955..eb328bd4d 100644
    --- a/tests/src/filemanager/Makefile.am
    +++ b/tests/src/filemanager/Makefile.am
    @@ -31,7 +31,8 @@ TESTS = \
     	examine_cd \
     	exec_get_export_variables_ext \
     	filegui_is_wildcarded \
    -	get_random_hint
    +	get_random_hint \
    +	compare_directories
     
     check_PROGRAMS = $(TESTS)
     
    @@ -49,3 +50,7 @@ get_random_hint_SOURCES = \
     
     filegui_is_wildcarded_SOURCES = \
     	filegui_is_wildcarded.c
    +
    +compare_directories_SOURCES = \
    +	compare_directories.c
    +
    
    /*
       src/filemanager - tests for comparing directories (ie. F9-c-c)
    
       Copyright (C) 2011-2019
       Free Software Foundation, Inc.
    
       Written by:
       Slava Zanko <[email protected]>, 2011, 2013
    
       This file is part of the Midnight Commander.
    
       The Midnight Commander is free software: you can redistribute it
       and/or modify it under the terms of the GNU General Public License as
       published by the Free Software Foundation, either version 3 of the License,
       or (at your option) any later version.
    
       The Midnight Commander is distributed in the hope that it will be useful,
       but WITHOUT ANY WARRANTY; without even the implied warranty of
       MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
       GNU General Public License for more details.
    
       You should have received a copy of the GNU General Public License
       along with this program.  If not, see <http://www.gnu.org/licenses/>.
     */
    
    #define TEST_SUITE_NAME "/src/filemanager"
    
    #include "tests/mctest.h"
    
    #include "src/vfs/local/local.c"
    
    #include "src/filemanager/midnight.c"
    
    //#include "src/filemanager/ext.c"
    
    #include "src/filemanager/cmd.c"
    
    #ifndef MC_CONFIGURE_ARGS
    #error "config.h not included? should not happen"
    #endif
    
    
    //src: https://stackoverflow.com/questions/6943862/is-there-a-a-define-for-64-bit-in-gcc/6943917#6943917
    #include <limits.h>
    #if ( __WORDSIZE == 64 )
    #define BUILD_64   1
    #endif
    
    #ifdef BUILD_64 //src: https://stackoverflow.com/questions/2467418/portable-way-to-deal-with-64-32-bit-time-t/2467501#2467501
      #define TT_MOD "ll"
    #else
      #define TT_MOD ""
    #endif
    
    #define showstate(state, panel) \
      (((state) & (panel)) == (panel) ? "  marked" : "unmarked")
    /* --------------------------------------------------------------------------------------------- */
    /* mocked functions */
    
    
    /* --------------------------------------------------------------------------------------------- */
    #define ERRORSTRCAP 4096
    char * fail_msg=NULL; // when unset, it uses default assertion msg!
    
    //TODO: make it a function with va_args? too lazy. (because ",##__VA_ARGS__" is gcc extension aka non-standard so that it eats up the "," when no args!)
    #define setfailmsg(fmt, ...) \
      do { \
        if (NULL == fail_msg) { /* shouldn't be needed here, but just in case it's forgotten in setup() - ok change of plans! */ \
          fail_msg=calloc(1,ERRORSTRCAP); \
        } \
        int size=snprintf(fail_msg, ERRORSTRCAP, fmt, ## __VA_ARGS__ ); \
        if ((size >= ERRORSTRCAP) && (NULL != fail_msg)) { \
          fail_msg[ERRORSTRCAP-1]='\0'; \
        } else if ((size<=0)||(NULL == fail_msg)) { \
          fail_msg="Couldn't create fail msg!"; \
        } \
      } while(0)
    
    static void
    setup (void)
    {
        str_init_strings (NULL);
    
        vfs_init ();
        init_localfs ();
        vfs_setup_work_dir ();
    
        mc_global.mc_run_mode = MC_RUN_FULL;
        left_panel = g_new0 (WPanel, 1);
        left_panel->cwd_vpath = vfs_path_from_str ("/leftdir");
        left_panel->dir.size = DIR_LIST_MIN_SIZE; // 128
        left_panel->dir.list = g_new0 (file_entry_t, left_panel->dir.size);
        left_panel->dir.len = 0;
        //left_panel->type=view_listing; //probably not needed
        //left_panel->active=1;
        //current_panel=left_panel; // probably not needed for anything!
    
        right_panel = g_new0 (WPanel, 1);
        right_panel->cwd_vpath = vfs_path_from_str ("/rightdir");
        right_panel->dir.size = DIR_LIST_MIN_SIZE;
        right_panel->dir.list = g_new0 (file_entry_t, right_panel->dir.size);
        right_panel->dir.len = 0;
        //right_panel->type=view_listing; //probably not needed
        //fail_msg=malloc(ERRORSTRCAP);
        setfailmsg("No fail message was set!");
    }
    
    static void
    teardown (void)
    {
      free(fail_msg); fail_msg=NULL;
        vfs_shut ();
        str_uninit_strings ();
    }
    
    /* --------------------------------------------------------------------------------------------- */
    
    /*
    Rules: file/symlink names must be the same in both pannels, sizes can differ, mtimes can differ, file type can differ(normal or symlink, or etc)
    
       file 0b
    syml >=1b
    
     */
    void compare(enum CompareMode mode) {
      compare_dir(left_panel, right_panel, mode);
      compare_dir(right_panel, left_panel, mode);
    }
    
    //make only one file (with name 'fn') in one of the panels('whichpane'=left or right) with 'size' bytes and 'mtime'
    #define SPAWNFILE(whichpane, fn, size, mtime) \
      do { \
        whichpane##_panel->dir.len=1; \
        whichpane##_panel->dir.list[whichpane##_panel->dir.len - 1].fname = (char *) fn; \
        whichpane##_panel->dir.list[whichpane##_panel->dir.len - 1].st.st_size = size; \
        whichpane##_panel->dir.list[whichpane##_panel->dir.len - 1].st.st_mtime = mtime; \
      } while (0)
    
    // the only file in one of the panels('whichpane'=left or right) is either marked or unmarked ('yesno'=1 or 0, TRUE or FALSE)
    #define ASSERT_MARKED(whichpane, yesno) \
      mctest_assert_int_eq(whichpane##_panel->marked, (int) yesno); //ie. 1=marked, 0=unmarked
    
    //no files are marked(aka selected ie. with yellow color) in either of the panels !
    #define ASSERT_NONE_MARKED() \
      do { \
        ASSERT_MARKED(left,0); \
        ASSERT_MARKED(right,0); \
      } while (0)
    
    /* *INDENT-OFF* */
    START_TEST (test1)
    /* *INDENT-ON* */
    {
        /* given */
        SPAWNFILE(left, "somefile2.txt", 0, 1557859190);
        SPAWNFILE(right, "somefile2.txt", 0, 1557859190);
    
        /* when */
        // 0=_("&Quick"), 1=_("&Size only"), 2=_("&Thorough") from function compare_dirs_cmd() in file src/filemanager/cmd.c
        compare(compare_thourough);
    
        /* then */
        ASSERT_NONE_MARKED();
    }
    /* *INDENT-OFF* */
    END_TEST
    /* *INDENT-ON* */
    
    /* *INDENT-OFF* */
    START_TEST (test2)
    /* *INDENT-ON* */
    {
        /* given */
        SPAWNFILE(left, "somefile2.txt", 0, 1557859190);
        SPAWNFILE(right, "somefile2.txt", 0, 1557859191);
    
        /* when */
        // 0=_("&Quick"), 1=_("&Size only"), 2=_("&Thorough") from function compare_dirs_cmd() in file src/filemanager/cmd.c
        compare(compare_thourough);
    
        /* then */
        ASSERT_MARKED(left,0);
        ASSERT_MARKED(right,1);
    }
    /* *INDENT-OFF* */
    END_TEST
    /* *INDENT-ON* */
    
    /* *INDENT-OFF* */
    START_TEST (test3)
    /* *INDENT-ON* */
    {
        /* given */
        SPAWNFILE(left, "somefile2.txt", 0, 1557859191);
        SPAWNFILE(right, "somefile2.txt", 0, 1557859190);
    
        /* when */
        // 0=_("&Quick"), 1=_("&Size only"), 2=_("&Thorough") from function compare_dirs_cmd() in file src/filemanager/cmd.c
        compare(compare_thourough);
    
        /* then */
        ASSERT_MARKED(left,1);
        ASSERT_MARKED(right,0);
    }
    /* *INDENT-OFF* */
    END_TEST
    /* *INDENT-ON* */
    
    struct one_panelfile //ie. one file on any one/single panel, what does this file consist of? metadata-wise
    {
      const char *filename;//seen in panel
      const char *symlinkpointsto;//full path(rel/abs) of where the symlink points to, or if not a symlink then NULL! so this is used to know if it's a file or a symlink!
      off_t     st_size; // size of file (or size of symlink, if symlink; ie. not size of the file that the symlink points to!)
      //mode_t    st_mode;        /* File type and mode */
      time_t mtime; // st_mtim.tv_sec - aka the modification time eg. 1557859190 (run: `date -d @1557859190`) - note: can't use st_mtime here (it must be #define-d somewhere!)
      //mark_state selected_which;
    };
    
    typedef enum {
      NONE_PANEL=0, //no files are selected aka marked, in both panels
      RITE_PANEL=1, // the file on the right panel is selected (yes, there can be only one file on the right panel, such are these tests composed)
      LEFT_PANEL=2, // the file on the left panel is selected
      BOTH_PANEL=3 // the file on both panels is selected
    } file_marked_on; //made to be bit OR-ed
    //typedef unsigned int mark_state; // 0=none
    typedef uint8_t mark_state; // 0=none
    
    static struct one_panelfile const panel_possibilities[] =
    {
      {"somefile.txt", NULL, 0, 1557859190},
      {"somefile.txt", NULL, 0, 1557859191},
      {"somefile.txt", NULL, 1, 1557859190},
    };
    
    //#define MAX_CRAZIES 12
    #define MAX_CRAZIES (sizeof(panel_possibilities) / sizeof(struct one_panelfile))
    
    static mark_state const matrix[MAX_CRAZIES][MAX_CRAZIES]= { // matrix[left_panel][right_panel] where each panel goes through panel_possibilities ie. in a double for, the results of their marking is this matrix as: which panel(left, right, both or none) got their file(since there's only one file per panel always, in these tests) marked aka selected(with yellow color)
      {NONE_PANEL, RITE_PANEL},
      {LEFT_PANEL, NONE_PANEL},
    };
    
    static char* const matrix_descriptions[MAX_CRAZIES][MAX_CRAZIES]= {
      {"same filename,0 bytes size,mtime", "same filename, 0 bytes size, more recent mtime on right panel"},
      {"same filename,0 bytes size, more recent mtime on left panel", "same filename, 0bytes size, mtime"},
    };
    
    /* *INDENT-OFF* */
    START_TEST (loopytest)
      /* *INDENT-ON* */
    {
      // pre-check:
      ck_assert_int_eq(left_panel->marked, 0);
      ck_assert_int_eq(right_panel->marked, 0);
      ck_assert_int_ge(_i, 0);
      ck_assert_int_le(_i, MAX_CRAZIES);
      /* given */
      SPAWNFILE(left, panel_possibilities[_i].filename, panel_possibilities[_i].st_size, panel_possibilities[_i].mtime);
      SPAWNFILE(right, panel_possibilities[0].filename, panel_possibilities[0].st_size, panel_possibilities[0].mtime);
      /* when */
      compare(compare_thourough);
      /* then */
      //sanity/assumptions check:
      ck_assert_int_ge(left_panel->marked, 0);
      ck_assert_int_le(left_panel->marked, 1);
      ck_assert_int_ge(right_panel->marked, 0);
      ck_assert_int_le(right_panel->marked, 1);
    //  ck_assert_int_eq(MAX_CRAZIES, 2);//temp hardcoded, remove me TODO:
      if (1 == left_panel->marked) {
        ck_assert_int_eq(2, (left_panel->marked << 1) );
      }
      //
      const mark_state expected=matrix[_i][0];
      _ck_assert_int(expected, >=, 0);
      ck_assert_int_le(expected, BOTH_PANEL);
      const mark_state current=( (left_panel->marked << 1) | (right_panel->marked) );
      setfailmsg("FAILed for _i=%d as follows:\n\
    left  panel:%s size:%jd mtime:%08" TT_MOD "d\n\
    right panel:%s size:%jd mtime:%08" TT_MOD "d\n\
    result  : left: %s, right: %s\n\
    expected: left: %s, right: %s\n\
    ------\n", _i,
          panel_possibilities[_i].filename, panel_possibilities[_i].st_size, panel_possibilities[_i].mtime,
          panel_possibilities[0].filename, panel_possibilities[0].st_size, panel_possibilities[0].mtime,
          showstate(current, LEFT_PANEL),
          showstate(current, RITE_PANEL),
          showstate(expected, LEFT_PANEL),
          showstate(expected, RITE_PANEL)
          );
      //ck_assert_msg(current == expected, matrix_descriptions[_i][0]);
      ck_assert_msg(current == expected, fail_msg);
    }
    /* *INDENT-OFF* */
    END_TEST
    /* *INDENT-ON* */
    
    /* --------------------------------------------------------------------------------------------- */
    
    int
    main (void)
    {
        int number_failed;
    
        Suite *s = suite_create (TEST_SUITE_NAME);
        TCase *tc_core = tcase_create ("Core");
        SRunner *sr;
    
        tcase_add_unchecked_fixture (tc_core, setup, teardown);
    
        /* Add new tests here: *************** */
        tcase_add_test (tc_core, test1);
        tcase_add_test (tc_core, test2);
        tcase_add_test (tc_core, test3);
        tcase_add_loop_test(tc_core, loopytest, 0, MAX_CRAZIES);
        /* *********************************** */
    
        suite_add_tcase (s, tc_core);
        sr = srunner_create (s);
        srunner_set_log (sr, "compare_directories.log");
        srunner_run_all (sr, CK_ENV);
        number_failed = srunner_ntests_failed (sr);
        srunner_free (sr);
        return (number_failed == 0) ? EXIT_SUCCESS : EXIT_FAILURE;
    }
    
    /* --------------------------------------------------------------------------------------------- */
    

    it's my first try with check... so expect me to not know a lot! :D

    Either I'm doing something wrong (or not doing something), or there's a change or bug somewhere. Any ideas?

    opened by ghost 13
  • check build fails: ‘CK_SUBUNIT’ undeclared


    I've downloaded check-0.12.0 and tried to build the project in Debian testing (Buster), but the build process failed with the following error:

    ../../src/check_print.c: In function ‘srunner_fprint_summary’:
    ../../src/check_print.c:59:22: error: ‘CK_SUBUNIT’ undeclared (first use in this function); did you mean ‘CK_SILENT’?
         if(print_mode == CK_SUBUNIT)
                          ^~~~~~~~~~
                          CK_SILENT
    compilation terminated due to -Wfatal-errors.
    

    Apparently CK_SUBUNIT is only defined if subunit is enabled. However, I've tried building check by enabling subunit (configure --enable-subunit) and the build still fails.

    opened by ruimaciel 13
  • [windows] Implement pthread_mutex for Windows


    When compiling for Windows, pthread libraries may not be available. For Win32 applications in which an external implementation of pthread is not available, we build libcompat with the pthread_mutex.c module, which provides a simple implementation of pthread_mutex on top of the Win32 API.

    opened by walac 11
  • Floating point macros cause SIGSEGV on Windows Server 2012 R2 using MiNGW


    When attempting to set up Check to build and test on AppVeyor for MinGW and Cygwin, it was found that at least MinGW fails to run check_check_export: when run, it raises a SIGSEGV. Attaching a debugger, the failure was found here:

    #0 strlen
    #1 vsnprintf
    #2 _ck_assert_failed  check_check_sub.c:959
    #3 test_ck_assert_double_eq
    

    Namely, the vsnprintf(buf, BUFSIZ, msg, ap) call was leading to the failure, where the msg was Assertion ‘%s’ failed: %s == %.*lg, %s == %.*lg.

    Removing the double and ldouble tests (floating tests) allows check_check_export to pass.

    Note that the same tests pass on a Windows 7 64 bit machine but fail on AppVeyor. Notes about MinGW and the AppVeyor Windows VM:

    • MinGW compiles for 32 bit, not 64 bit.
    • The Windows VM runs Windows Server 2012 R2, 64 bit.
    opened by brarcher 11
  • added make target coverage


    which gives an overview of coverage after make check, or fails if a certain coverage threshold isn't reached.

    Summary of our conversation from https://github.com/krichter722/check-old/pull/1:

    @brarcher:

    • Adding these files to .gitignore might be better as a separate commit
    • It may be better to check if the gcovr tool is installed and at the expected version in the configure script. Further, it may be useful to configure the coverage target as a configure option.
    • Is there a way to perform this same check using gcov directly in a few lines, or is gcovr necessary? Do you happen to know if gcovr is currently being distributed in a package for any linux distro?
    • There is a configure option, --enable-gcov, which enables the necessary gcov flags. Perhaps the coverage target should depend on the check target.

    @krichter722:

    There is a configure option, --enable-gcov, which enables the necessary gcov flags.

    I don't understand what it does and it's not documented in configure --help.

    Perhaps the coverage target should depend on the check target.

    That alone doesn't do it since the test compilation needs to be done with CFLAGS='-g -O0 --coverage'.

    Is there a way to perform this same check using gcov directly in a few lines, or is gcovr necessary?

    As far as I understand (this is the first time I have used gcov and gcovr), one needs to visualize the results anyway. Furthermore, I added a customization of gcovr at https://github.com/krichter722/gcovr which allows gcovr to return a non-zero exit code if a certain coverage isn't met. I need to add a check that this customization is installed. Is that possible with gcov only?

    Adding these files to .gitignore might be better as a separate commit

    Done.

    opened by krichter722 10
  • Deeper integration in CLion for detailed information

    Summary: I am using CLion for the development of my C project, and I am using check as my unit testing framework of choice. Currently, when running my unit tests in CLion, I am just shown what would essentially also be printed to the console when running the test binary - under the Run tab:

    /home/user/Documents/project/build/tests
    Running suite(s): suite1
     suite2
     suite3
     suite4
    100%: Checks: 43, Failures: 0, Errors: 0
    
    Process finished with exit code 0
    

    However, I would like more insight into the unit tests that have been run, such as execution time and pass/fail status, directly in the test runner tab. So, something similar to this:

    [image: CLion test runner tab with per-test results]

    It might also be worth mentioning that I am using a GNU Makefile, not a CMake file. Is it possible to provide deeper integration with CLion?

    opened by dominikheinz 1
  • Add assert functions for wide character and string of wide characters

    General problem

    The messages are made in ASCII using printf or gnu_printf (in the case of #ifdef _WIN32). Consequently, printing a message for wide characters (especially for strings, since a single character could be printed another way) requires going deep into the library. To solve this, functions could be added for wchar_t and wchar_t* (using wprintf).

    Possible basic functions for lambda users

    • single character
      • ck_assert_wchar_eq
      • ck_assert_wchar_ne
      • ck_assert_wchar_lower (by using iswlower)
      • ck_assert_wchar_upper (by using iswupper)
      • ck_assert_wchar_ddigit (decimal)
      • ck_assert_wchar_xdigit (hexadecimal)
      • ck_assert_wchar_ascii
      • etc.
    • string
      • ck_assert_wstr_eq
      • ck_assert_wstr_ne
      • ck_assert_wstr_lower
      • ck_assert_wstr_upper
      • ck_assert_wstr_len
      • ck_assert_wstr_startswith
      • ck_assert_wstr_endswith
      • ck_assert_wstr_contains
      • etc.
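    A minimal sketch of what a wide-string equality assertion could look like, independent of Check's internals (my_assert_wstr_eq is a hypothetical name, not a proposed API):

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <wchar.h>

    /* Hypothetical ck_assert_wstr_eq-style macro: compare with wcscmp
       and report via fwprintf's %ls conversion on failure. */
    #define my_assert_wstr_eq(X, Y)                                           \
        do {                                                                  \
            const wchar_t *x_ = (X), *y_ = (Y);                               \
            if (wcscmp(x_, y_) != 0) {                                        \
                fwprintf(stderr, L"Assertion '%ls == %ls' failed\n", x_, y_); \
                exit(1);                                                      \
            }                                                                 \
        } while (0)

    int main(void)
    {
        my_assert_wstr_eq(L"hello", L"hello"); /* passes */
        puts("wide-string assertion passed");
        return 0;
    }
    ```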
    opened by kevin2kevin2 0
  • Add assert functions for ASCII character

    There could be functions for ASCII character (char):

    • ck_assert_char_eq
    • ck_assert_char_ne
    • ck_assert_char_lower (by using islower)
    • ck_assert_char_upper (by using isupper)
    • ck_assert_char_ddigit (decimal)
    • ck_assert_char_xdigit (by using isxdigit)
    • ck_assert_char_ascii (by using isascii)
    • etc. (see #include <ctype.h>)

    It is simple and could be useful.
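    As a sketch of the idea, a character-class assertion reduces to a <ctype.h> predicate plus a failure report (my_assert_char_xdigit is a hypothetical name, not a proposed API):

    ```c
    #include <ctype.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical ck_assert_char_xdigit-style macro built on isxdigit.
       The cast to unsigned char avoids undefined behavior for negative char. */
    #define my_assert_char_xdigit(C)                                          \
        do {                                                                  \
            unsigned char c_ = (unsigned char)(C);                            \
            if (!isxdigit(c_)) {                                              \
                fprintf(stderr, "Assertion '%c is xdigit' failed\n", c_);     \
                exit(1);                                                      \
            }                                                                 \
        } while (0)

    int main(void)
    {
        my_assert_char_xdigit('f');
        my_assert_char_xdigit('7');
        puts("char assertions passed");
        return 0;
    }
    ```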

    opened by kevin2kevin2 0
  • Check Installation on Windows 64 Bit

    Bug Report

    Check library version: 0.15.2
    Configuration:

    • Cygwin Setup version 2.919 (64-bit); link for setup executable: cygwin link
    • Compiler being used: GCC g++
    • Operating system: Windows 10 Pro, version 10.0.19044, build 19044

    Trying to follow the README.md instructions for installing and getting started with Check.

    Installed Check via the Cygwin Installer Setup Manager and, using Cygwin, verified that the following dependencies were installed. This is the terminal output for my dependencies:

    automake --version : automake (GNU automake) 1.16.5

    autoconf --version: autoconf (GNU Autoconf) 2.71

    libtool --version: libtool (GNU libtool) 2.4.7

    pkg-config --version: 1.8.0

    texinfo --version: command not found

    POSIX sed [sed --version]: sed (GNU sed) 4.8, packaged by Cygwin 4.8-1

    The commands followed were as set out in check-master/README.md:

    autoconf

    $ autoreconf --install
    $ ./configure
    $ make
    $ make check
    $ make install
    $ sudo ldconfig
    

    The issue occurred during the make check command. Output:

    ============================================================================
    Testsuite summary for Check 0.15.2
    ============================================================================
    # TOTAL: 9
    # PASS:  8
    # SKIP:  0
    # XFAIL: 0
    # FAIL:  1
    # XPASS: 0
    # ERROR: 0
    ============================================================================
    See tests/test-suite.log
    Please report to check-devel at lists dot sourceforge dot net
    ============================================================================

    test-suite.log file output: test-suite.log

    opened by Jorge-spicymexican 3
  • Enhancement: Adding Check test discovery for CMake

    Hi,

    Been using Check standalone with Makefiles but have recently switched to using CMake to build my C projects.

    I noticed there is no Check equivalent of Google Test's gtest_discover_tests or gtest_add_tests. These are useful for making CMake aware of the individual tests registered with the test runner; otherwise CMake considers a single test runner to be a single unit test. So no matter how many test cases and suites the runner contains, ctest reports only 1 test succeeding/failing.

    It's possible to run by suite with CK_RUN_SUITE, by case with CK_RUN_CASE or by using tags and CK_(INCLUDE|EXCLUDE)_TAGS, but these are environment variables so in the CMakeLists.txt, I would still have to do something like

    # let's say there are two test cases with a few tests each, case_1 and case_2, in one suite
    add_executable(test_runner [source files])
    # i chose to link dynamically against Check, i.e. checkDynamic.dll on Windows,
    # libcheck.so on *nix. dug through the check-targets.cmake and realized i could do
    # something like find_package(Check PATHS $ENV{CHECK_ROOT}), where CHECK_ROOT
    # can be set from env to locate the install path, so i could use the aliased targets.
    target_link_libraries(test_runner Check::checkShared [other libs])
    # register case_1, case_2 as individual "tests" with CMake
    add_test(
        NAME test_case_1
        COMMAND sh -c "CK_RUN_CASE=case_1 && $<TARGET_FILE:test_runner>"
    )
    add_test(
        NAME test_case_2
        COMMAND sh -c "CK_RUN_CASE=case_2 && $<TARGET_FILE:test_runner>"
    )
    

    Note that this would only work on a *nix-like platform since I used sh. I also can't just pass the line passed to sh directly since CMake then tries to look for that string as an executable name. The other option, which is wrapping this into a shell script, is of course still a manual approach. To get things to work on Windows I would have to do something like

    if(WIN32)
        add_test(
            NAME test_case_1
            COMMAND cmd /c "set CK_RUN_CASE=case_1 && $<SHELL_PATH:$<TARGET_FILE:test_runner>> && set CK_RUN_CASE="
        )
        # more Windows stuff
    else()
        # *nix stuff
    endif()
    

    I couldn't actually get this to work; cmd was complaining that the syntax was wrong (works fine when copied directly into the command line). Either way, both of these approaches are hacky af and Check test granularity is only down to the case level.

    gtest_discover_tests seems difficult to replicate given the current design of Check, since this just reads the output from using the --gtest_list_tests argument since Google Test runners can accept some CLI arguments. gtest_add_tests on the other hand scans source files for tests using a regex match, which although in the Google Test case has some shortcomings, might work just fine for Check test cases. After all, all the tests have the form START_TEST(test_name) { /* test body */ } END_TEST.

    Could this be an enhancement added to Check to better support its integration with CMake? Currently, besides the CMake fumbling, the only other hacky workaround I can think of would be to split tests into individual cases and suites with their own main by using a macro or something, and fixtures, setup functions, etc. would have to be shared through header files.

    However, maybe I'm missing something that Check already supports for CMake OOTB. If that's the case, please let me know.
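    The selection mechanism the thread relies on can be sketched in a few lines: Check's runner consults the CK_RUN_CASE environment variable and runs only the matching test case. The helper below is illustrative, not Check's internals.

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Sketch of CK_RUN_CASE-style selection: run a case only if the
       environment variable is unset or matches its name. */
    static void run_case(const char *name)
    {
        const char *want = getenv("CK_RUN_CASE");
        if (want == NULL || strcmp(want, name) == 0)
            printf("running %s\n", name);
        else
            printf("skipping %s\n", name);
    }

    int main(void)
    {
        run_case("case_1");
        run_case("case_2");
        return 0;
    }
    ```

    This is why each add_test invocation above must set the variable before launching the runner: the filtering happens inside the runner process, not in CMake.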

    opened by phetdam 0
  • Wrong binaries on Windows

    On Windows I get the following binaries after Build:

    /bin/manual-link/checkDynamic.dll
    /bin/manual-link/checkDynamic.pdb
    /debug/bin/checkDynamic.dll
    /debug/bin/checkDynamic.pdb
    /debug/lib/check.lib
    /debug/lib/checkDynamic.lib
    /debug/lib/manual-link/compat.lib
    /lib/check.lib
    /lib/checkDynamic.lib
    /lib/compat.lib
    

    Two questions: What is compat.lib for?

    Why checkDynamic.lib? This is installed even if I'm only building a static library. As I understand it, that import library is only used for linking against the DLL, so installing it in this case is most likely wrong.

    opened by Thomas1664 2
Releases(0.15.2)
  • 0.15.2(Aug 9, 2020)

  • 0.15.1(Jul 22, 2020)

    This release addresses a few bugs related to warnings from Check's macros.

    • Fix warning in ptr macros with pointer to integer cast Issue #284

    • Fix various warnings in Check's unit tests Issue #283

    • Replace gnu_printf with printf in format __attribute__ Issue #282

    • Fix warnings from Check's macros: "warning: too many arguments for format" Issue #274

    • Fix format specifiers that do not match the argument types Issue #271

    Downloads

    Source code(tar.gz)
    Source code(zip)
    check-0.15.1.tar.gz(756.00 KB)
  • 0.15.0(Jun 21, 2020)

    This release adds mutual exclusion support for Windows.

    • Define CK_ATTRIBUTE_FORMAT for GCC >= 2.95.3, to make use of ‘gnu_printf’ format attribute Issue #249

    • Refactor tests to fix signed - unsigned conversions Issue #249

    • Refactor some Check internals to use proper integer types Issue #250

    • Implement missing mutual exclusion for Windows hosts Issue #257

    Downloads

    Source code(tar.gz)
    Source code(zip)
    check-0.15.0.tar.gz(756.29 KB)
  • 0.14.0(Jan 26, 2020)

    This release adds support for CMake's FetchContent.

    Changes:

    • Add support for FetchContent in CMake Issue #238
    • Rename CMake project from 'check' to 'Check' Issue #232
    • Fix for checking for wrong tool when building docs in Autotools Issue #231
    • Fix compiler warning with printf format Issue #233

    Downloads

    Source code(tar.gz)
    Source code(zip)
    check-0.14.0.tar.gz(752.73 KB)
  • 0.13.0(Oct 22, 2019)

    This release improves CMake support, along with a few other minor improvements.

    Changes:

    • configure: optional build documentation Issue #206 (GitHub)

    • missing <unistd.h> in some files Issue #196 and Issue #186 (GitHub)

    • Various documentation improvements

    • END_TEST is now optional, as how START_TEST works has been redone Issue #158

    • Various CMake related changes:

      • Support exporting Check to other projects with CMake 3 Issue #185
      • Shared library support for Visual Studio Issue #220
      • Fix wrong library filename Issue #226
      • Add support for CMake package registry Issue #227
      • CMake build type can now be debug or release Issue #228
      • Add checkmk to CMake build.

    Downloads

    Source code(tar.gz)
    Source code(zip)
    check-0.13.0.tar.gz(752.95 KB)
  • 0.12.0(Oct 20, 2017)

    This release of Check adds or improves support for a few Windows compilers as well as adding a new API for configuring the maximum error message size.

    Changes:

    • Fix out-of-tree builds with CMake. Issue #86

    • Fix issue found with Clang regarding invalid suffix on a literal Issue #110

    • Check now responds more clearly to a few errors when it cannot run tests. PR #122, #123

    • Fix missing pid_t definition in check.h on Windows Issue #78

    • The maximum message size of check assertions is now configurable. Issue #127

    • Check support added for Visual Studio 2010, 2012, 2013, 2015, and 2017, both for x86/x64 and ARM. PR #129, Issue #125

    • Changed license of example CMake files to BSD (was previously LGPL). Issue #131

    • Fix issue with floating point macros on MinGW Issue #101

    Downloads

    Source code(tar.gz)
    Source code(zip)
    check-0.12.0.tar.gz(746.13 KB)
  • 0.11.0(Dec 17, 2016)

    This release of Check adds several new macros for comparing different types of data, as well as bug fixes and other improvements.

    Changes:

    • Avoid issue in unit test output checking where a shell's built-in printf command does not work properly, but the printf program itself is correct.
    • Emit only valid XML characters in XML logging (assumes ASCII encoding). Bug #103
    • Add LGPL header to files where it was missing; update FSF address in LGPL headers Bug #110
    • Strip timestamps from examples using filterdiff if available. This allows build output to be reproducible. Bug #112
    • Use double slash for regular expressions in checkmk for better Solaris support.
    • Improve CMake build files for better Visual Studio 2015 support. Pull Request #19
    • Fix potential SIGSEGV in Check related to the disk filling up during a test. Pull Request #21
    • Support added for applying tags to test cases and selectively running test cases based on tags. Pull Request #44
    • Macros for comparing memory regions (ck_assert_mem_eq, ck_assert_mem_ne) have been added. Pull Request #64
    • Macros for comparing floating point numbers have been added. Pull Request #69
    • Macros for comparing string, but allowing for NULL (ck_assert_pstr_eq, ck_assert_pstr_ne) have been added. Pull Request #80
    • Macros for checking if a pointer is NULL or not have been added. Pull Request #87
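    The NULL-tolerant string macros (ck_assert_pstr_eq, ck_assert_pstr_ne) boil down to a comparison that treats two NULLs as equal. A self-contained sketch of that semantics, with a hypothetical helper name rather than Check's actual implementation:

    ```c
    #include <stdio.h>
    #include <string.h>

    /* Sketch of NULL-tolerant comparison: pointers compare equal when
       both are NULL or both point at equal strings. */
    static int pstr_eq(const char *a, const char *b)
    {
        if (a == NULL || b == NULL)
            return a == b;
        return strcmp(a, b) == 0;
    }

    int main(void)
    {
        printf("%d %d %d\n",
               pstr_eq(NULL, NULL),   /* both NULL: equal */
               pstr_eq("x", NULL),    /* one NULL: not equal */
               pstr_eq("x", "x"));    /* equal strings */
        return 0;
    }
    ```

    Plain ck_assert_str_eq, by contrast, has undefined behavior when handed a NULL pointer, which is what motivated the pstr variants.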

    Downloads

    Source code(tar.gz)
    Source code(zip)
    check-0.11.0.tar.gz(735.87 KB)
  • 0.10.0(Dec 24, 2015)

    In addition to a few bug fixes and improvements, the handling of Check when compiled without fork() has changed slightly. Several API calls previously resulted in intentional errors when they required fork() to make sense; these calls are now ignored instead. This should help improve unit test interoperability between *nix and Windows.

    Changes:

    • CMake on MinGW and MSVC was unable to find time related types because time.h was not included. This header is now included for the checks.
    • If the test runner process catches a SIGTERM or SIGINT signal the running tests are now also killed.
    • If Check is compiled without support for fork(), the behavior of functions which require fork() to be useful has changed: functions that attempt to set CK_FORK mode are no-ops, check_fork() returns in failure, and check_waitpid_and_exit() exits in failure.
    • Add space around operators in assert messages for readability.
    • Use mkstemp() if available instead of tmpfile() or tempnam().
    • Fix issue with string formatting in ck_assert(), where using the % operator would be interpreted as a string formatter
    • In nofork mode, the location of a failed assertion within a test case was lost if that test case had a checked teardown fixture (even if that fixture function was empty). This is now fixed.
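    The ck_assert() formatting fix can be illustrated without Check: when an expression string containing % is passed straight to a printf-style formatter, the % is misread as a conversion specifier. The safe pattern passes the string as data via %s, which is the behavior the fix restores; this sketch is illustrative, not Check's code.

    ```c
    #include <stdio.h>

    int main(void)
    {
        const char *expr = "x % 2 == 0";
        /* Unsafe: printf(expr) would treat "% 2" as a conversion spec.
           Safe: pass the expression text as a %s argument instead. */
        printf("Assertion '%s' failed\n", expr);
        return 0;
    }
    ```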

    Downloads

    Source code(tar.gz)
    Source code(zip)
    check-0.10.0.tar.gz(751.09 KB)
Minimal unit testing framework for C

MinUnit Minunit is a minimal unit testing framework for C/C++ self-contained in a single header file. It provides a way to define and configure test s

David Siñuela Pastor 455 Dec 19, 2022
A lightweight unit testing framework for C++

Maintenance of UnitTest++, recently sporadic, is officially on hiatus until 26 November 2020. Subscribe to https://github.com/unittest-cpp/unittest-cp

UnitTest++ 510 Jan 1, 2023
🧪 single header unit testing framework for C and C++

?? utest.h A simple one header solution to unit testing for C/C++. Usage Just #include "utest.h" in your code! The current supported platforms are Lin

Neil Henning 560 Jan 1, 2023
UT: C++20 μ(micro)/Unit Testing Framework

"If you liked it then you "should have put a"_test on it", Beyonce rule [Boost::ext].UT / μt | Motivation | Quick Start | Overview | Tutorial | Exampl

boost::ext 950 Dec 29, 2022
Simple Unit Testing for C

Unity Test Copyright (c) 2007 - 2021 Unity Project by Mike Karlesky, Mark VanderVoord, and Greg Williams Welcome to the Unity Test Project, one of the

Throw The Switch 2.8k Jan 5, 2023
A modern, C++-native, header-only, test framework for unit-tests, TDD and BDD - using C++11, C++14, C++17 and later (or C++03 on the Catch1.x branch)

Catch2 v3 is being developed! You are on the devel branch, where the next major version, v3, of Catch2 is being developed. As it is a significant rewo

Catch Org 16k Jan 8, 2023
The fastest feature-rich C++11/14/17/20 single-header testing framework

master branch Windows All dev branch Windows All doctest is a new C++ testing framework but is by far the fastest both in compile times (by orders of

Viktor Kirilov 4.5k Jan 5, 2023
A testing micro framework for creating function test doubles

Fake Function Framework (fff) A Fake Function Framework for C Hello Fake World! Capturing Arguments Return Values Resetting a Fake Call History Defaul

Mike Long 551 Dec 29, 2022
Googletest - Google Testing and Mocking Framework

GoogleTest OSS Builds Status Announcements Release 1.10.x Release 1.10.x is now available. Coming Soon Post 1.10.x googletest will follow Abseil Live

Google 28.7k Jan 7, 2023
C++ Benchmark Authoring Library/Framework

Celero C++ Benchmarking Library Copyright 2017-2019 John Farrier Apache 2.0 License Community Support A Special Thanks to the following corporations f

John Farrier 728 Jan 6, 2023
A C++ micro-benchmarking framework

Nonius What is nonius? Nonius is an open-source framework for benchmarking small snippets of C++ code. It is very heavily inspired by Criterion, a sim

Nonius 339 Dec 19, 2022
test framework

Photesthesis This is a small, experimental parameterized-testing tool. It is intended to be used in concert with another unit-testing framework (eg. C

Graydon Hoare 11 Jun 2, 2021
A simple framework for compile-time benchmarks

Metabench A simple framework for compile-time microbenchmarks Overview Metabench is a single, self-contained CMake module making it easy to create com

Louis Dionne 162 Dec 10, 2022
The C Unit Testing Library on GitHub is a library designed for easy unit testing in C

The C Unit Testing Library on GitHub is a library designed for easy unit testing in C. It was written by Brennan Hurst for the purpose of providing a J-Unit-like testing framework within C for personal projects.

null 1 Oct 11, 2021
CppUTest unit testing and mocking framework for C/C++

CppUTest CppUTest unit testing and mocking framework for C/C++ More information on the project page Slack channel: Join if link not expired Getting St

CppUTest 1.1k Dec 26, 2022