An extremely fast FEC filing parser written in C


FastFEC

A C program to stream and parse FEC filings, writing output to CSV. This project is in early stages but works on a wide variety of filings and will benefit from additional rigorous testing.

Usage

Once you've downloaded the latest release or built a binary (see below), you can run it as follows:

Usage: fastfec [flags] <id, file, or url> [output directory=output] [override id]
  • [flags]: optional flags which must come before other args; see below
  • <id, file, or url> is either
    • a numeric ID, in which case the filing is streamed from the FEC website
    • a file, in which case the filing is read from disk at the specified local path
    • a url, in which case the filing is streamed from the specified remote URL
  • [output directory] is the folder in which CSV files will be written. By default, it is output/.
  • [override id] is an ID to use as the filing ID. If not specified, the ID is derived from the first argument by taking the numeric component found at the end of the path/URL.

The CLI will download the specified filing or read it from disk and then write output CSVs for each form type in the output directory. The paths of the output files are:

  • {output directory}/{filing id}/{form type}.csv
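
For example, parsing filing 878160 into the default output directory yields files such as output/878160/SA11D.csv and output/878160/F3S.csv.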

You can also pipe the output of another command into fastfec by following this usage:

[some command] | fastfec [flags] <id> [output directory=output]

Flags

The CLI supports the following flags:

  • --include-filing-id / -i: if this flag is passed, then the generated output will include a column at the beginning of every generated file called filing_id that contains the filing ID. This can be useful for bulk-uploading CSVs into a database.
  • --silent / -s : suppress all non-error output messages
  • --warn / -w : show warning messages (e.g. for rows with unexpected numbers of fields or field types that don't match exactly)

The short form of flags can be combined, e.g. -is would include filing IDs and suppress output.

Examples

fastfec -s 13360 fastfec_output/

  • This will run FastFEC in silent mode, download and parse filing ID 13360, and store the output in CSV files at fastfec_output/13360/.
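
You can also stream a filing from a URL or pipe it in from another command. The following commands are sketches of equivalent ways to parse filing 13360 silently, assuming curl is available and that the filing lives at the docquery URL used elsewhere in this README:

fastfec -s https://docquery.fec.gov/dcdev/posted/13360.fec fastfec_output/
curl https://docquery.fec.gov/dcdev/posted/13360.fec | fastfec -s 13360 fastfec_output/

Using the combined short flags described above, e.g. fastfec -is 13360 fastfec_output/, would additionally prepend a filing_id column to every output CSV.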

Local development

Build system

Zig is used to build and compile the project. Download and install the latest version of Zig (>=0.9.0) by following the instructions on the website (you can verify it's working by typing zig in the terminal and seeing the help output).

Dependencies

The following libraries are used:

  • curl (needed for the CLI, not the library)
  • pcre (only needed on Windows)

Installing these libraries varies by OS:

Mac OS X

Ensure Homebrew is installed and run the following brew command to install the libraries:

brew install pkg-config curl

Ubuntu

sudo apt install -y libcurl4-openssl-dev

Windows

Install vcpkg and run the following:

vcpkg integrate install
vcpkg install pcre curl --triplet x64-windows-static

Building

From the root directory of the repo, run:

zig build

On Windows, you may have to supply additional arguments to locate vcpkg dependencies and ensure the msvc toolchain is used:

zig build --search-prefix C:/vcpkg/packages/pcre_x64-windows-static --search-prefix C:/vcpkg/packages/curl_x64-windows-static --search-prefix C:/vcpkg/packages/zlib_x64-windows-static -Dtarget=x86_64-windows-msvc

The above commands will output a binary at zig-out/bin/fastfec and a shared library file in the zig-out/lib/ directory. If you want to only build the library, you can pass -Dlib-only=true as a build option following zig build.
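
For example, a library-only build looks like this:

zig build -Dlib-only=true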

Time benchmarks

Using the massive 1464847.fec (8.4 GB) on an M1 MacBook Air:

  • 1m 42s
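
That works out to roughly 8.4 GB / 102 s ≈ 80 MB/s of parsing throughput.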

Testing

Currently, there are only C tests for specific parsing/buffer/write functionality, but we hope to expand unit testing soon.

To run the current tests: zig build test

Scripts

python scripts/generate_mappings.py: A Python script to auto-generate C header files containing column header and type mappings

Comments
  • feat: remove non-wheel artifacts for PyPI publish

    Description

    The beta release failed because the GitHub Actions step that downloads artifacts and then uploads them to PyPI will upload ALL artifacts, even those from other jobs. We only want wheel files to trickle through. We won't know if this works until we merge to dev.

    opened by freedmand 6
  • fix: f99 text as a csv field and floats to two decimals

    Instead of populating a separate f99 text file, this change produces the f99 text in the F99 CSV as intended. It also incorporates changes to provide floats to two decimals of precision.

    opened by freedmand 5
  • Cross-compiled Python package/distribution

    Description

    This PR adds a script to cross-compile wheels for the FastFEC Python package, along with a GitHub Actions workflow to generate wheels for all relevant OSes. This approach subverts the typical setup.py script in favor of a make_wheels.py script, based on https://github.com/ziglang/zig-pypi/blob/main/make_wheels.py, that automatically constructs the wheels for each OS.

    Note that there are many commits in this PR. I tried to get it working with cibuildwheel first, which would be the de facto way to do cross-platform builds, but each build took over 30 minutes and there were OS-specific issues that were hard to debug. This approach mirrors the way the actual FastFEC package is built and should be just as stable.

    Also, if for whatever reason the wheel does not work, setup.py will automatically run as a backup when pip install fastfec is run in the future.

    To verify this PR works, I launched a Windows VM and confirmed that the Python library could be installed. I also launched AWS Linux x86_64 and aarch64 (Graviton) instances and confirmed both could install the Python wheel.

    Jira ticket

    https://arcpublishing.atlassian.net/browse/ELEX-141

    Test steps

    • Go here https://github.com/washingtonpost/FastFEC/actions/runs/1668364069
    • Click the artifact file to download it
    • Extract the zip contents of the artifact
    • Find the path to the .whl file inside that has your desired architecture
    • Run pip install {path to the desired whl file}
    • Run the attached test.py and it should print JSON-esque output and not error!

    test.py:

    from fastfec import FastFEC
    from io import BytesIO
    
    file = b""""HDR","FEC","5.1","Navision AVF","3.00","^","","000",""
    "F3XN","C00397000","South Dakota Women Vote!","1120 Connecticut Avenue NW","Ste 1100","Washington","DC","20036","","","TER","","","","20041001","20041122","61571.00","0.00","61571.00","61571.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","61571.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","61571.00","61571.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","2004","61775.00","61775.00","61775.00","0.00","61750.00","25.00","61775.00","0.00","0.00","61775.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","61775.00","61775.00","0.00","0.00","204.00","204.00","61571.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","61775.00","61775.00","61775.00","0.00","61775.00","204.00","0.00","204.00","Caroline C. Fines","20041201","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00","0.00"
    "SB22","C00397000","PTY","","207 East Capitol #103","PO Box 737","Pierre","SD","57501","","Transfer to Affiliate","","","20041019","61144.66","","","","","","","","","","","","","","","","SB22-2533","","","","","","","South Dakota Democratic Party ","","","","",""
    "SB22","C00397000","PAC","","805 15th St., NW #400","","Washington","DC","20005","","Transfer to Affiliate","","","20041019","426.34","","","","","","","","","","","","","","","","SB22-2532","","","","","","","EMILY's List Federal Operating ","","","","",""
    """
    
    with BytesIO(file) as f:
        with FastFEC() as fastfec:
            for form, line in fastfec.parse(f):
                print("GOT", form, line)
    
    :handshake: review in progress 
    opened by freedmand 4
  • feat: refactor python bindings, add line callback

    This is a significant refactor of the Python bindings to be more library-driven, with the purpose of pushing a Python package to the Python package index (PyPI) as soon as possible. In the process, FastFEC was modified to provide custom line callback functionality for the sake of providing a convenient Python API that mimics other popular/inspirational packages such as fecfile.

    The main file that drives this PR is python/fastfec.py. See this file for more detailed comments on how the API works. The top-level setup.py file is provided as a proof-of-concept that the Python package can be built automatically (not as a necessarily viable end result yet).

    Test Steps

    If you haven't already, ensure you can build the fastfec library by running zig build (the steps in the README outline how to do this, but tl;dr brew install zig --head if you don't yet have zig installed). This will ensure the latest shared library file is discoverable by Python (which will allow all the new changes to work).

    You can test the library by running a Python REPL (python) in the root directory of the repo. Also, make sure you have a .fec file in this directory for testing (in the following commands, it is assumed you have 13360.fec downloaded from https://docquery.fec.gov/dcdev/posted/13360.fec in this root directory — but you can sub in whatever .fec file you have handy or want to test).

    To test the line by line parsing, run:

    from python.fastfec import FastFEC
    
    with open('13360.fec', 'rb') as f:
        with FastFEC() as fastfec:
            for form, line in fastfec.parse(f):
                print("GOT", form, line)
    

    This should print line information for each line in the passed in .fec file.

    To test the file to output file parsing, run:

    from python.fastfec import FastFEC
    
    with open('13360.fec', 'rb') as f:
        with FastFEC() as fastfec:
            fastfec.parse_as_files(f,'python_output')
    

    This should return 1 to indicate success and output .csv files in the python_output directory corresponding to a successful parse.

    To test the file to output file custom parsing, run:

    import os
    from pathlib import Path
    from python.fastfec import FastFEC
    
    # Custom open method
    def open_output_file(filename, *args, **kwargs):
        filename = os.path.join('custom_python_output', filename)
        output_file = Path(filename)
        output_file.parent.mkdir(exist_ok=True, parents=True)
        return open(filename, *args, **kwargs)
    
    with open('13360.fec', 'rb') as f:
        with FastFEC() as fastfec:
            fastfec.parse_as_files_custom(f, open_output_file)
    

    This should return 1 to indicate success and output .csv files in the custom_python_output directory corresponding to a successful parse.

    :bow: changes requested 
    opened by freedmand 4
  • Remove external deps (Curl) and refactor release process

    Description

    Per https://github.com/washingtonpost/FastFEC/issues/33, this project is trying to do more than it needs to by bundling Curl. This PR undoes that dependency in order to achieve fully functional cross-compilation with Zig. It also fixes some broken aspects of the release process. Specifically, this PR does the following:

    • removes Curl from the dependencies
    • adds a --print-url flag that will output docquery URLs for a given filing ID (along with an example command involving curl); see the sketch after this list
    • refactors the CLI into a testable library
    • bundles PCRE as sources rather than specifying it as includes (which breaks on cross-platform compilation)
    • adds Windows compatibility to file path construction
    • README updates to reflect new usage / some clean-up
    • Refactors the release process to only need an Ubuntu runner (everything is cross-compilable now)
    • Refactors the release GitHub actions to be more reusable
    • Adds a workflow test that the generated mappings are up-to-date with the JSON mappings files
    • LICENSE update to reflect PCRE's BSD license
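
    A hypothetical invocation of the new flag, assuming it prints the docquery URL for the given filing ID (the exact output format isn't shown in this summary):

    fastfec --print-url 13360
    # expected to print something like https://docquery.fec.gov/dcdev/posted/13360.fec,
    # which can then be fetched with curl and piped back into fastfec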
    opened by freedmand 3
  • Python client `SEGFAULT`s instead of calling `CustomWriteFunction`/`CUSTOM_WRITE`s in `parse_as_files`/`parse_as_files_custom`

    It seems like calls to context->customWriteFunction are going amiss. I'm seeing SEGFAULTs without any evidence that the custom open function or write callback is ever called.

    I've tried to demonstrate that the custom function passed in is called via print statements followed by stdout flushes, and by breakpoints, but have seen no evidence that the FFI is behaving as one would hope. The problem also persists with calls to parse_as_files.

    To recreate, one can run:

    import smart_open
    from fastfec import *
    
    
    if __name__ == "__main__":
        headers = {'headers': {'User-Agent': 'Mozilla/5.0'}}
        with smart_open.open(f'http://docquery.fec.gov/dcdev/posted/1606847.fec', 'rb', transport_params=headers) as f:
            with FastFEC() as fastfec:
                fastfec.parse_as_files(f, "some_output_directory", include_filing_id='1606847')
    

    On at least revision 460d0c4, built and run on macOS.

    As I understand it, something seems to be going wrong somewhere in writer.c's call to the custom function handed in from the Python client.

    The issue presented here is the smallest bit I could get to fail easily without getting rid of, for example, the use of smart_open (in the event that that's causing problems), but I'd ideally be able to use fastfec.parse_as_files_custom in a more general case, with other file-like objects. This is simply the smallest failing case I could demonstrate.

    I'll keep looking at this as time and priority allow, but I figured a GH issue might be helpful in this instance.

    opened by james-clemer-actblue 3
  • Fix custom write function segfault

    This is just a PR-back of this PR in our fork

    Below is the original PR description reproduced for convenience.

    One caveat is, the git history here isn't as clean as might be hoped for in this repository. If there are any org-wide contribution guidelines or implicit norms that this PR breaks, let me know. I'm happy to edit the PR, or else reproduce it as a PR from a different fork &c.

    Addresses this issue.

    We call client.py via parse_as_files_custom.

    In turn, parse_as_files_custom takes an argument, open_function, and passes it in to utils.provide_write_callback, which handles writing to the various output files and uses open_function to create new file handles for them.

    utils.provide_write_callback uses a type-factory from ctypes and deems it the CUSTOM_WRITE type.

    CUSTOM_WRITE represents, on the C side of things, CustomWriteFunction.

    CUSTOM_WRITE = CFUNCTYPE(None, c_char_p, c_char_p, POINTER(c_char), c_int)
    
    ...
    
    typedef void (*CustomWriteFunction)(char *filename, char *extension, char *contents, int numBytes);
    

    But! Calling any python CustomWriteFunction induces a SEGFAULT!

    So, I wrote a CustomWriteFunction in C and passed that in. This, and some heavy printf-debugging, led me to notice that contents is not a NULL-terminated char*. That made me wonder if somewhere along the way there was machinery in ctypes that assumed it was, since NULL-terminated char*s are the de facto str type in C-land.

    It turns out, in the docs for ctypes.c_char_p, we're told that c_char_p:

    Represents the C char* datatype when it points to a zero-terminated string. For a general character pointer that may also point to binary data, POINTER(c_char) must be used. The constructor accepts an integer address, or a bytes object.

    Doing so means that calls to:

    #! python
    
    import smart_open
    from fastfec import *
    
    
    if __name__ == "__main__":
        headers = {'headers': {'User-Agent': 'Mozilla/5.0'}}
        with smart_open.open(f'http://docquery.fec.gov/dcdev/posted/1606847.fec', 'rb', transport_params=headers) as f:
            with FastFEC() as fastfec:
                fastfec.parse_as_files(f, "some_output_directory", include_filing_id='1606847')
    

    no longer SEGFAULT.
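
    For illustration, here is a minimal ctypes sketch of a callback that consumes the non-NUL-terminated contents buffer once the third argument is declared as POINTER(c_char); the callback body and names below are hypothetical, not this repository's actual utils code:

    from ctypes import CFUNCTYPE, POINTER, c_char, c_char_p, c_int

    # Same shape as the CUSTOM_WRITE type above:
    # (filename, extension, contents, numBytes) -> None
    CUSTOM_WRITE = CFUNCTYPE(None, c_char_p, c_char_p, POINTER(c_char), c_int)

    def write_callback(filename, extension, contents, num_bytes):
        # contents is a raw char pointer rather than a NUL-terminated string,
        # so slice out exactly num_bytes; with c_char_p, ctypes assumes a
        # zero-terminated string and would mis-read the unterminated buffer
        data = contents[:num_bytes]
        print(filename, extension, len(data))

    # Wrap the Python function so it can be handed to the C library
    callback = CUSTOM_WRITE(write_callback)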

    opened by james-clemer-actblue 2
  • Add support for version 8.4

    Description

    This PR brings the mappings.json file up to date with the current version in the fecfile python library. The most noteworthy change is supporting version 8.4 of the .fec file format. Here is the corresponding commit to the fech-source library. It appears as though the only differences between versions 8.3 and 8.4 are the addition of the lobbyist_registrant_pac_3 and lobbyist_registrant_pac_4 fields to the F1, coming right after leadership_pac.

    This PR also includes a fix to version 2 of Schedule A, added by this commit.

    opened by esonderegger 2
  • NE-1284: create python wrapper for FastFEC

    Description

    create python wrapper for FastFEC

    Jira Ticket

    https://arcpublishing.atlassian.net/browse/NE-1284

    Test Steps

    • Checkout branch and set up per README as necessary
    • Go here https://s3.console.aws.amazon.com/s3/buckets/elex-fec-test?region=us-east-1&prefix=test-architecture/test-filings/&showversions=false, find a filing that is more than 0B of data AND for which there isn't a corresponding output folder here https://s3.console.aws.amazon.com/s3/buckets/elex-fec-test?region=us-east-1&prefix=test-architecture/test-fastfec-output/&showversions=false and then use that filing number in the command below, which you should run from the root of the repo:
    python /python/fastfec.py -f "[FILING NUMBER GOES HERE]" -i "s3://elex-fec-test/test-architecture/test-filings" -o "s3://elex-fec-test/test-architecture/test-fastfec-output"
    

    After running the command, make sure you get something like this in the console:

    ➜  fastFEC git:(python-ctypes) ✗ python /Users/foremanH/Projects/fastFEC/python/fastfec.py -f "1375137" -i "s3://elex-fec-test/test-architecture/test-filings" -o "s3://elex-fec-test/test-architecture/test-fastfec-output"
    Filing ID is 1375137
    Input file is s3://elex-fec-test/test-architecture/test-filings/1375137
    Output file is s3://elex-fec-test/test-architecture/test-fastfec-output/1375137
    Parsing (py)
    Parsed; status 1
    1.2386590242385864e-07
    4.439614713191986e-06
    7.579103112220764e-06
    

    And make sure the output for that filing appears here: https://s3.console.aws.amazon.com/s3/buckets/elex-fec-test?region=us-east-1&prefix=test-architecture/test-fastfec-output/&showversions=false

    :hand: ready for review multiple :eyes: 
    opened by hs4man21 2
  • some normal/to be expected errors crash the c process; it would be better if they bubbled up to python

    When running fastfec from Python, some errors printed to stderr, such as fprintf(stderr, "Unknown type (%c) in %s\n", type, ctx->formType);, will exit 1 and cause the program to crash. For errors in fec.c, it would be a better user experience if those errors bubbled up to Python, where they could be caught and turned into a more informative error message.

    opened by avitalb 1
  • BUG: (maybe?) Missing trailing commas from output

    Not sure if this is a bug or not. If I run fastfec 878160 and I look at the resulting output/878160/SA11D.csv, then I see this:

    form_type,filer_committee_id_number,transaction_id,back_reference_tran_id_number,back_reference_sched_name,entity_type,contributor_organization_name,contributor_last_name,contributor_first_name,contributor_middle_name,contributor_prefix,contributor_suffix,contributor_street_1,contributor_street_2,contributor_city,contributor_state,contributor_zip_code,election_code,election_other_description,contribution_date,contribution_amount,contribution_aggregate,contribution_purpose_descrip,contributor_employer,contributor_occupation,donor_committee_fec_id,donor_committee_name,donor_candidate_fec_id,donor_candidate_last_name,donor_candidate_first_name,donor_candidate_middle_name,donor_candidate_prefix,donor_candidate_suffix,donor_candidate_office,donor_candidate_state,donor_candidate_district,conduit_name,conduit_street1,conduit_street2,conduit_city,conduit_state,conduit_zip_code,memo_code,memo_text_description,reference_code
    SA11D,C00477828,C7168136,,,CAN,,Clarke,Hansen,,,,2900 E Jefferson Ave,Apt C4,Detroit,MI,482074242,P2012,,2013-06-30,565.73,565.73,,,,,,H0MI13398,Clarke,Hansen,,,,H,MI,13,,,,,,,,"* In-Kind: In-kind, web hosting and phone services, to be reimbursed"
    

    It looks to me like this is missing the required trailing comma that separates the memo_text_description and (the missing) reference_code value. If I try to load this with a pyarrow CSV reader with the given 45 column names, it gets mad because it only sees 44 values in the row. You can replicate this with pd.read_csv(path, engine="pyarrow"). Other CSV parsers, such as vanilla pandas (pd.read_csv(path)) and vaex, are more forgiving and just fill in NA for the missing reference_code values, so perhaps that is why this hasn't been caught before.
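
    A minimal reproduction sketch of the behavior described above, assuming pandas and pyarrow are installed and the file exists at the path shown:

    import pandas as pd

    path = "output/878160/SA11D.csv"

    # The pyarrow engine requires every row to have as many values as the
    # header, so the short row described above makes it raise an error
    try:
        pd.read_csv(path, engine="pyarrow")
    except Exception as err:
        print("pyarrow engine rejected the file:", err)

    # The default engine is more forgiving and fills the missing trailing
    # reference_code field with NA
    df = pd.read_csv(path)
    print(df["reference_code"].isna().all())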

    If I look at the resulting output/878160/SB17.csv, it's a similar story: there is one less trailing comma than there should be to separate the missing last value.

    However, if I look at output/878160/F3S.csv, then this looks correct. I'd guess this is because the last value in that row is non-missing:

    form_type,filer_committee_id_number,date_general_election,date_day_after_general_election,a_total_contributions_no_loans,b_total_contribution_refunds,c_net_contributions,a_total_operating_expenditures,b_total_offsets_to_operating_expenditures,c_net_operating_expenditures,a_i_individuals_itemized,a_ii_individuals_unitemized,a_iii_individuals_total,b_political_party_committees,c_all_other_political_committees_pacs,d_the_candidate,e_total_contributions,transfers_from_other_auth_committees,a_loans_made_or_guarn_by_the_candidate,b_all_other_loans,c_total_loans,offsets_to_operating_expenditures,other_receipts,total_receipts,operating_expenditures,transfers_to_other_auth_committees,a_loan_repayment_by_candidate,b_loan_repayments_all_other_loans,c_total_loan_repayments,a_refund_individuals_other_than_pol_cmtes,b_refund_political_party_committees,c_refund_other_political_committees,d_total_contributions_refunds,other_disbursements,total_disbursements
    F3S,C00477828,2012-11-06,2012-11-07,3120.73,0.00,3120.73,2153.17,3340.65,-1187.48,1500.00,55.00,1555.00,0.00,1000.00,565.73,3120.73,0.00,0.00,0.00,0.00,3340.65,0.00,6461.38,2153.17,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,2153.17
    
    opened by NickCrews 1
  • Only parse Schedule A itemizations

    Hi! Thanks for this great utility.

    I only care about the Schedule A itemizations. For some multi-gigabyte .fec files, the non-Schedule A entries can take up more than half of the file and really slow down parsing.

    Can we add some options to only parse particular itemizations?

    In the meantime, I do this; do you see any problems with it? For example, are Schedule A itemizations always going to come before other schedules?

    # filter_fec.sh
    
    # We only want the individual contributions from an FEC file. We don't want
    # the other itemizations, they can be gigabytes and slow parsings
    
    # From the FEC file format documentation:
    
    # The first record of every electronic file that is submitted to the FEC must be an
    # HDR record that precedes the main body of the ASCII CSV (comma separated values) data.
    # The second record will be a "cover" record for the particular filing, (for example,
    # a F3 or and F3X record for a FEC-3 or FEC-3X electronic report). An unlimited number
    # of Schedule records (examples: SA, SB, SC/ ...) can follow the first two records of
    # an FEC electronic report file. (Electronic files are usually assigned the file
    # suffix ".fec".)
    
    # So as soon as we see a line starting with "SB", "SC", or "SD", we stop.
    # From https://stackoverflow.com/a/8940829/5156887
    awk '{if(/^SB|^SC|^SD/)exit;else print}'
    

    and use it as curl https://docquery.fec.gov/dcdev/posted/13360.fec | filter_fec.sh | fastfec 13360

    enhancement 
    opened by NickCrews 4
  • Incorrect number of trailing commas when last field(s) are empty

    FastFEC export seems to be missing a trailing comma in lines that have one or more empty items at the end of a row.

    Using the Homebrew version of fastfec on an M1 MacBook Pro running macOS Monterey 12.4.

    For example, you can reproduce this by running fastfec 876050 fastfec_output/ and checking header.csv (there should be an additional trailing comma after report_number 002), SB28A.csv (42 fields in line items vs. 43 in header), or SB23.csv (43 fields in line items vs. 44 in header).

    header.csv:

    record_type,ef_type,fec_version,soft_name,soft_ver,report_id,report_number,comment
    HDR,FEC,8.0,Microsoft Navision 3.60 - AVF Consulting,1.00,FEC-840327,002
    
    enhancement 
    opened by afischer 7
  • Building from source no longer links with Homebrew PCRE

    Building from source with brew install --build-from-source fastfec no longer links against Homebrew's PCRE; it links against the system-provided one instead.

    This seems to be due to a change in Zig 0.9.0 (Homebrew/homebrew-core@72b36e94fb1495399e518bee15000bb4c9daf64e).

    ❯ brew install --quiet --build-from-source fastfec
    ==> zig build -Dvendored-pcre=false
    🍺  /usr/local/Cellar/fastfec/0.0.4: 6 files, 982.8KB, built in 11 seconds
    ==> Running `brew cleanup fastfec`...
    Disable this behaviour by setting HOMEBREW_NO_INSTALL_CLEANUP.
    Hide these hints with HOMEBREW_NO_ENV_HINTS (see `man brew`).
    ❯ brew linkage fastfec
    System libraries:
      /usr/lib/libSystem.B.dylib
      /usr/lib/libcurl.4.dylib
      /usr/lib/libpcre.0.dylib
    

    I've poked at this for a bit, but I don't use Zig so I'm unsure how to get this to ignore the system libpcre. Passing --search-prefix doesn't work. I'd appreciate it if you could take a look. Thanks!

    opened by carlocab 8
Releases: 0.1.9
Owner: The Washington Post