Boost.Multiprecision

Overview

Boost Multiprecision Library

ANNOUNCEMENT: Support for C++03 has been removed from this library. Any attempt to build with a non-C++11-conforming compiler is doomed to failure.


The Multiprecision Library provides integer, rational, floating-point, complex and interval number types in C++ that have more range and precision than C++'s ordinary built-in types. The big number types in Multiprecision can be used with a wide selection of basic mathematical operations, elementary transcendental functions as well as the functions in Boost.Math. The Multiprecision types can also interoperate with the built-in types in C++ using clearly defined conversion rules. This allows Boost.Multiprecision to be used for all kinds of mathematical calculations involving integer, rational and floating-point types requiring extended range and precision.

Multiprecision consists of a generic interface to the mathematics of large numbers, as well as a selection of big number back ends with support for integer, rational and floating-point types. Boost.Multiprecision provides a selection of back ends off-the-rack, including interfaces to GMP, MPFR, MPIR and TomMath, as well as its own collection of Boost-licensed, header-only back ends for integers, rationals, floats and complex numbers. In addition, user-defined back ends can be created and used with the interface of Multiprecision, provided the class implementation adheres to the necessary concepts.

Depending upon the number type, precision may be arbitrarily large (limited only by available memory), fixed at compile time (for example 50 or 100 decimal digits), or a variable controlled at run-time by member functions. The types are expression-template-enabled for better performance than naive user-defined types.

The full documentation is available on boost.org.

Support, bugs and feature requests

Bugs and feature requests can be reported through the GitHub issue tracker (see open issues and closed issues).

You can submit your changes through a pull request.

There is no mailing-list specific to Boost Multiprecision, although you can use the general-purpose Boost mailing-list using the tag [multiprecision].

Development

Clone the whole boost project, which includes the individual Boost projects as submodules (see boost+git doc):

git clone https://github.com/boostorg/boost
cd boost
git submodule update --init

The Boost Multiprecision Library is located in libs/multiprecision/.

Running tests

First, build the B2 engine by running bootstrap.sh in the root of the boost directory. This generates the B2 configuration in project-config.jam.

./bootstrap.sh

Now make sure you are in libs/multiprecision/test. You can either run all the tests listed in Jamfile.v2 or run a single test:

../../../b2                        <- run all tests
../../../b2 test_complex           <- single test

Comments
  • karatsuba multiplication in cpp_int


    Until now, cpp_int has used O(n²) schoolbook multiplication, which is highly optimized for small-to-moderate operand sizes. Note that there are other multiplication algorithms such as Toom-3, Toom-4 and FFT-based methods, but for simplicity Karatsuba was chosen.

    The Karatsuba cutoff is set to 100 limbs, determined experimentally. The limit could be as low as 10 limbs (as in GMP), but 100 limbs was chosen to avoid deep recursion.

    There is a ~10% reduction in runtime for numbers beyond the cutoff value. I suspect this can be improved further, because Python's Karatsuba is around three times faster. Having tinkered with it, I suspect the cpp_int back end itself is slow, since the algorithmic steps used are broadly similar to Python's.
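    For readers unfamiliar with the trick: Karatsuba trades four half-size multiplications for three. A purely illustrative sketch on fixed-width words follows (cpp_int actually operates recursively on vectors of limbs, not on 32-bit integers):

    ```cpp
    #include <cassert>
    #include <cstdint>
    #include <iostream>

    // Split x = x1*B + x0 and y = y1*B + y0 with B = 2^16. Then
    //   x*y = x1*y1*B^2 + ((x1 + x0)*(y1 + y0) - x1*y1 - x0*y0)*B + x0*y0,
    // which needs three multiplications instead of four.
    std::uint64_t karatsuba32(std::uint32_t x, std::uint32_t y)
    {
        std::uint64_t x1 = x >> 16, x0 = x & 0xFFFF;
        std::uint64_t y1 = y >> 16, y0 = y & 0xFFFF;
        std::uint64_t hi  = x1 * y1;
        std::uint64_t lo  = x0 * y0;
        std::uint64_t mid = (x1 + x0) * (y1 + y0) - hi - lo; // the saved multiply
        return (hi << 32) + (mid << 16) + lo;
    }

    int main()
    {
        assert(karatsuba32(123456789u, 987654321u) == 123456789ULL * 987654321ULL);
        std::cout << "ok\n";
    }
    ```

    Applied recursively on halves, the recurrence T(n) = 3T(n/2) + O(n) yields the well-known O(n^log2(3)) ≈ O(n^1.585) bound.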

    Compiling & Testing:

    1. Manually tested using randomly generated numbers with up to a million digits.
    2. Unit tests passed.
    3. No warnings were seen when compiling with -Wall using clang++ (v6.0.0) and g++ (v7.3.2).
    opened by madhur4127 87
  • complex number type


    Is there any work or reference for a complex number type based on gmp_float or mpfr_float? I believe I cannot just use std::complex<T> from e.g. g++ or clang.
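    For anyone landing on this issue now: Multiprecision has since gained Boost-licensed complex types (cpp_complex) as well as an MPC-backed mpc_complex for use alongside mpfr_float. A minimal sketch, assuming a Boost recent enough to ship cpp_complex:

    ```cpp
    #include <boost/multiprecision/cpp_complex.hpp>
    #include <iostream>

    int main()
    {
        using boost::multiprecision::cpp_complex_50; // 50 decimal digits per component
        cpp_complex_50 z(3, 4);                      // 3 + 4i
        std::cout << abs(z) << '\n';                 // |3 + 4i| = 5
        std::cout << exp(z) << '\n';
    }
    ```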

    opened by jtravs 68
  • Complete Standalone


    Consolidates all of the open standalone-based PRs for testing. The big-ticket item here, not reflected in the other PRs, is the Boost.LexicalCast workarounds. Once all the current tests pass I will add a standalone test suite to the Jamfile.

    opened by mborland 59
  • GitHub Actions and YAML script


    This draft pull request starts off with a YAML script which performs an echo of hello world. This draft PR is intended to identify interest in putting the multiprecision CI on GitHub Actions in YAML.

    I can hopefully handle the YAML code myself, based on the work in Math 476.

    Is this a desired feature? Anything that needs to be specially accounted for?

    opened by ckormanyos 54
  • Incorrect results for a couple of real cpp_bin_float functions:ℝ→ℝ


    As requested, I am opening another issue for problems with the following functions, but only for cpp_bin_float real-valued functions:

    • sin, cos, tan with 7e7 ULP error
    • acos with 10000 ULP error
    • erfc with 20000 ULP error
    • lgamma with 70000 ULP error
    • tgamma with 10000 ULP error
    • fma with 2e5 ULP error
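    For context, the ULP figures above are what boost::math::float_distance reports: the number of representable values lying between two floating-point numbers. A minimal double-precision illustration (not part of the report itself):

    ```cpp
    #include <boost/math/special_functions/next.hpp>
    #include <cmath>
    #include <iostream>

    int main()
    {
        double a = 1.0;
        double b = std::nextafter(a, 2.0); // exactly one ULP above 1.0
        // float_distance counts representable doubles between a and b.
        std::cout << boost::math::float_distance(a, b) << '\n'; // prints 1
    }
    ```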

    I prepared std::vector<std::string> suspects = {……} so that only these arguments are tested, which is why the test program is a little long; sorry about that. I can push it somewhere via git if you want.

    @jzmaddock said: "Yes please. Note however that most of those functions have regions that are ill-conditioned. I'm surprised about erf though."

    So I do a domain check in the code below, but only via the mathematical definition of each function, e.g. acos on the (-1,1) range. If there are wider ill-conditioned cpp_bin_float ranges, please tell me what they are :)

    The last couple of tests are run on MPFR types, to show that they have no problems even when I reduce the tolerance to 4 ULP.

    This test was done with g++-10 and Boost 1.71.

    #include <boost/core/demangle.hpp>
    #include <boost/multiprecision/float128.hpp>
    #include <boost/multiprecision/mpfr.hpp>
    #include <boost/multiprecision/cpp_bin_float.hpp>
    #include <boost/math/special_functions/next.hpp> // boost::math::float_distance
    #include <cstdlib>  // exit
    #include <iomanip>  // std::setw
    #include <iostream>
    #include <string>
    #include <vector>
    
    // these suspects that I found here were generated with 207 (semi-random) bits, cpp_bin_float<62> that is.
    std::vector<std::string> suspects =
    {
    "-99.99051044261190803828277871209611083521312159772951785429282742447111501770387897764914459929030895961457916310906347977088468929912075917569999182149214394708068576189674558918341062963008880615234375",
    "-99.23390959446626271324034474274442251884471237189555648094657059216860706607230983323106368424801631883935026863224574760130807307771805691417887408046384289874060347091955236464855261147022247314453125",
    "-98.9601713316877088337177445119470740578579166310219652750338812970411599649857990095222187693902055061636695929602719448563570542635141882380804268308514937244231568502783602525596506893634796142578125",
    "-98.273813022182876335674961215978389185880158614907235514650235867554610737475054142604330810869605796963554540686259410168473190820224120735175598211787517991498697167429554610862396657466888427734375",
    "-98.1129673249056279651265099215966864727359600855606270412253136392329234530313794427136637055560221800456208046576431258117530821299909173847892873020493030816573110097777998817036859691143035888671875",
    "-97.389369764346678086481891064814826474831092962567795459730881505082734698449439138689254963656701672933811803557867845743694253575567006314050085743567864815344037321409587093512527644634246826171875",
    "-95.81857221869257960935742769742992913057780233194018961846245753236156690762799658755131314850200998334040944803550122736063078540309929170741814952093887546176985414714266653390950523316860198974609375",
    "-92.41884548834482351580942729503268036653297534861723287901036113912045650063200397203449032796157971583666591400004045975989150366281884624802662149806202205909629043834030426296521909534931182861328125",
    "-91.1061877200439407432384615462896541336997550990506594144915374189322511915338633133119583121624515660791329437500227666881849937355906834387488978756948848299924981208874896765337325632572174072265625",
    "-89.88901025192110738330790618249721807309639606773637313381141814624705166266602211251904155822175375028221083070595793995144725917986725146484809181158549739264727873599980512153706513345241546630859375",
    "-89.5353901456268146780894922808504683043625998859211699302844699006615833786787620155903405554436561863769757162466938403503598928733212929892185960036262953160544897368566807926981709897518157958984375",
    "-82.1735520704620443988984953466372698685913582823295765947054351112482828444196668754236587930583613773862145085498288866696315110855282598447092727081999454104808966459216890143579803407192230224609375",
    "-77.4823796790540815922431050434539860561416780588691652960638531675828303613178820006138086077402642358169796090547340159455478668941626522124178433128328640096781076973542212726897560060024261474609375",
    "-76.99175113943655525504236511759596519945485974092039507047416560034531419571148155966846401745208762372457862829213996244244865281367459385377136496207263084289203203258722396640223450958728790283203125",
    "-74.61255333976712083091879675534822922304761149616407008730070078988643083933215168414352880237541830345893825991799418214718663934337361280342675580186924961102572229076912435630219988524913787841796875",
    "-72.3148104470242378111131793549028166185391566893773764561918781174851603967015597324961371802863065721050633633189851030237069974043738081738156836369644443643193987281136969613726250827312469482421875",
    "-65.6435462795664662509771573366666549161813140877948752939617569007726832786509545361560456448812909391572172265680384625360761114969384119631392422215792126281679041976957478254917077720165252685546875",
    "-58.1194637727400116282382861620192645168456286573655292854115447804934531033530825341470365817522667175865233641660823248322056381392871614104021682035650111738424306029315857813344337046146392822265625",
    "-46.00377253908153163234835861938988189011724937250550191704727440078450345533379528244871632668899992466976676697133833647564925859881415224025606188892635798714896011585295809709350578486919403076171875",
    "-44.4018537144116355553957796775986930181198953013070477698158368990850213417187435014923789688670795285241686594695650800353491215038601488478780249088234035183091197307447828279691748321056365966796875",
    "-42.178867533615277746638335956312648788666574058754668089030698639143496696692892419393689512323455361357823651939047406700172638876986456164277459200305282980007944015898857514912378974258899688720703125",
    "-37.463814793675275461340264701880657547498904997758234907382436644440387185383888069692093701283418496171812743486767600476345326829809773758398740227733327893774352423950091406368301250040531158447265625",
    "-36.687894664533810937311533733623204481210731875314969097117453723702620271139130018679051998429836998807714049077719477378387717033200220973951746780027503998765359138900521429604850709438323974609375",
    "-28.64389272279336971899833313469911474439779953887602880275372619043693669817718560048422360489311115432830897163631478540521766172994972680472028686375514700446394400490390808045049197971820831298828125",
    "-28.0049139189336880614936157837694875795078481812590699169249409778955821694343725412388088298559665974798192998478183283974952859016032716608150750782873554208841915258432209157035686075687408447265625",
    "-26.27180763553655086974675201041863634842370354443985880608654015680918608411304588751537996354581477114994399729425244435651959022879036487160686066081892033421442411711410613861517049372196197509765625",
    "-25.19589620543179814909094172303432814133713508768451885201830602238490586082017718958403502211165076000022756274322670476508649605244189795114035788739250985098123980510109731767443008720874786376953125",
    "-25.13274460509201434480266058335104919728710461006166448957435054060930768882685992519312501100646264865618197931948935269332644328253058412301382409241950447072178054241931022261269390583038330078125",
    "-24.3953393891403751990434066978288339384375573172866562212210641433035459991358198899245580781841692398778762926576347126875939333921008840292697330648373870679197772393109744371031410992145538330078125",
    "-21.09522851245033636306402211225949554094174191622852088230859853405429426073319491152815162110789708135363180731077163134358817681937892456607698197182937716368487934825992624610080383718013763427734375",
    "-18.99938574552839199324805802310536938840417940528040264994230879127715234997780930686638661213255462498806211545865983339025364050801922290465869751649067992947152172522606861093663610517978668212890625",
    "-16.1054921663428825424335275871979183815357076427275834397442882246577003003947573289675799854994087290464911637642925175788912463196112911507347146483208575619937297229711248292005620896816253662109375",
    "-15.73606255635348930406430675898903077327778525123481926630706457480564768230418298855932815114879596305648746566826709238793635624816065807957821140335105770187602847176577824939158745110034942626953125",
    "-13.844342695544957837434025971838094757130400709693375931013384200015240921794309640808969294106787118008983305677558784686668901577262469676705839997599836797663452880868817373993806540966033935546875",
    "-12.0108256663021959292221821504924285077679894795415928058870037494381092873899810180946465666595655424900842675906679066861586687873299236208804084527982847696754575839150902538676746189594268798828125",
    "-10.00031013274038195290023947364801980860573336384093125993165793242441917674719101901475578381172139328538478579821165445665430869130966779506355287008010477201210008946219431891222484409809112548828125",
    "-8.00002606729166538824643828246571667601957528380422187224925264792051768683947293015968599171373933233014603623537359584513425170094804665935239384252086272957156520835297897065174765884876251220703125",
    "-6.00136682225492754251204662283943131239849406671099109862729681998415655896338341898554616420963838467993260300566362082682688543052932950197434420055032149531569485301218946915469132363796234130859375",
    "-4.03935176182968248850425274573197268040942586626526838633754467790994461640466129194118835598340118294709330113984308095784822410899225766569451949983317611166111493758990036440081894397735595703125",
    "-3.0240920288930329988980374744242432174547781269616617810591667842849794114087483433483690367847145810986439948789823331633666322045350227369058242087843354551669750041043016608455218374729156494140625",
    "-2.45703434083386998275925378730707853886429974961703191606644158840534757638881189433235828818780863495008551032003814937228673471251368171994648347271862826902977250398407704778946936130523681640625",
    "-2.45702409322290866567069073111226123857093891190759763630180185440854570661779988634018746783189304594656392773180501945370983941166634220688688556283277257129456228046393562181037850677967071533203125",
    "-1.68169675859976659383108338072526819755655584974521789613087401183387429879714920504893731220566066433092139915659379637884440972895014577991321577305253685830077614582478418014943599700927734375",
    "-0.8282564029863776126578656468289904548301926008360850725607385349535211657160850878019333716199903358537998263735956441731642451949396509520995591712819820502493317615488876981544308364391326904296875",
    "-0.32136064705709597454683625174723961593146173639012134904467758003590583919808930814050545965484123350604583749902317747452315051081556469722141906597624489046183260398947822977788746356964111328125",
    "0.46721644973365601360133602238548895907862098406622072706034309566800215546218584935514980265179575405374460581236738760590882160438856238290914946721340585134019107726999209262430667877197265625",
    "0.999846935115256985099472087619410059034039136463602141692454832980536542150610828088069585167055187023555569404437106928970332167071435310425941385031029895284759856366463282029144465923309326171875",
    "0.99988767201274738245735086136368521485585806605382659983192152902696406932008939599756249510654260590532728278579290850296943869294248260415662522058393880540656806221733177153510041534900665283203125",
    "0.999890658691280253459563000177154110549643009678046768723093809280489994853890555289525263351943778362525321014124545828049510733153266589198126845022007746534298266993801007629372179508209228515625",
    "0.999890866630185914587046276951054077223882235976803551094669700127799173507869538140351795204645068157857366025341530654308963466229817646946295966864834393016125968500773524283431470394134521484375",
    "0.9998935834632539389432532416679314303104944098437646169936959501208480776770549549396436304932167485457590024069924603839254637269127683893329896050425928226423888389717831159941852092742919921875",
    "0.9998937078809612793196456117613076238607542194482800109271476030303818189373028927817720019485705963490501809249485198842139120807378682861591436713538045611728011241581270951428450644016265869140625",
    "0.9998938266238835248172237531591128278247832584171582274981000092413549795680158553547781577679123233133582668576009454540078156352860532955582650147430441578411119962765951640903949737548828125",
    "1.31807073864746745220380666020953425708782839063532147724215402784222028983184986695417973418596170725423236711281442200052112002936531018527445826575889803824494228567942855079309083521366119384765625",
    "2.05118616373376822351059060334033552537053819234188264727312593977696266313461209366740347658511550317741748782578284208029622842257369906721179886195782958121548722463245439939782954752445220947265625",
    "14.81907944221615795927300817825273323995258820785195324332346982881369733699449486066221176276821157545838038881434799007345102544573284121878574555603790032496692996577536405311548151075839996337890625",
    "28.86382646919547451704802146558570204667176129200120383736328754193446432280602654739983984395786925252355041424685510746821564041731449426350230758472895022277715515457430228707380592823028564453125",
    "37.714117388111989956041102504146323181250085280470547904334284311710392522527934075592357284175540227603002107141232187606083931450205311188420619775073138932840188797257496844395063817501068115234375",
    "37.755692600976890509060126539791454024057262140474811481640356032121125141911161464415653798084260615297277298486172253135984357874602234962374017127404099967470652021717114621424116194248199462890625",
    "43.1803311978604564169383897713532689475466446393217746344408051982016068191582531106463642440364694513352131182034723647935400394961893836912807402147128875424808569505330524407327175140380859375",
    "44.8136269782289928625179989261586968859704136307989349216436641020178036768807849394357169505890377033019777761450383475357716008285913212762440013978080310193437274524086433302727527916431427001953125",
    "47.7466833592842141724124786289501525735591596956904479800788632710915981013637380490409808488674608936498347095426906680888360693051845194282363447716043526277730768558882346042082644999027252197265625",
    "48.5373479167760837984751589435425712042885137507272816756492359179219111789370782710753352832578889227710125612304041008705021198073669031963387872820418281736599974696133585894131101667881011962890625",
    "50.2654237499756733391222254122805320113428095281996770142576041928720328847038073008540751164256335141210504687992703586601047641219781397261165193341597270449729040198150187279679812490940093994140625",
    "66.475153220534004851292526233200206437375861594380699566663661143294973948983537723870961684073064116387026367864722840661735428670419848958023413614398867727169062380454533922602422535419464111328125",
    "81.6813903871233164206231157958374137736575469952408038962124876646551135758586852641465094333999801948744979304163225462391891136623683615941281815737538089523643580758260895890998654067516326904296875",
    "90.5055529204410608252721620806883201091798030906746366296219309300593773762484102001340104622297142486139344072193634452555957932797466287389178621999435495602305545848054180169128812849521636962890625",
    "91.1061928073599934918157143693445377704705143712968984369886992676466700842560738586081200771883133480499146932659993234293192141302496948118607650536006329505291090331553505166084505617618560791015625",
    "91.73096013967822637902416462228509184659994913333375698315287016094450586596998400386341289481248526500038762543132207961366660355580285937087410175116695986753667657609412344754673540592193603515625",
    "96.006219145174968389135648559819121058721694089272516452196945753282582422246476865909482984419546829495310684769521880110885806790842600551509460451587668161459176996430642248014919459819793701171875",
    "97.389368411290298402830640798173079547959579858387300352046564554983962377373474417232814245397912161320988898124674543550781585423318179803285572760545009906273106192742261555395089089870452880859375",
    "98.0989550614355490188345206791598701919776512387165808297572266422581898406437029731615350650185511553954839433926255993281461118980368400993687322616830094970143127941497596111730672419071197509765625",
    "98.96015456301280320862779505809196791836643290291948125688893156142059072001012434950259767595461620194995593174473903568075803746371943076148585708991150589920737790095017771818675100803375244140625",
    "99.215809318714254781755417250857229813252158084760395373122363660557432217805933289849573751176675695362239479172525090760908689972963504911406316481201661289408832988812037001480348408222198486328125",
    "99.526174807950546775689149433642956463205915643028662653550636461692965655300301728228212687122980789586022357026446091170053596694888030662864493749047536713843442601756805743207223713397979736328125"
    };
    
    std::vector<std::string> fma_suspects =
    {"-65.6435462795664662509771573366666549161813140877948752939617569007726832786509545361560456448812909391572172265680384625360761114969384119631392422215792126281679041976957478254917077720165252685546875",
    "-21.09522851245033636306402211225949554094174191622852088230859853405429426073319491152815162110789708135363180731077163134358817681937892456607698197182937716368487934825992624610080383718013763427734375",
    "1.31807073864746745220380666020953425708782839063532147724215402784222028983184986695417973418596170725423236711281442200052112002936531018527445826575889803824494228567942855079309083521366119384765625",
    "37.755692600976890509060126539791454024057262140474811481640356032121125141911161464415653798084260615297277298486172253135984357874602234962374017127404099967470652021717114621424116194248199462890625",
    "0.46721644973365601360133602238548895907862098406622072706034309566800215546218584935514980265179575405374460581236738760590882160438856238290914946721340585134019107726999209262430667877197265625",
    "43.1803311978604564169383897713532689475466446393217746344408051982016068191582531106463642440364694513352131182034723647935400394961893836912807402147128875424808569505330524407327175140380859375",
    "-0.8282564029863776126578656468289904548301926008360850725607385349535211657160850878019333716199903358537998263735956441731642451949396509520995591712819820502493317615488876981544308364391326904296875",
    "-82.1735520704620443988984953466372698685913582823295765947054351112482828444196668754236587930583613773862145085498288866696315110855282598447092727081999454104808966459216890143579803407192230224609375",
    "-3.0240920288930329988980374744242432174547781269616617810591667842849794114087483433483690367847145810986439948789823331633666322045350227369058242087843354551669750041043016608455218374729156494140625",
    "44.8136269782289928625179989261586968859704136307989349216436641020178036768807849394357169505890377033019777761450383475357716008285913212762440013978080310193437274524086433302727527916431427001953125",
    "37.714117388111989956041102504146323181250085280470547904334284311710392522527934075592357284175540227603002107141232187606083931450205311188420619775073138932840188797257496844395063817501068115234375",
    "-77.4823796790540815922431050434539860561416780588691652960638531675828303613178820006138086077402642358169796090547340159455478668941626522124178433128328640096781076973542212726897560060024261474609375",
    "-1.68169675859976659383108338072526819755655584974521789613087401183387429879714920504893731220566066433092139915659379637884440972895014577991321577305253685830077614582478418014943599700927734375",
    "48.5373479167760837984751589435425712042885137507272816756492359179219111789370782710753352832578889227710125612304041008705021198073669031963387872820418281736599974696133585894131101667881011962890625"};
    
    
    template <typename Type, typename Reference, typename Tol> void test(Tol tolerance)
    {
    	std::cout << "\n========================\nTesting:\n   " << boost::core::demangle(typeid(Type).name()) << " against:\n   " << boost::core::demangle(typeid(Reference).name()) << "\n";
    	for (const auto& str : suspects) {
    		//std::cout << str << "\n";
    		Type      arg_1(str);
    		Reference arg_ref_1(str);
    		if ((arg_1 != static_cast<Type>(arg_ref_1)) or (static_cast<Reference>(arg_1) != arg_ref_1)) {
    			std::cout << "These are different numbers, cannot work with that. It is because 'suspects' were prepared with 207 semi-random bits, 62 decimal places that is. Some extra cutting and "
    			             "casting could be done here to fix this.\n";
    			exit(1);
    		}
    		/* skip extra cutting and casting, I didn't test that.
    		if ((arg_1 != static_cast<Type>(arg_ref_1)) or (static_cast<Reference>(arg_1) != arg_ref_1)) {
    			arg_1     = static_cast<Type>(arg_ref_1);
    			arg_ref_1 = static_cast<Reference>(arg_1);
    		}*/
    
    #define TEST_FUNCTION(func)                                                                                                                                  \
    	{                                                                                                                                                    \
    		Type      val_1     = boost::multiprecision::func(arg_1);                                                                                    \
    		Reference val_ref_1 = boost::multiprecision::func(arg_ref_1);                                                                                \
    		if (boost::multiprecision::isfinite(val_1) and (boost::multiprecision::isfinite(val_ref_1))) {                                               \
    			auto ulp = boost::math::float_distance(static_cast<Type>(val_ref_1), val_1);                                                         \
    			if ((ulp > tolerance) or (ulp < -tolerance)) {                                                                                       \
    				std::cout << std::setw(7) << #func << " ULP dist = " << std::setw(15) << ulp << "   argument = " << arg_1 /* str */ << "\n"; \
    			}                                                                                                                                    \
    		} else if ((boost::multiprecision::isfinite(val_1) and (not(boost::multiprecision::isfinite(val_ref_1))))) {                                 \
    			std::cout << std::setw(7) << #func << "  : lower precision is a finite value, higher isn't (only the opposite makes sense)"          \
    			          << "   argument = " << arg_1 /* str */ << "\n";                                                                            \
    		}                                                                                                                                            \
    	}
    
    		TEST_FUNCTION(sin)
    		TEST_FUNCTION(cos)
    		TEST_FUNCTION(tan)
    		// acos domain check (-1,1)
    		if ((arg_1 > static_cast<Type>(-1)) and (arg_1 < static_cast<Type>(1))) {
    			TEST_FUNCTION(acos)
    		}
    		TEST_FUNCTION(erfc)
    		// gamma domain check skip negative integers
    		if (not((arg_1 < 0) and (static_cast<Type>(boost::multiprecision::round(arg_1)) == arg_1))) {
    			TEST_FUNCTION(tgamma)
    			if(boost::multiprecision::tgamma(arg_1) > 0 ) {
    				TEST_FUNCTION(lgamma)
    			}
    		}
    		// The macro body is copied out inline here, since fma(…,…,…) takes three arguments
    		for (const auto& str2 : fma_suspects) {
    			Type      arg_2(str2);
    			Reference arg_ref_2(str2);
    			for (const auto& str3 : fma_suspects) {
    				Type      arg_3(str3);
    				Reference arg_ref_3(str3);
    				{
    					Type      val_1     = boost::multiprecision::fma(arg_1, arg_2, arg_3);
    					Reference val_ref_1 = boost::multiprecision::fma(arg_ref_1, arg_ref_2, arg_ref_3);
    					if (boost::multiprecision::isfinite(val_1)) {
    						auto ulp = boost::math::float_distance(static_cast<Type>(val_ref_1), val_1);
    						if ((ulp > tolerance) or (ulp < -tolerance)) {
    							std::cout << std::setw(7) << "fma"
    							          << " ULP dist = " << std::setw(15) << ulp << "   argument = " << arg_1 << " , " << arg_2 << " , " << arg_3 /* str, str2, str3 */ << "\n";
    						}
    					}
    				}
    			}
    		}
    	}
    	std::cout << "DONE\n";
    #undef TEST_FUNCTION
    }
    
    int main()
    {
    	test<boost::multiprecision::number<boost::multiprecision::cpp_bin_float<62>, boost::multiprecision::et_off>, boost::multiprecision::mpfr_float_500>(10000);
    	test<boost::multiprecision::number<boost::multiprecision::cpp_bin_float<62>, boost::multiprecision::et_off>,
    	     boost::multiprecision::number<boost::multiprecision::cpp_bin_float<124>, boost::multiprecision::et_off>>(10000);
    	test<boost::multiprecision::number<boost::multiprecision::cpp_bin_float<62>>, boost::multiprecision::mpfr_float_500>(10000);
    
    	// MPFR works
    	test<boost::multiprecision::number<boost::multiprecision::mpfr_float_backend<62, boost::multiprecision::allocate_stack>, boost::multiprecision::et_off>, boost::multiprecision::mpfr_float_500>(4);
    	test<boost::multiprecision::mpfr_float_100, boost::multiprecision::mpfr_float_500>(4);
    
    	// we might test these later too; they won't work now, because the suspects data was prepared for 62 decimal places.
    	//test<boost::multiprecision::float128, boost::multiprecision::mpfr_float_50>(4);
    	//test<boost::multiprecision::mpfr_float_50, boost::multiprecision::mpfr_float_100>(4);
    }
    

    EDIT: Here's my output (with the lgamma domain check fixed):

    ========================
    Testing:
       boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<62u, (boost::multiprecision::backends::digit_base_type)10, void, int, 0, 0>, (boost::multiprecision::expression_template_option)0> against:
       boost::multiprecision::number<boost::multiprecision::backends::mpfr_float_backend<500u, (boost::multiprecision::mpfr_allocation_type)1>, (boost::multiprecision::expression_template_option)1>
        cos ULP dist =    -1.05074e+07   argument = -98.9602
        tan ULP dist =     1.01564e+07   argument = -98.9602
        sin ULP dist =    -1.67319e+07   argument = -97.3894
        tan ULP dist =     1.67319e+07   argument = -97.3894
        cos ULP dist =      1.6665e+07   argument = -95.8186
        tan ULP dist =     8.78195e+06   argument = -95.8186
        sin ULP dist =      4.3195e+07   argument = -91.1062
        tan ULP dist =     -4.3195e+07   argument = -91.1062
        cos ULP dist =    -1.82969e+07   argument = -89.5354
        tan ULP dist =    -3.58613e+07   argument = -89.5354
        cos ULP dist =     5.33625e+07   argument = -58.1195
        tan ULP dist =     5.97394e+07   argument = -58.1195
        fma ULP dist =           15802   argument = -28.6439 , 1.31807 , 37.7557
        sin ULP dist =     1.07071e+06   argument = -25.1327
        tan ULP dist =     1.07071e+06   argument = -25.1327
     tgamma ULP dist =          -10271   argument = -15.7361
        fma ULP dist =         -194768   argument = -0.321361 , -65.6435 , -21.0952
       acos ULP dist =          -10075   argument = 0.999891
     lgamma ULP dist =           13963   argument = 1.31807
        fma ULP dist =          -26108   argument = 14.8191 , -3.02409 , 44.8136
        sin ULP dist =         -527053   argument = 50.2654
        tan ULP dist =         -527053   argument = 50.2654
        sin ULP dist =    -1.74569e+06   argument = 81.6814
        tan ULP dist =    -1.74569e+06   argument = 81.6814
        sin ULP dist =    -5.39938e+06   argument = 91.1062
        tan ULP dist =     5.39938e+06   argument = 91.1062
       erfc ULP dist =           20408   argument = 91.731
       erfc ULP dist =           22029   argument = 96.0062
        sin ULP dist =     8.36595e+06   argument = 97.3894
        tan ULP dist =    -8.36595e+06   argument = 97.3894
       erfc ULP dist =           20748   argument = 98.099
        cos ULP dist =    -2.62684e+06   argument = 98.9602
        tan ULP dist =    -1.55465e+06   argument = 98.9602
       erfc ULP dist =           21844   argument = 99.2158
       erfc ULP dist =           22085   argument = 99.5262
    DONE
    
    ========================
    Testing:
       boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<62u, (boost::multiprecision::backends::digit_base_type)10, void, int, 0, 0>, (boost::multiprecision::expression_template_option)0> against:
       boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<124u, (boost::multiprecision::backends::digit_base_type)10, void, int, 0, 0>, (boost::multiprecision::expression_template_option)0>
        cos ULP dist =    -1.05074e+07   argument = -98.9602
        tan ULP dist =     1.01564e+07   argument = -98.9602
        sin ULP dist =    -1.67319e+07   argument = -97.3894
        tan ULP dist =     1.67319e+07   argument = -97.3894
        cos ULP dist =      1.6665e+07   argument = -95.8186
        tan ULP dist =     8.78195e+06   argument = -95.8186
        sin ULP dist =      4.3195e+07   argument = -91.1062
        tan ULP dist =     -4.3195e+07   argument = -91.1062
        cos ULP dist =    -1.82969e+07   argument = -89.5354
        tan ULP dist =    -3.58613e+07   argument = -89.5354
        cos ULP dist =     5.33625e+07   argument = -58.1195
        tan ULP dist =     5.97394e+07   argument = -58.1195
        fma ULP dist =           15802   argument = -28.6439 , 1.31807 , 37.7557
        sin ULP dist =     1.07071e+06   argument = -25.1327
        tan ULP dist =     1.07071e+06   argument = -25.1327
     tgamma ULP dist =          -10271   argument = -15.7361
        fma ULP dist =         -194768   argument = -0.321361 , -65.6435 , -21.0952
       acos ULP dist =          -10075   argument = 0.999891
     lgamma ULP dist =           13963   argument = 1.31807
        fma ULP dist =          -26108   argument = 14.8191 , -3.02409 , 44.8136
        sin ULP dist =         -527053   argument = 50.2654
        tan ULP dist =         -527053   argument = 50.2654
        sin ULP dist =    -1.74569e+06   argument = 81.6814
        tan ULP dist =    -1.74569e+06   argument = 81.6814
        sin ULP dist =    -5.39938e+06   argument = 91.1062
        tan ULP dist =     5.39938e+06   argument = 91.1062
       erfc ULP dist =           20408   argument = 91.731
       erfc ULP dist =           22029   argument = 96.0062
        sin ULP dist =     8.36595e+06   argument = 97.3894
        tan ULP dist =    -8.36595e+06   argument = 97.3894
       erfc ULP dist =           20748   argument = 98.099
        cos ULP dist =    -2.62684e+06   argument = 98.9602
        tan ULP dist =    -1.55465e+06   argument = 98.9602
       erfc ULP dist =           21844   argument = 99.2158
       erfc ULP dist =           22085   argument = 99.5262
    DONE
    
    ========================
    Testing:
       boost::multiprecision::number<boost::multiprecision::backends::cpp_bin_float<62u, (boost::multiprecision::backends::digit_base_type)10, void, int, 0, 0>, (boost::multiprecision::expression_template_option)0> against:
       boost::multiprecision::number<boost::multiprecision::backends::mpfr_float_backend<500u, (boost::multiprecision::mpfr_allocation_type)1>, (boost::multiprecision::expression_template_option)1>
        cos ULP dist =    -1.05074e+07   argument = -98.9602
        tan ULP dist =     1.01564e+07   argument = -98.9602
        sin ULP dist =    -1.67319e+07   argument = -97.3894
        tan ULP dist =     1.67319e+07   argument = -97.3894
        cos ULP dist =      1.6665e+07   argument = -95.8186
        tan ULP dist =     8.78195e+06   argument = -95.8186
        sin ULP dist =      4.3195e+07   argument = -91.1062
        tan ULP dist =     -4.3195e+07   argument = -91.1062
        cos ULP dist =    -1.82969e+07   argument = -89.5354
        tan ULP dist =    -3.58613e+07   argument = -89.5354
        cos ULP dist =     5.33625e+07   argument = -58.1195
        tan ULP dist =     5.97394e+07   argument = -58.1195
        fma ULP dist =           15802   argument = -28.6439 , 1.31807 , 37.7557
        sin ULP dist =     1.07071e+06   argument = -25.1327
        tan ULP dist =     1.07071e+06   argument = -25.1327
     tgamma ULP dist =          -10271   argument = -15.7361
        fma ULP dist =         -194768   argument = -0.321361 , -65.6435 , -21.0952
       acos ULP dist =          -10075   argument = 0.999891
     lgamma ULP dist =           13963   argument = 1.31807
        fma ULP dist =          -26108   argument = 14.8191 , -3.02409 , 44.8136
        sin ULP dist =         -527053   argument = 50.2654
        tan ULP dist =         -527053   argument = 50.2654
        sin ULP dist =    -1.74569e+06   argument = 81.6814
        tan ULP dist =    -1.74569e+06   argument = 81.6814
        sin ULP dist =    -5.39938e+06   argument = 91.1062
        tan ULP dist =     5.39938e+06   argument = 91.1062
       erfc ULP dist =           20408   argument = 91.731
       erfc ULP dist =           22029   argument = 96.0062
        sin ULP dist =     8.36595e+06   argument = 97.3894
        tan ULP dist =    -8.36595e+06   argument = 97.3894
       erfc ULP dist =           20748   argument = 98.099
        cos ULP dist =    -2.62684e+06   argument = 98.9602
        tan ULP dist =    -1.55465e+06   argument = 98.9602
       erfc ULP dist =           21844   argument = 99.2158
       erfc ULP dist =           22085   argument = 99.5262
    DONE
    
    ========================
    Testing:
       boost::multiprecision::number<boost::multiprecision::backends::mpfr_float_backend<62u, (boost::multiprecision::mpfr_allocation_type)0>, (boost::multiprecision::expression_template_option)0> against:
       boost::multiprecision::number<boost::multiprecision::backends::mpfr_float_backend<500u, (boost::multiprecision::mpfr_allocation_type)1>, (boost::multiprecision::expression_template_option)1>
    DONE
    
    ========================
    Testing:
       boost::multiprecision::number<boost::multiprecision::backends::mpfr_float_backend<100u, (boost::multiprecision::mpfr_allocation_type)1>, (boost::multiprecision::expression_template_option)1> against:
       boost::multiprecision::number<boost::multiprecision::backends::mpfr_float_backend<500u, (boost::multiprecision::mpfr_allocation_type)1>, (boost::multiprecision::expression_template_option)1>
    DONE
    
    opened by cosurgi 37
  • precision-semantics policy

    precision-semantics policy

    edit: i changed the title of this issue from "writing a complex class using mpfr_float that is precision-semantically equivalent to the new mpc type".

    within the Boost.Multiprecision library, there are some precision rules:

    • Copying or move-assignment copies the precision of the source.
    • Assignment keeps the precision of the target.

    for legacy purposes, i have to keep supporting users of my library with Boost versions that do not contain the new mpc implementation of complex numbers. i want my type to be precision-semantically identical with mpc. i am struggling with one thing, and, while i know this isn't a Boost.Multiprecision issue per se, since it is a use of bmp, i am hoping for some help.

    if this is too off-topic, please politely decline -- i'm feeling anxious about asking this already, admitting that i am ignorant about this; i'm a mathematician first and a programmer second.


    my class is the "obvious" implementation of complex numbers with a bmp::mpfr_float as the real and imaginary fields, and all the typical operator overloading, etc. it's not elegant but it works.

    my one and only family of failing tests relates to mixed-precision arithmetic. when i take complexes a and b, at different precisions, and, say, add them, stuffing them into a previously existing z at some other precision, the mpc policy is that z should keep its own precision after the arithmetic. however, with RVO and move assignment in the mix, my z ends up at whatever precision the result is at. i can fix this by making the move assignment operator preserve precision, but this breaks moving. no good. i feel like i need to distinguish between genuine moves and these cases coming from arithmetic, but that it's an impossible problem without re-writing the whole thing using ETs and basically replicating Boost.Multiprecision, a non-starter.

    here's the test that fails for my naive implementation, whereas this test passes for the glorious new mpc implementation:

    BOOST_AUTO_TEST_CASE(complex_precision_predictable_add)
    {
    	DefaultPrecision(30);
    	bertini::mpfr_complex a(1,2);
    
    	DefaultPrecision(50);
    	bertini::mpfr_complex b(3,4);
    
    	DefaultPrecision(70);
    	bertini::mpfr_complex c(5,6);
    
    	a = b+c;
    	BOOST_CHECK_EQUAL(Precision(a),30);
    
    	DefaultPrecision(90);
    	a = b+c;
    	BOOST_CHECK_EQUAL(Precision(a),30);
    }
    

    my obvious overload for +:

    inline custom_complex operator+(custom_complex const& lhs, custom_complex const& rhs){
    	return custom_complex(lhs.real()+rhs.real(), lhs.imag()+rhs.imag());
    }
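    The mpc assignment policy the failing test expects can be modeled in miniature. Below is a toy sketch (hypothetical code, not Boost, mpc, or bertini code): a scalar carrying a precision field, whose operator+ produces a temporary at the operands' precision, and whose assignment rounds the source into the target's existing precision rather than stealing it.

    ```cpp
    #include <cmath>

    // Toy model of the precision dilemma above. "precision" here is just a
    // decimal digit count, standing in for mpfr precision.
    struct toy_float
    {
        int prec;    // decimal digits kept
        double val;

        toy_float(int p, double v) : prec(p), val(round_to(v, p)) {}

        static double round_to(double v, int p)
        {
            double s = std::pow(10.0, p);
            return std::round(v * s) / s;
        }

        // mpc-style assignment: keep the target's precision, round the source
        // into it. Because this is used for temporaries too, "a = b + c"
        // leaves a at its own precision, as the test requires.
        toy_float& operator=(const toy_float& other)
        {
            val = round_to(other.val, prec);
            return *this;
        }
    };

    inline toy_float operator+(const toy_float& a, const toy_float& b)
    {
        // the temporary carries the higher operand precision, like the
        // naive complex class in the question
        return toy_float(a.prec > b.prec ? a.prec : b.prec, a.val + b.val);
    }
    ```

    The cost of this policy is exactly the trade-off described in the question: assignment from a temporary can no longer steal the temporary's storage, so cheap moves are given up in exchange for predictable precision.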
    

    should i give up and just accept that, because of RVO and move assignment, this is an impossible problem to solve without expression-templating my type and basically recreating Boost.Multiprecision itself at some level?

    my genuine thanks for any consideration. 🙇‍♀️

    opened by ofloveandhate 35
  • Thread contention with GMP backend

    Thread contention with GMP backend

    I compile the file gmp_threads.cxx with the command

    g++ gmp_threads.cxx -o gmp_threads -lgmp -lpthread -Iqft/src/boost_bin/include -g

    and run it under valgrind/helgrind with the command

    valgrind --tool=helgrind ./gmp_threads

    which gives these warnings. Looking into the details, it seems that boost::multiprecision is setting the precision in boost::multiprecision::detail::scoped_default_precision on line 123 of precision.hpp. I think it hands this off to gmp's internal routine to set precision. I think it is properly guarded by a mutex, so there is no obvious undefined behavior. However, locking this mutex causes other threads to block, which causes my multithreaded program running on 32 cores to run about as fast as a single core.

    I do not see this problem with Boost 1.68, but I do see it with 1.70 and later (including the current version on github).

    This PR makes the constructors and destructors of scoped_default_precision check whether the precision is actually changing before setting it.
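    The fix can be sketched with a toy model (hypothetical code, not the actual precision.hpp): the expensive, mutex-protected precision update is guarded by a cheap equality check, so the RAII scope only takes the lock when the precision really differs.

    ```cpp
    #include <mutex>

    namespace toy {

    std::mutex prec_mutex;
    unsigned default_precision = 50;
    unsigned lock_count = 0; // instrumentation: how often the lock was paid for

    unsigned get_precision() { return default_precision; } // assumed cheap read

    void set_precision(unsigned p)
    {
        std::lock_guard<std::mutex> g(prec_mutex); // contended on many cores
        ++lock_count;
        default_precision = p;
    }

    struct scoped_default_precision
    {
        unsigned saved;
        explicit scoped_default_precision(unsigned p) : saved(get_precision())
        {
            if (p != saved)              // the PR's idea: skip no-op updates
                set_precision(p);
        }
        ~scoped_default_precision()
        {
            if (get_precision() != saved)
                set_precision(saved);
        }
    };

    } // namespace toy
    ```

    With this guard, threads that all run at the same precision never touch the mutex, which is the common case in the report above.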

    opened by wlandry 34
  • Comba Multiplier

    Comba Multiplier

    The Comba multiplier is a computer-friendly multiplication routine that is a drop-in replacement for naive schoolbook multiplication.

    Algorithmic Differences: Consider writing both operands on two lines, one below the other.

    Naive schoolbook multiplication multiplies one digit of the first operand by all the digits of the other operand. This can be visualized as row multiplication, where each row is the second operand scaled by one digit; the computed rows are then added to obtain the final answer.

    The Comba multiplier can instead be thought of as a multiplication routine in which the digits of the answer are computed sequentially. In other words, it computes the result column-wise.

    For example:

      2               3
    x 7               8
    --------------------
      14  37(16+21)   24
    
    ans[0] = 8*3       = 24
    ans[1] = 2*8 + 3*7 = 37
    ans[2] = 2*7       = 14
    

    Benefits: There are fewer additions and assignments.

    Subtlety: It might not be evident how this method saves additions, because a compensating addition overflow += temp < carry is performed. But, taking advantage of the hardware, this can be reduced to adding the saved status of the carry flag to the overflow counter using the adc instruction. For more optimisation details see these compiler bugs:

    1. GCC 93141
    2. LLVM 44460
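    The column-wise scheme above can be sketched as follows (a hypothetical illustration, not the PR's implementation): each result limb is computed column by column, accumulating that column's partial products in a wide accumulator and counting accumulator wrap-arounds in a separate overflow counter, which is exactly the compensating addition described above.

    ```cpp
    #include <cstdint>

    // Comba-style multiplication of two n-limb numbers into a 2n-limb result.
    void comba_mul(const uint32_t* a, const uint32_t* b, uint32_t* r, int n)
    {
        uint64_t acc = 0;      // low 64 bits of the running column sum
        uint64_t overflow = 0; // number of times acc wrapped around
        for (int col = 0; col < 2 * n - 1; ++col)
        {
            // sum every product a[i] * b[j] with i + j == col
            int lo = col < n ? 0 : col - n + 1;
            int hi = col < n ? col : n - 1;
            for (int i = lo; i <= hi; ++i)
            {
                uint64_t t = (uint64_t)a[i] * b[col - i];
                acc += t;
                overflow += (acc < t); // compensating addition on wrap-around
            }
            r[col] = (uint32_t)acc;               // this column's digit
            acc = (acc >> 32) + (overflow << 32); // carry into the next column
            overflow = 0;
        }
        r[2 * n - 1] = (uint32_t)acc; // final carry-out
    }
    ```

    The `overflow += (acc < t)` line is the branch-free form of the carry check; on x86 a compiler can reduce it to a single adc, which is what the GCC and LLVM bug reports above are about.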
    opened by madhur4127 34
  • Faster gcd for double-limb-case.

    Faster gcd for double-limb-case.

    Using the performance test from @madhur4127 I see:

    Before:
    BM<cpp_int>/400        8202 ns         7952 ns        74667
    BM<cpp_int>/600       14937 ns        14579 ns        40727
    BM<cpp_int>/800       21660 ns        21449 ns        29867
    BM<cpp_int>/1000      25239 ns        25112 ns        28000
    BM<cpp_int>/1200      34091 ns        34424 ns        21333
    BM<cpp_int>/1400      43222 ns        43493 ns        15448
    BM<cpp_int>/1600      53205 ns        53013 ns        11200
    BM<cpp_int>/1800      62238 ns        62500 ns        10000
    BM<cpp_int>/2000      71333 ns        69754 ns         8960
    
    after:
    BM<cpp_int>/400        2251 ns         2176 ns       373333
    BM<cpp_int>/600        2659 ns         2651 ns       224000
    BM<cpp_int>/800        3826 ns         3770 ns       194783
    BM<cpp_int>/1000       4351 ns         4297 ns       160000
    BM<cpp_int>/1200       5064 ns         4604 ns       112000
    BM<cpp_int>/1400       6112 ns         6138 ns       112000
    BM<cpp_int>/1600       8036 ns         7952 ns       112000
    BM<cpp_int>/1800       9050 ns         8894 ns        89600
    BM<cpp_int>/2000      10376 ns        10463 ns        89600
    

    With msvc.

    opened by jzmaddock 30
  • Faster GCD and MOD (single limb)

    Faster GCD and MOD (single limb)

    Changes binary GCD to Euclid's algorithm for the first iteration of the GCD. This reduces the computation of an N x 1 gcd to a 1 x 1 gcd, which can then be finished by binary GCD.
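    The strategy can be sketched on single limbs (hypothetical code, not the PR's implementation): one Euclidean step (a mod) collapses the problem, and plain binary GCD finishes with only shifts and subtractions.

    ```cpp
    #include <cstdint>
    #include <utility>

    // Stein's binary GCD: removes common factors of 2, then repeatedly
    // subtracts the smaller odd value from the larger.
    uint64_t binary_gcd(uint64_t a, uint64_t b)
    {
        if (a == 0) return b;
        if (b == 0) return a;
        int shift = 0;
        while (((a | b) & 1) == 0) { a >>= 1; b >>= 1; ++shift; } // common 2s
        while ((a & 1) == 0) a >>= 1;
        do {
            while ((b & 1) == 0) b >>= 1;
            if (a > b) std::swap(a, b);
            b -= a;              // odd - odd is even, so the inner shift runs
        } while (b != 0);
        return a << shift;
    }

    // Euclid-first wrapper: a single mod reduces the magnitude (in the N x 1
    // case this would be an N-limb mod single-limb), then binary GCD finishes.
    uint64_t gcd1(uint64_t a, uint64_t b)
    {
        if (b == 0) return a;
        return binary_gcd(b, a % b); // the single Euclidean step
    }
    ```

    The point of doing only one Euclidean step is that division is expensive relative to the shifts and subtractions of binary GCD, but a single division is enough to bring a long operand down to one limb.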

    The numeral in each benchmark name is the number of bits (N) of the larger operand; the type used is shown alongside it.

    • 1st : Original cpp_int
    • 2nd : New cpp_int
    • 3rd : GMP's mpz_int
    -----------------------------------------------------------
    Benchmark                 Time             CPU   Iterations
    -----------------------------------------------------------
    BM<cpp_int>/400       10571 ns        10340 ns        68348
    BM<cpp_int>/600       18617 ns        18206 ns        38073
    BM<cpp_int>/800       26179 ns        25998 ns        26051
    BM<cpp_int>/1000      36157 ns        35827 ns        19687
    BM<cpp_int>/1200      45856 ns        45357 ns        15461
    BM<cpp_int>/1400      61913 ns        61316 ns        11331
    BM<cpp_int>/1600      76138 ns        73477 ns         9953
    BM<cpp_int>/1800      98735 ns        92880 ns         8363
    BM<cpp_int>/2000     108352 ns       107191 ns         7081
    
    BM<cpp_int>/400         997 ns          983 ns       710923
    BM<cpp_int>/600        1311 ns         1297 ns       539265
    BM<cpp_int>/800        1561 ns         1546 ns       468111
    BM<cpp_int>/1000       1770 ns         1749 ns       408779
    BM<cpp_int>/1200       1964 ns         1946 ns       342320
    BM<cpp_int>/1400       2286 ns         2237 ns       318498
    BM<cpp_int>/1600       2540 ns         2521 ns       274114
    BM<cpp_int>/1800       3021 ns         2949 ns       247813
    BM<cpp_int>/2000       3082 ns         3059 ns       220344
    
    BM<mpz_int>/400         271 ns          269 ns      2618283
    BM<mpz_int>/600         287 ns          285 ns      2407132
    BM<mpz_int>/800         297 ns          295 ns      2298375
    BM<mpz_int>/1000        331 ns          325 ns      2247878
    BM<mpz_int>/1200        346 ns          337 ns      2138969
    BM<mpz_int>/1400        361 ns          357 ns      2061188
    BM<mpz_int>/1600        548 ns          544 ns      1307383
    BM<mpz_int>/1800        567 ns          563 ns      1260716
    BM<mpz_int>/2000        596 ns          591 ns      1195748
    

    EDIT: I erroneously thought I had turned on O2

    opened by madhur4127 28
  • faster int sqrt

    faster int sqrt

    in the current implementation of integer sqrt there are comments saying that it's slow and should be rewritten to use the karatsuba sqrt method

    that's what I did

    I've written a fast 64 bit sqrt using std::sqrt with a correction step, because it's simple but still faster than the karatsuba implementation
    I've also written a newton sqrt, but it's about the same on small numbers, and much slower on big numbers (probably because of the relatively slow multiplication)
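    The "std::sqrt with correction" idea can be sketched like this (hypothetical code, not the PR's implementation): std::sqrt gives an estimate accurate to about 53 bits, so for 64-bit inputs the rounded result can be off by one in either direction and is fixed up with integer comparisons.

    ```cpp
    #include <cmath>
    #include <cstdint>

    // Integer square root of a 64-bit value via a corrected double estimate.
    uint64_t isqrt64(uint64_t x)
    {
        if (x == 0) return 0;
        uint64_t r = (uint64_t)std::sqrt((double)x);
        if (r > 0xFFFFFFFFull) r = 0xFFFFFFFFull; // sqrt(~2^64) can round up to 2^32
        while (r * r > x) --r;                                     // overshot
        while (r < 0xFFFFFFFFull && (r + 1) * (r + 1) <= x) ++r;   // undershot
        return r;
    }
    ```

    The clamp matters: for inputs near 2^64 the double estimate rounds up to exactly 2^32, whose square would overflow the 64-bit comparison.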

    also I wrote some tests, because as far as I understood, there was only one simple test checking the actual result, not the interfaces, const-correctness and such

    a little about motivation:

    I could use GMP, but on small (128-256 bit) integers fixed-size cpp_int is way faster than gmp, so for this kind of number it is actually faster to use cpp_int; but the sqrt is too slow, and was taking too much time
    on windows, the difference is even worse

    some benchmarks that I ran are here (you can find the code at https://github.com/leviska/boost-sqrt): sqrt https://pastebin.com/WvNzVH0g (the first number is the size of cpp_int, the second is the size of the integer); operations https://pastebin.com/Tk4mhAfq (you can see that for 128-256 bit integers cpp_int is actually faster)

    opened by leviska 27
  • Crash when initializing a cpp_bin_float_[...] with a specific value

    Crash when initializing a cpp_bin_float_[...] with a specific value

    When I run this code:

    uint64_t doubleAsInt = 0xBFDFFFFFFFFFFFFC;
    const auto value = std::bit_cast<double>(doubleAsInt);
    boost::multiprecision::cpp_bin_float_quad v = value;
    

    0xBFDFFFFFFFFFFFFC as double is -0.499999999999999777955395074...

    I get an error in trunc.hpp. arg is 2147483647.999999, so 0.999999 bigger than INT_MAX. I tested this with cpp_bin_float_quad and cpp_bin_float_oct

    A fix that works for me is (in itrunc ):

    if (arg >= static_cast<T>(INT_MAX) + static_cast<T>(1))
    {
        BOOST_MP_THROW_EXCEPTION(std::domain_error("arg cannot be converted into an int"));
    }
    
    return static_cast<int>(boost::multiprecision::detail::impl::trunc(arg));
    

    So it compares >= with INT_MAX + 1. As long as arg is smaller than INT_MAX + 1, arg can be represented correctly after truncation. Or is there something else I don't take into account? I will create a pull request for this.
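    The boundary case can be demonstrated in plain double (the issue concerns cpp_bin_float, but the comparison logic is the same): any arg with INT_MAX <= arg < INT_MAX + 1 truncates to exactly INT_MAX, so rejecting arg > INT_MAX is too strict while arg >= INT_MAX + 1 is not. The function names below are hypothetical, for illustration only.

    ```cpp
    #include <climits>

    // Old check: rejects values like 2147483647.5 even though they
    // truncate to INT_MAX, which is representable.
    bool convertible_old(double arg) { return !(arg > (double)INT_MAX); }

    // Proposed check: only rejects values whose truncation cannot fit.
    bool convertible_new(double arg) { return !(arg >= (double)INT_MAX + 1.0); }
    ```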

    And if this is a valid fix, it possibly should be applied to lltrunc, too.

    opened by schwubdiwub 0
  • cpp_dec_float incorrect constructing from string

    cpp_dec_float incorrect constructing from string

    When constructing cpp_dec_float from certain string values that don't represent valid numbers, we expect an exception to be thrown, but the object is successfully constructed:

    #include <iostream>
    #include <boost/multiprecision/cpp_dec_float.hpp>
    
    using boost::multiprecision::cpp_dec_float_50;
    
    int main()
    {
        std::cout.precision(10);
    
        cpp_dec_float_50 val1{"12a3.4"};
        std::cout << "no exception. val1=" << val1 << "\n";
    
        cpp_dec_float_50 val2{"1.2a34"};
        std::cout << "no exception. val2=" << val2 << "\n";
    
        return 0;
    }
    
    Output:
    no exception. val1=12.4
    no exception. val2=1.00000002
    

    This bug is reproduced on Boost 1.79.0 and 1.80.0. Also it is reproduced on current repository https://github.com/boostorg/boost.git (as of 2022-10-05). The behaviour is correct on Boost 1.78.0. Compilers: gcc 9.4.0 and gcc 10.3.0.

    opened by yaroslavchahovets 11
  • Error in square root for cpp_dec_float_100

    Error in square root for cpp_dec_float_100

    I recently updated boost from 1.71.0 to 1.80.0 and I started getting an error. After some investigation the problem arises from an incorrect rounding on the square root of 49 that makes it 7.000...01 instead of just 7. Here's the code that shows the issue

    #include <boost/multiprecision/cpp_dec_float.hpp>
    #include <iostream>
    
    using boost::multiprecision::cpp_dec_float_100;
    using std::string;
    
    int main() {
        cpp_dec_float_100 n(49);
        n = sqrt(n);
        if (n == (uint32_t) n)
            std::cout << "This should be logged as sqrt(49) = 7" << std::endl;
        else
            std::cout << "Error: sqrt(49) != 7" << std::endl;
    
        std::cout << std::endl << n.str() << std::endl;
        
        return 0;
    }
    
    Error: sqrt(49) != 7
    
    7.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001
    

    I also replicated the issue here with version 1.79.0. In version 1.71.0 the calculation was correct.

    opened by abeccaro 1
  • Ubuntu 18.04 to be deprecated in GHA

    Ubuntu 18.04 to be deprecated in GHA

    We need to be sure to look into this line in relation to the December planned deprecation of Ubuntu 18.04 on GHA.

    I just hit one of the four hour scheduled brown-outs inadvertently and had previously been unaware of this scheduled deprecation.

    Cc: @jzmaddock and @mborland and @NAThompson

    opened by ckormanyos 7
Releases(Boost_1_81_0)
  • Boost_1_81_0(Dec 16, 2022)

  • Boost_1_80_0(Aug 13, 2022)

    Standalone release of Boost.Multiprecision which can be used on its own without the rest of Boost, and/or in conjunction with the Boost.Math standalone release.

    Source code(tar.gz)
    Source code(zip)
  • v1.79(Apr 19, 2022)

    Initial standalone release of Multiprecision. This release can be used entirely independently from the rest of Boost, and/or in conjunction with a standalone Boost.Math release.

    Source code(tar.gz)
    Source code(zip)
Owner
Boost.org
Boost provides free peer-reviewed portable C++ source libraries.