<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.8.7">Jekyll</generator><link href="http://cppalliance.org/feed.xml" rel="self" type="application/atom+xml" /><link href="http://cppalliance.org/" rel="alternate" type="text/html" /><updated>2026-04-03T01:06:07+00:00</updated><id>http://cppalliance.org/feed.xml</id><title type="html">The C++ Alliance</title><subtitle>The C++ Alliance is dedicated to helping the C++ programming language evolve. We see it developing as an ecosystem of open source libraries and as a growing community of those who contribute to those libraries.</subtitle><entry><title type="html">Systems, CI Updates Q1 2026</title><link href="http://cppalliance.org/sam/2026/03/31/SamsQ1Update.html" rel="alternate" type="text/html" title="Systems, CI Updates Q1 2026" /><published>2026-03-31T00:00:00+00:00</published><updated>2026-03-31T00:00:00+00:00</updated><id>http://cppalliance.org/sam/2026/03/31/SamsQ1Update</id><content type="html" xml:base="http://cppalliance.org/sam/2026/03/31/SamsQ1Update.html">&lt;h3 id=&quot;code-coverage-reports---designing-new-gcovr-templates&quot;&gt;Code Coverage Reports - designing new GCOVR templates&lt;/h3&gt;

&lt;p&gt;A major effort this quarter, continuing from its mention in the last newsletter, has been the development of codecov-style coverage reports that run in GitHub Actions and are hosted on GitHub Pages. Instructions: &lt;a href=&quot;https://github.com/boostorg/boost-ci/blob/master/docs/code-coverage.md&quot;&gt;Code Coverage with Github Actions and Github Pages&lt;/a&gt;. The process has highlighted a phenomenon in open-source software: by publishing something to the whole community, you invite end-users to respond with their own suggestions and fixes, and everything improves iteratively. It would not have happened otherwise. The upstream GCOVR project has taken an interest in the templates, and we are working on merging them into the main repository for all gcovr users. Boost contributors and gcovr maintainers have suggested numerous modifications, including those below. Great work by Julio Estrada on the template development.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Better full page scrolling of C++ source code files&lt;/li&gt;
  &lt;li&gt;Include ‘functions’ listings on every page&lt;/li&gt;
  &lt;li&gt;Optionally disable branch coverage&lt;/li&gt;
  &lt;li&gt;Purposely restrict coverage directories to src/ and include/&lt;/li&gt;
  &lt;li&gt;Another scrolling bug fixed&lt;/li&gt;
  &lt;li&gt;Both blue and green colored themes&lt;/li&gt;
  &lt;li&gt;Codacy linting&lt;/li&gt;
  &lt;li&gt;New forward and back buttons that allow navigating to each “miss” and to subsequent pages&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;server-hosting&quot;&gt;Server Hosting&lt;/h3&gt;

&lt;p&gt;This quarter we decommissioned the Rackspace servers which had been in service 10-15 years. Rene provided a nice announcement:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://lists.boost.org/archives/list/boost@lists.boost.org/thread/XYFD42TTQRYHWTLGP6GCIZQ6NHCZLNQT/&quot;&gt;Farewell to Wowbagger - End of an Era for boost.org&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There was more to do than just delete servers. I built a new results.boost.org FTP server to replace the preexisting FTP server used by regression.boost.org, then configured and tested it. I inventoried the old machines, including a monitoring server, and built a replacement for wowbagger, called wowbagger2, to host a copy of the website at original.boost.org. The monthly cost of a small GCP Compute instance is around 5% of that of the legacy Rackspace cloud server. Components: Ubuntu 24.04, Apache, and a PHP 5 PPA. “original.boost.org” continues to host a copy of the earlier boost.org website, which is worth a look for comparison and development purposes.&lt;/p&gt;

&lt;p&gt;Launched server instances for corosio.org and paperflow.&lt;/p&gt;

&lt;h3 id=&quot;fil-c&quot;&gt;Fil-C&lt;/h3&gt;

&lt;p&gt;Working with Tom Kent to add &lt;a href=&quot;https://github.com/pizlonator/fil-c&quot;&gt;Fil-C&lt;/a&gt; testing into the &lt;a href=&quot;https://regression.boost.org/&quot;&gt;regression matrix&lt;/a&gt;. Built a Fil-C container image based on the Drone images and debugged the build process. After a few roadblocks, the latest news is that Fil-C seems to be building successfully. This is not quite finished but should be online soon.&lt;/p&gt;

&lt;h3 id=&quot;boost-release-process-boostorgrelease-tools&quot;&gt;Boost release process boostorg/release-tools&lt;/h3&gt;

&lt;p&gt;The boostorg/boost CircleCI jobs often threaten to cross the 1-hour time limit. I increased parallel processes from 4 to 8 and the instance size from medium to large. And yet another adjustment: the releases are published in 4 compressed formats (gz, bz2, 7z, zip), and drop-in replacement programs such as pigz and lbzip2 run much faster than the standard tools by parallelizing the work. The substitute binaries were recently applied in publish-releases.py, and the same idea is now in ci_boost_release.py. All of this reduced the CircleCI job time by many minutes.&lt;/p&gt;
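The swap works because the parallel tools are command-line compatible with the originals; a minimal sketch of the idea (file names are placeholders, not the actual release-tools code):

```shell
# Prefer parallel drop-in compressors when available; fall back to
# the standard single-threaded tools otherwise.
if command -v pigz >/dev/null; then GZ=pigz; else GZ=gzip; fi
if command -v lbzip2 >/dev/null; then BZ2=lbzip2; else BZ2=bzip2; fi

# Same command-line interface, so the substitution is transparent.
echo "release payload" > sample.txt
tar -cf sample.tar sample.txt
"$GZ" -kf sample.tar     # writes sample.tar.gz
"$BZ2" -kf sample.tar    # writes sample.tar.bz2
```

Because the replacements accept the same flags and produce compatible archives, scripts only need the tool name changed.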

&lt;p&gt;Certain boost library pull requests were finally merged after a long delay, allowing an upgrade of the Sphinx pip package. Tested a superproject container image for the CircleCI jobs with updated pip packages. Boost is currently in a code freeze, so this will not go live until after 1.91.0. Sphinx docs continue to run into upgrade incompatibilities, and I prepared another set of pull requests to send to boost libraries using Sphinx.&lt;/p&gt;

&lt;h3 id=&quot;doc-previews-and-doc-builds&quot;&gt;Doc Previews and Doc Builds&lt;/h3&gt;

&lt;p&gt;Antora docs usually show an “Edit this Page” link. Recently a couple of Alliance developers commented that the link didn’t quite work in some of the doc previews, which opened up research into making the Antora edit-this-page feature more robust. The issue is that Boost libraries are git submodules: when everything works as expected, a submodule is checked out as “HEAD detached at a74967f0” rather than on “develop”. If Antora’s edit-this-page code sees “HEAD detached at a74967f0”, it defaults to using HEAD in the path, which is wrong on the GitHub side. The solution we found (credit to Ruben Perez) is to set the Antora config to edit_url: ‘{web_url}/edit/develop/{path}’ and not leave a {ref}-style variable in the path.&lt;/p&gt;
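In an antora-playbook.yml this setting lives on the content source entry; a sketch under assumed values (the repository URL and start_path here are placeholders, not a specific Boost library):

```yaml
content:
  sources:
    - url: https://github.com/boostorg/example-library
      branches: develop
      start_path: doc
      # Pin the edit link to the develop branch; a {ref} placeholder
      # would resolve to the detached HEAD of the submodule checkout.
      edit_url: '{web_url}/edit/develop/{path}'
```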

&lt;p&gt;Rolling out the antora-downloads-extension to numerous boost and alliance repositories. It retries the ui-bundle download on failure.&lt;/p&gt;

&lt;p&gt;Refactored the release-tools build_docs scripts so that the gems and pip packages are organized into a format matching Gemfile and requirements.txt files, instead of the ad-hoc “gem install package” calls the scripts used before. By using a Gemfile, the script becomes compatible with other build systems, and the content can be copy-pasted easily.&lt;/p&gt;

&lt;p&gt;CircleCI superproject builds use docbook-xml.zip, whose download URL broke. Switched the link address, and we are also hosting a copy of the file at https://dl.cpp.al/misc/docbook-xml.zip&lt;/p&gt;

&lt;h3 id=&quot;boost-website-boostorgwebsite-v2&quot;&gt;Boost website boostorg/website-v2&lt;/h3&gt;

&lt;p&gt;Collaborated in on-boarding the consulting company Metalab, who are working on V3, the next iteration of the boost.org website.&lt;/p&gt;

&lt;p&gt;Disabled Fastly caching to assist Metalab developers.&lt;/p&gt;

&lt;p&gt;Gitflow workflow planning meetings.&lt;/p&gt;

&lt;p&gt;Discussions about how Tools should be presented on the libraries pages.&lt;/p&gt;

&lt;p&gt;On the DB servers, adjusted the PostgreSQL authentication configuration from md5 to scram-sha-256 on all databases and in multiple Ansible roles. This turns out to be a largely superficial change, although still worth making: newer PostgreSQL already uses scram-sha-256 behind the scenes regardless.&lt;/p&gt;
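For reference, the change amounts to settings like these (a sketch; the address range is a placeholder, not our actual configuration):

```ini
# postgresql.conf: hash newly set passwords with SCRAM
password_encryption = scram-sha-256

# pg_hba.conf: require SCRAM for password logins
# TYPE  DATABASE  USER  ADDRESS      METHOD
host    all       all   10.0.0.0/8   scram-sha-256
```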

&lt;p&gt;Wrote deploy-qa.sh, a script to enable Metalab QA engineers to deploy a pull request onto a test server. The precise git commit SHA of any open pull request can be tested.&lt;/p&gt;

&lt;p&gt;Wrote upload-images.sh, a script to store Bob Ostrom’s Boost cartoons in S3 instead of the GitHub repo.&lt;/p&gt;

&lt;h3 id=&quot;mailman3&quot;&gt;Mailman3&lt;/h3&gt;

&lt;p&gt;Synced production lists to the staging server, and wrote a document in the cppalliance/boost-mailman repo explaining the multi-step syncing process.&lt;/p&gt;

&lt;h3 id=&quot;boostorg&quot;&gt;boostorg&lt;/h3&gt;

&lt;p&gt;Migrated cppalliance/decimal to boostorg/decimal.&lt;/p&gt;

&lt;h3 id=&quot;jenkins&quot;&gt;Jenkins&lt;/h3&gt;

&lt;p&gt;The Jenkins server builds documentation previews for dozens of boostorg and cppalliance repositories. Each job is assigned its own “workspace” directory and then installs 1GB of node_modules, and that was happening for every build and every pull request. The disk on the server kept filling up, another 100GB every few weeks. Rather than continue to resize the disk, or delete jobs too aggressively, was there an opportunity for optimization? Yes. In the superproject container image the nodejs installation was relocated to /opt/nvm instead of root’s home directory, so it can be “shared” by all jobs, which reduces space. The scripts now conditionally check whether mermaid is needed and whether it is already available in /opt/nvm. With these modifications each job no longer installs a large set of npm packages, and the job size is drastically reduced.&lt;/p&gt;

&lt;p&gt;Upgraded the server and all plugins, which was necessary to fix spurious bugs in certain Jenkins jobs.&lt;/p&gt;

&lt;p&gt;While debugging the Jenkins runners, set the subnet and zone in the cloud server configurations.&lt;/p&gt;

&lt;p&gt;Fixed the lcov jobs, which need cxxstd=20.&lt;/p&gt;

&lt;p&gt;Migrated many administrative scripts from a local directory on the server to the jenkins-ci repository, revising, cleaning, and discarding certain scripts along the way.&lt;/p&gt;

&lt;p&gt;Dmitry contributed diff-reports that should now appear in every pull request which has been configured for LCOV previews.&lt;/p&gt;

&lt;p&gt;Implemented flags in the lcov build scripts: [--skip-gcovr] [--skip-genhtml] [--skip-diff-report] [--only-gcovr]&lt;/p&gt;

&lt;p&gt;Added an Ansible role task to install the check_jenkins_queue Nagios plugin automatically.&lt;/p&gt;

&lt;h3 id=&quot;gha&quot;&gt;GHA&lt;/h3&gt;

&lt;p&gt;Completed a major upgrade of the Terraform installation which had lagged upstream code by nearly two years.&lt;/p&gt;

&lt;p&gt;Deployed a series of GitHub Actions runners for Joaquin’s latest benchmarks at https://github.com/boostorg/boost_hub_benchmarks. Installed the latest VS2026 and upgraded macOS to 26.3.&lt;/p&gt;

&lt;h3 id=&quot;drone&quot;&gt;Drone&lt;/h3&gt;

&lt;p&gt;Launched new MacOS 26 drone runners, and FreeBSD 15.0 drone runners.&lt;/p&gt;</content><author><name></name></author><category term="sam" /><summary type="html">Code Coverage Reports - designing new GCOVR templates A major effort this quarter and continuing on since it was mentioned in the last newsletter is the development of codecov-like coverage reports that run in GitHub Actions and are hosted on GitHub Pages. Instructions: Code Coverage with Github Actions and Github Pages. The process has really highlighted a phenomenon in open-source software where by publishing something to the whole community, end-users respond back with their own suggestions and fixes, and everything improves iteratively. It would not have happened otherwise. The upstream GCOVR project has taken an interest in the templates and we are working on merging them into the main repository for all gcovr users. Boost contributors and gcovr maintainers have suggested numerous modifications for the templates. Great work by Julio Estrada on the template development. Better full page scrolling of C++ source code files Include ‘functions’ listings on every page Optionally disable branch coverage Purposely restrict coverage directories to src/ and include/ Another scrolling bug fixed Both blue and green colored themes Codacy linting New forward and back buttons. Allows navigation to each “miss” and subsequent pages Server Hosting This quarter we decommissioned the Rackspace servers which had been in service 10-15 years. Rene provided a nice announcement: Farewell to Wowbagger - End of an Era for boost.org There was more to do then just delete servers, I built a new results.boost.org FTP server replacing the preexisting FTP server used by regression.boost.org. Configured and tested it. Inventoried the old machines, including a monitoring server. Built a replacement wowbagger called wowbagger2 to host a copy of the website - original.boost.org. 
The monthly cost of a small GCP Compute instance seems to be around 5% of the Rackspace legacy cloud server. Components: Ubuntu 24.04. Apache. PHP 5 PPA. “original.boost.org” continues to host a copy of the earlier boost.org website for comparison and development purposes which is interesting to check. Launched server instances for corosio.org and paperflow. Fil-C Working with Tom Kent to add Fil-C https://github.com/pizlonator/fil-c test into the regression matrix https://regression.boost.org/ . Built a Fil-C container image based on Drone images. Debugging the build process. After a few roadblocks, the latest news is that Fil-C seems to be successfully building. This is not quite finished but should be online soon. Boost release process boostorg/release-tools The boostorg/boost CircleCI jobs often threaten to cross the 1-hour time limit. Increased parallel processes from 4 to 8. Increased instance size from medium to large. And yet another adjustment: there are 4 compression algorithms used by the releases (gz, bz2, 7z, zip) and it is possible to find drop-in replacement programs that go much faster than the standard ones by utilizing parallelization. lbzip2 pigz. The substitute binaries were applied to publish-releases.py recently. Now the same idea in ci_boost_release.py. All of this reduced the CircleCI job time by many minutes. Certain boost library pull requests were finally merged after a long delay allowing an upgrade of the Sphinx pip package. Tested a superproject container image for the CircleCI jobs with updated pip packages. Boost is currently in a code freeze so this will not go live until after 1.91.0. Sphinx docs continue to deal with upgrade incompatibilities. I prepared another set of pull requests to send to boost libraries using Sphinx. Doc Previews and Doc Builds Antora docs usually show an “Edit this Page” link. 
Recently a couple of Alliance developers happened to comment the link didn’t quite work in some of the doc previews, and so that opened a topic to research solutions and make the Antora edit-this-page feature more robust if possible. The issue is that Boost libraries are git submodules. When working as expected submodules are checked out as “HEAD detached at a74967f0” rather than “develop”. If Antora’s edit-this-page code sees “HEAD detached at a74967f0” it will default to the path HEAD. That’s wrong on the GitHub side. A solution we found (credit to Ruben Perez) is to set the antora config to edit_url: ‘{web_url}/edit/develop/{path}’. Don’t leave a {ref} type of variable in the path. Rolling out the antora-downloads-extension to numerous boost and alliance repositories. It will retry the ui-bundle download. Refactored the release-tools build_docs scripts so that the gems and pip packages are organized into a format that matches Gemfile and requirement.txt files, instead of what the script was doing before “gem install package”. By using a Gemfile, the script becomes compatible with other build systems so content can be copy-pasted easily. CircleCI superproject builds use docbook-xml.zip, where the download url broke. Switched the link address. Also hosting a copy of the file at https://dl.cpp.al/misc/docbook-xml.zip Boost website boostorg/website-v2 Collaborated in the process of on-boarding the consulting company Metalab who are working on V3, the next iteration of the boost.org website. Disable Fastly caching to assist metalab developers. Gitflow workflow planning meetings. Discussions about how Tools should be present on the libraries pages. On the DB servers, adjusted postgresql authentication configurations from md5 to scram-sha-256 on all databases and multiple ansible roles. Actually this turns out to be a superficial change even though it should be done. The reason is that newer postgres will use scram-sha-256 behind-the-scenes regardless. 
Wrote deploy-qa.sh, a script to enable metalab QA engineers to deploy a pull request onto a test server. The precise git SHA commit of any open pull request can be tested. Wrote upload-images.sh, a script to store Bob Ostrom’s boost cartoons in S3 instead of the github repo. Mailman3 Synced production lists to the staging server. Wrote a document in the cppalliance/boost-mailman repo explaining how the multi-step process of syncing can be done. boostorg Migrated cppalliance/decimal to boostorg/decimal. Jenkins The Jenkins server is building documentation previews for dozens of boostorg and cppalliance repositories where each job is assigned its own “workspace” directory and then proceeds to install 1GB of node_modules. That was happening for every build and every pull request. The disk space on the server was filling up, every few weeks yet another 100GB. Rather than continue to resize the disk, or delete all jobs too quickly, was there the opportunity for optimization? Yes. In the superproject container image relocate the nodejs installation to /opt/nvm instead of root’s home directory. The /opt/nvm installation can now be “shared” by other jobs which reduces space. Conditionally check if mermaid is needed and/or if mermaid is already available in /opt/nvm. With these modifications, since each job doesn’t need to install a large amount of npm packages the job size is drastically reduced. Upgraded server and all plugins. Necessary to fix spurious bugs in certain Jenkins jobs. Debugging Jenkins runners, set subnet and zone on the cloud server configurations. Fixed lcov jobs, that need cxxstd=20 Migrated many administrative scripts from a local directory on the server to the jenkins-ci repository. Revise, clean, discard certain scripts. Dmitry contributed diff-reports that should now appear in every pull request which has been configured for LCOV previews. 
Implemented –flags in lcov build scripts [–skip-gcovr] [–skip-genhtml] [–skip-diff-report] [–only-gcovr] Ansible role task: install check_jenkins_queue nagios plugin automatically from Ansible. GHA Completed a major upgrade of the Terraform installation which had lagged upstream code by nearly two years. Deployed a series of GitHub Actions runners for Joaquin’s latest benchmarks at https://github.com/boostorg/boost_hub_benchmarks. Installed latest VS2026. MacOS upgrade to 26.3. Drone Launched new MacOS 26 drone runners, and FreeBSD 15.0 drone runners.</summary></entry><entry><title type="html">Statement from the C++ Alliance on WG21 Committee Meeting Support</title><link href="http://cppalliance.org/company/2026/03/27/WG21-Meeting-Support-Statement.html" rel="alternate" type="text/html" title="Statement from the C++ Alliance on WG21 Committee Meeting Support" /><published>2026-03-27T00:00:00+00:00</published><updated>2026-03-27T00:00:00+00:00</updated><id>http://cppalliance.org/company/2026/03/27/WG21-Meeting-Support-Statement</id><content type="html" xml:base="http://cppalliance.org/company/2026/03/27/WG21-Meeting-Support-Statement.html">&lt;p&gt;The C++ Alliance is proud to support attendance at WG21 committee meetings. We believe that facilitating the attendance of domain experts produces better outcomes for C++ and for the broader ecosystem, and we are committed to making participation more accessible.&lt;/p&gt;

&lt;p&gt;We want to be unequivocally clear: the C++ Alliance does not, and will never, direct or compel attendees to vote in any particular way. Our support comes with no strings attached. Those who attend are free and encouraged to exercise their independent judgment on every proposal before the committee.&lt;/p&gt;

&lt;p&gt;The integrity of the WG21 standards process depends on the independence of its participants. We respect that process deeply, and any suggestion to the contrary does not reflect our values or our program.&lt;/p&gt;

&lt;p&gt;If you are interested in learning more about our attendance program, please reach out to us at &lt;a href=&quot;mailto:info@cppalliance.org&quot;&gt;info@cppalliance.org&lt;/a&gt;.&lt;/p&gt;</content><author><name></name></author><category term="company" /><summary type="html">The C++ Alliance is proud to support attendance at WG21 committee meetings. We believe that facilitating the attendance of domain experts produces better outcomes for C++ and for the broader ecosystem, and we are committed to making participation more accessible. We want to be unequivocally clear: the C++ Alliance does not, and will never, direct or compel attendees to vote in any particular way. Our support comes with no strings attached. Those who attend are free and encouraged to exercise their independent judgment on every proposal before the committee. The integrity of the WG21 standards process depends on the independence of its participants. We respect that process deeply, and any suggestion to the contrary does not reflect our values or our program. If you are interested in learning more about our attendance program, please reach out to us at info@cppalliance.org.</summary></entry><entry><title type="html">Corosio Beta: Coroutine-Native Networking for C++20</title><link href="http://cppalliance.org/mark/2026/03/11/Corosio-Beta-Coroutine-Native-Networking.html" rel="alternate" type="text/html" title="Corosio Beta: Coroutine-Native Networking for C++20" /><published>2026-03-11T00:00:00+00:00</published><updated>2026-03-11T00:00:00+00:00</updated><id>http://cppalliance.org/mark/2026/03/11/Corosio-Beta-Coroutine-Native-Networking</id><content type="html" xml:base="http://cppalliance.org/mark/2026/03/11/Corosio-Beta-Coroutine-Native-Networking.html">&lt;h1 id=&quot;corosio-beta-coroutine-native-networking-for-c20&quot;&gt;Corosio Beta: Coroutine-Native Networking for C++20&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;The C++ Alliance is releasing the Corosio beta, a networking library designed from the ground up for C++20 coroutines. We are inviting serious C++ developers to use it, break it, and tell us what needs to change before it goes to Boost formal review.&lt;/em&gt;&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;the-gap-c20-left-open&quot;&gt;The Gap C++20 Left Open&lt;/h2&gt;

&lt;p&gt;C++20 gave us coroutines. It did not give us networking to go with them. Boost.Asio has added coroutine support over the years, but its foundations were laid for a world of callbacks and completion handlers. Retrofitting coroutines onto that model produces code that works, but never quite feels like the language you are writing in. We decided to find out what networking looks like when you start over.&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;what-corosio-is&quot;&gt;What Corosio Is&lt;/h2&gt;

&lt;p&gt;Corosio is a coroutine-only networking library for C++20. It provides TCP sockets, acceptors, TLS streams, timers, and DNS resolution. Every operation is an awaitable. You write &lt;code&gt;co_await&lt;/code&gt; and the library handles executor affinity, cancellation, and frame allocation. No callbacks. No futures. No sender/receiver.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;auto [socket] = co_await acceptor.async_accept();
auto n = co_await socket.async_read_some(buffer);
co_await socket.async_write(response);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Corosio runs on Windows (IOCP), Linux (epoll), and macOS (kqueue). It targets GCC 12+, Clang 17+, and MSVC 14.34+, with no dependencies outside the standard library. Capy, its I/O foundation, is fetched automatically by CMake.&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;built-on-capy&quot;&gt;Built on Capy&lt;/h2&gt;

&lt;p&gt;Corosio is built on Capy, a coroutine I/O foundation library that ships alongside it. The core insight driving Capy’s design comes from Peter Dimov: &lt;em&gt;an API designed from the ground up to use C++20 coroutines can achieve performance and ergonomics which cannot otherwise be obtained.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Capy’s &lt;em&gt;IoAwaitable&lt;/em&gt; protocol ensures coroutines resume on the correct executor after I/O completes, without thread-local globals, implicit context, or manual dispatch. Cancellation follows the same forward-propagation model: stop tokens flow from the top of a coroutine chain to the platform API boundary, giving you uniform cancellation across all operations. Frame allocation uses thread-local recycling pools to achieve zero steady-state heap allocations after warmup.&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;what-we-are-asking-for&quot;&gt;What We Are Asking For&lt;/h2&gt;

&lt;p&gt;We are looking for feedback on correctness, ergonomics, platform behavior, documentation, and performance under real workloads. Specifically:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Does the executor affinity model hold up under production conditions?&lt;/li&gt;
  &lt;li&gt;Does cancellation behave correctly across complex coroutine chains?&lt;/li&gt;
  &lt;li&gt;Are there platform-specific edge cases in the IOCP, epoll, or kqueue backends?&lt;/li&gt;
  &lt;li&gt;Does the zero-allocation model hold in your deployment scenarios?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We are inviting serious C++ developers, especially if you have shipped networking code, to use it, break it, and tell us what your experience was. The Boost review process rewards libraries that arrive having already faced serious scrutiny.&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;get-it&quot;&gt;Get It&lt;/h2&gt;

&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;git clone https://github.com/cppalliance/corosio.git
cd corosio
cmake -S . -B build -G Ninja
cmake --build build
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Or with CMake FetchContent:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;include(FetchContent)
FetchContent_Declare(corosio
  GIT_REPOSITORY https://github.com/cppalliance/corosio.git
  GIT_TAG        develop
  GIT_SHALLOW    TRUE)
FetchContent_MakeAvailable(corosio)
target_link_libraries(my_app Boost::corosio)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Requires:&lt;/strong&gt; CMake 3.25+, GCC 12+ / Clang 17+ / MSVC 14.34+&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;resources&quot;&gt;Resources&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://github.com/cppalliance/corosio&quot;&gt;Corosio on GitHub&lt;/a&gt; – https://github.com/cppalliance/corosio&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://master.corosio.cpp.al/&quot;&gt;Corosio Docs&lt;/a&gt; – https://develop.corosio.cpp.al/&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://github.com/cppalliance/capy&quot;&gt;Capy on GitHub&lt;/a&gt; – https://github.com/cppalliance/capy&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://master.capy.cpp.al/&quot;&gt;Capy Docs&lt;/a&gt; – https://develop.capy.cpp.al/&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://github.com/cppalliance/corosio/issues&quot;&gt;File an Issue&lt;/a&gt; – https://github.com/cppalliance/corosio/issues&lt;/p&gt;</content><author><name></name></author><category term="mark" /><summary type="html">Corosio Beta: Coroutine-Native Networking for C++20 The C++ Alliance is releasing the Corosio beta, a networking library designed from the ground up for C++20 coroutines. We are inviting serious C++ developers to use it, break it, and tell us what needs to change before it goes to Boost formal review. The Gap C++20 Left Open C++20 gave us coroutines. It did not give us networking to go with them. Boost.Asio has added coroutine support over the years, but its foundations were laid for a world of callbacks and completion handlers. Retrofitting coroutines onto that model produces code that works, but never quite feels like the language you are writing in. We decided to find out what networking looks like when you start over. What Corosio Is Corosio is a coroutine-only networking library for C++20. It provides TCP sockets, acceptors, TLS streams, timers, and DNS resolution. Every operation is an awaitable. You write co_await and the library handles executor affinity, cancellation, and frame allocation. No callbacks. No futures. No sender/receiver. auto [socket] = co_await acceptor.async_accept(); auto n = co_await socket.async_read_some(buffer); co_await socket.async_write(response); Corosio runs on Windows (IOCP), Linux (epoll), and macOS (kqueue). It targets GCC 12+, Clang 17+, and MSVC 14.34+, with no dependencies outside the standard library. Capy, its I/O foundation, is fetched automatically by CMake. Built on Capy Corosio is built on Capy, a coroutine I/O foundation library that ships alongside it. The core insight driving Capy’s design comes from Peter Dimov: an API designed from the ground up to use C++20 coroutines can achieve performance and ergonomics which cannot otherwise be obtained. 
Capy’s IoAwaitable protocol ensures coroutines resume on the correct executor after I/O completes, without thread-local globals, implicit context, or manual dispatch. Cancellation follows the same forward-propagation model: stop tokens flow from the top of a coroutine chain to the platform API boundary, giving you uniform cancellation across all operations. Frame allocation uses thread-local recycling pools to achieve zero steady-state heap allocations after warmup. What We Are Asking For We are looking for feedback on correctness, ergonomics, platform behavior, documentation, and performance under real workloads. Specifically: Does the executor affinity model hold up under production conditions? Does cancellation behave correctly across complex coroutine chains? Are there platform-specific edge cases in the IOCP, epoll, or kqueue backends? Does the zero-allocation model hold in your deployment scenarios? We are inviting serious C++ developers, especially if you have shipped networking code, to use it, break it, and tell us what your experience was. The Boost review process rewards libraries that arrive having already faced serious scrutiny. Get It git clone https://github.com/cppalliance/corosio.git cd corosio cmake -S . 
-B build -G Ninja cmake --build build Or with CMake FetchContent: include(FetchContent) FetchContent_Declare(corosio GIT_REPOSITORY https://github.com/cppalliance/corosio.git GIT_TAG develop GIT_SHALLOW TRUE) FetchContent_MakeAvailable(corosio) target_link_libraries(my_app Boost::corosio) Requires: CMake 3.25+, GCC 12+ / Clang 17+ / MSVC 14.34+ Resources Corosio on GitHub – https://github.com/cppalliance/corosio Corosio Docs – https://develop.corosio.cpp.al/ Capy on GitHub – https://github.com/cppalliance/capy Capy Docs – https://develop.capy.cpp.al/ File an Issue – https://github.com/cppalliance/corosio/issues</summary></entry><entry><title type="html">A postgres library for Boost</title><link href="http://cppalliance.org/ruben/2026/01/23/Ruben2025Q4Update.html" rel="alternate" type="text/html" title="A postgres library for Boost" /><published>2026-01-23T00:00:00+00:00</published><updated>2026-01-23T00:00:00+00:00</updated><id>http://cppalliance.org/ruben/2026/01/23/Ruben2025Q4Update</id><content type="html" xml:base="http://cppalliance.org/ruben/2026/01/23/Ruben2025Q4Update.html">&lt;p&gt;Do you know Boost.MySQL? If you’ve been reading my posts, you probably do.
Many people have wondered ‘why not Postgres?’. Well, the time is now.
TL;DR: I’m writing the equivalent of Boost.MySQL, but for PostgreSQL.
You can find the code &lt;a href=&quot;https://github.com/anarthal/nativepg&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Since libPQ already exists and is a good library, the NativePG project
intends to be more ambitious than Boost.MySQL. In addition to the expected
Asio interface, I intend to provide a sans-io API that exposes primitives
like message serialization.&lt;/p&gt;

&lt;p&gt;Throughout this post, I will walk through the intended library design and the rationale
behind it.&lt;/p&gt;

&lt;h2 id=&quot;the-lowest-level-message-serialization&quot;&gt;The lowest level: message serialization&lt;/h2&gt;

&lt;p&gt;PostgreSQL clients communicate with the server using
a binary protocol on top of TCP, termed &lt;a href=&quot;https://www.postgresql.org/docs/current/protocol.html&quot;&gt;the frontend/backend protocol&lt;/a&gt;.
The protocol defines a set of messages used for interactions. For example, when running a query, the following happens:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;┌────────┐                                    ┌────────┐
│ Client │                                    │ Server │
└───┬────┘                                    └───┬────┘
    │                                             │
    │  Query                                      │
    │ ──────────────────────────────────────────&amp;gt; │
    │                                             │
    │                        RowDescription       │
    │ &amp;lt;────────────────────────────────────────── │
    │                                             │
    │                              DataRow        │
    │ &amp;lt;────────────────────────────────────────── │
    │                                             │
    │                        CommandComplete      │
    │ &amp;lt;────────────────────────────────────────── │
    │                                             │
    │                        ReadyForQuery        │
    │ &amp;lt;────────────────────────────────────────── │
    │                                             │
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;At the lowest layer, this library provides functions to serialize and parse
such messages. The goal here is to be as efficient as possible.
Parsing functions are non-allocating, and use an approach inspired by
Boost.Url collections.&lt;/p&gt;
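
&lt;p&gt;As an illustration of the wire format (this is not NativePG’s API; the function below is a hypothetical sketch), a &lt;code&gt;Query&lt;/code&gt; message is the tag byte &lt;code&gt;'Q'&lt;/code&gt;, a big-endian &lt;code&gt;int32&lt;/code&gt; length that counts itself but not the tag, the SQL text, and a NUL terminator:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &lt;cassert&gt;
#include &lt;cstdint&gt;
#include &lt;string_view&gt;
#include &lt;vector&gt;

// Append a big-endian 32-bit integer, as the protocol requires
void put_int32(std::vector&lt;unsigned char&gt;&amp; buf, std::uint32_t v)
{
    buf.push_back(static_cast&lt;unsigned char&gt;(v &gt;&gt; 24));
    buf.push_back(static_cast&lt;unsigned char&gt;(v &gt;&gt; 16));
    buf.push_back(static_cast&lt;unsigned char&gt;(v &gt;&gt; 8));
    buf.push_back(static_cast&lt;unsigned char&gt;(v));
}

// Serialize a Query message: tag 'Q', then a length that counts
// itself and the payload (but not the tag), then the SQL + NUL
std::vector&lt;unsigned char&gt; serialize_query(std::string_view sql)
{
    std::vector&lt;unsigned char&gt; buf;
    buf.push_back('Q');
    put_int32(buf, static_cast&lt;std::uint32_t&gt;(4 + sql.size() + 1));
    buf.insert(buf.end(), sql.begin(), sql.end());
    buf.push_back('\0');
    return buf;
}

int main()
{
    auto msg = serialize_query(&quot;SELECT 1&quot;);
    assert(msg.size() == 14);           // 1 tag + 4 length + 8 SQL + 1 NUL
    assert(msg[0] == 'Q' &amp;&amp; msg[4] == 13);
}
&lt;/code&gt;&lt;/pre&gt;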

&lt;h2 id=&quot;parsing-database-types&quot;&gt;Parsing database types&lt;/h2&gt;

&lt;p&gt;The PostgreSQL type system is quite rich. In addition to the usual SQL built-in types,
it supports advanced scalars like UUIDs, arrays and user-defined aggregates.&lt;/p&gt;

&lt;p&gt;When running a query, libPQ exposes retrieved data as either raw text or bytes.
This is what the server sends in the &lt;code&gt;DataRow&lt;/code&gt; packets shown above.
To do something useful with the data, users will likely need to parse and serialize
such types.&lt;/p&gt;

&lt;p&gt;The next layer of NativePG is in charge of providing such functions.
This will likely contain some extension points for users to plug in their types.
This is the general form of such functions:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;system::error_code parse(span&amp;lt;const std::byte&amp;gt; from, T&amp;amp; to, const connection_state&amp;amp;);
void serialize(const T&amp;amp; from, dynamic_buffer&amp;amp; to, const connection_state&amp;amp;);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Note that some types might require access to session configuration.
For instance, dates may be expressed using different wire formats depending
on the connection’s runtime settings.&lt;/p&gt;

&lt;p&gt;At the time of writing, only ints and strings are supported,
but this will be extended soon.&lt;/p&gt;
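
&lt;p&gt;To make the parsing side concrete, here is a simplified, self-contained sketch (the name &lt;code&gt;parse_int4&lt;/code&gt; and the &lt;code&gt;bool&lt;/code&gt; return are illustrative, not the library’s signature) of reading a binary-format 4-byte integer, which travels in network byte order:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &lt;cassert&gt;
#include &lt;cstddef&gt;
#include &lt;cstdint&gt;
#include &lt;span&gt;

// Parse a binary-format int4 (big-endian, exactly 4 bytes).
// Returns false on a size mismatch instead of an error_code, for brevity.
bool parse_int4(std::span&lt;const std::byte&gt; from, std::int32_t&amp; to)
{
    if (from.size() != 4)
        return false;
    std::uint32_t v = 0;
    for (std::byte b : from)
        v = (v &lt;&lt; 8) | std::to_integer&lt;std::uint32_t&gt;(b);
    to = static_cast&lt;std::int32_t&gt;(v);
    return true;
}

int main()
{
    const std::byte wire[] = {std::byte{0}, std::byte{0}, std::byte{1}, std::byte{42}};
    std::int32_t value = 0;
    assert(parse_int4(wire, value));
    assert(value == 298); // 1 * 256 + 42
}
&lt;/code&gt;&lt;/pre&gt;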

&lt;h2 id=&quot;composing-requests&quot;&gt;Composing requests&lt;/h2&gt;

&lt;p&gt;Efficiency in database communication is achieved with pipelining.
A network round-trip with the server is worth a thousand allocations in the client.
It is thus critical that:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The protocol properly supports pipelining. This is the case with PostgreSQL.&lt;/li&gt;
  &lt;li&gt;The client exposes an interface to it and makes it very easy to use.
libPQ achieves the first; NativePG intends to achieve the second.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;NativePG pipelines by default. In NativePG, a &lt;code&gt;request&lt;/code&gt; object is always
a pipeline:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;// Create a request
request req;

// These two queries will be executed as part of a pipeline
req.add_query(&quot;SELECT * FROM libs WHERE author = $1&quot;, {&quot;Ruben&quot;});
req.add_query(&quot;DELETE FROM libs WHERE author &amp;lt;&amp;gt; $1&quot;, {&quot;Ruben&quot;});
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Everything you may ask of the server can be added to a &lt;code&gt;request&lt;/code&gt;.
This includes preparing and executing statements, establishing
pipeline synchronization points, and so on.
It aims to be close enough to the protocol to be powerful,
while also exposing high-level functions to make things easier.&lt;/p&gt;

&lt;h2 id=&quot;reading-responses&quot;&gt;Reading responses&lt;/h2&gt;

&lt;p&gt;Like &lt;code&gt;request&lt;/code&gt;, the core response mechanism aims to be as close
to the protocol as possible. Since use cases here are much more varied,
there is no single &lt;code&gt;response&lt;/code&gt; class; instead, there is a concept.
This is what a &lt;code&gt;response_handler&lt;/code&gt; looks like:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;
struct my_handler {
    // Check that the handler is compatible with the request,
    // and prepare any required data structures. Called once at the beginning
    handler_setup_result setup(const request&amp;amp; req, std::size_t pipeline_offset);

    // Called once for every message received from the server
    // (e.g. `RowDescription`, `DataRow`, `CommandComplete`)
    void on_message(const any_request_message&amp;amp; msg);

    // The overall result of the operation (error_code + diagnostic string).
    // Called after the operation has finished.
    const extended_error&amp;amp; result() const;
};

&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Note that &lt;code&gt;on_message&lt;/code&gt; is not allowed to report errors.
Even if a handler encounters a problem with a message
(imagine finding a &lt;code&gt;NULL&lt;/code&gt; for a field where the user isn’t expecting one),
this is a user error, rather than a protocol error.
The handler should record the problem and surface it later via &lt;code&gt;result()&lt;/code&gt;.
Subsequent steps in the pipeline must not be affected by it.&lt;/p&gt;
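
&lt;p&gt;The contract can be sketched with toy message types standing in for the real ones: the handler records the first problem it sees and surfaces it through &lt;code&gt;result()&lt;/code&gt; once the operation finishes:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &lt;cassert&gt;
#include &lt;string&gt;
#include &lt;variant&gt;

// Toy stand-ins for the real message types
struct data_row { bool has_unexpected_null; };
struct command_complete {};
using any_message = std::variant&lt;data_row, command_complete&gt;;

struct counting_handler
{
    int rows = 0;
    std::string error; // plays the role of extended_error

    // Never throws and never returns an error: problems are recorded
    void on_message(const any_message&amp; msg)
    {
        if (auto* row = std::get_if&lt;data_row&gt;(&amp;msg))
        {
            ++rows;
            if (row-&gt;has_unexpected_null &amp;&amp; error.empty())
                error = &quot;unexpected NULL&quot;;
        }
    }

    // Queried once the operation has finished
    const std::string&amp; result() const { return error; }
};

int main()
{
    counting_handler h;
    h.on_message(data_row{false});
    h.on_message(data_row{true}); // problem recorded, processing continues
    h.on_message(command_complete{});
    assert(h.rows == 2);
    assert(h.result() == &quot;unexpected NULL&quot;);
}
&lt;/code&gt;&lt;/pre&gt;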

&lt;p&gt;This is powerful but very low-level. Using this mechanism, the library
exposes an interface to parse the result of a query into a user-supplied
struct, using Boost.Describe:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;struct library
{
    std::int32_t id;
    std::string name;
    std::string cpp_version;
};
BOOST_DESCRIBE_STRUCT(library, (), (id, name, cpp_version))

// ...
std::vector&amp;lt;library&amp;gt; libs;
auto handler = nativepg::into(libs); // this is a valid response_handler
&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id=&quot;network-algorithms&quot;&gt;Network algorithms&lt;/h2&gt;

&lt;p&gt;Given a user request and response handler, how do we send these to the server?
We need a set of network algorithms to achieve this. Some of these are trivial:
sending a request to the server is an &lt;code&gt;asio::write&lt;/code&gt; on the request’s buffer.
Others, however, are more involved:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Reading a pipeline response needs to verify, for security, that the
message sequence is what we expected, and to handle errors gracefully.&lt;/li&gt;
  &lt;li&gt;The handshake algorithm, in charge of authentication when we connect to the
server, needs to respond to server authentication challenges, which may
come in different forms.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Writing these using &lt;code&gt;asio::async_compose&lt;/code&gt; is problematic because:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;They become tied to Boost.Asio.&lt;/li&gt;
  &lt;li&gt;They are difficult to test.&lt;/li&gt;
  &lt;li&gt;They result in long compile times and code bloat due to templating.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At the moment, these are written as finite state machines, similar to
how OpenSSL behaves in non-blocking mode:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;// Reads the response of a pipeline (simplified).
// This is a hand-wired generator.
class read_response_fsm {
public:
    // User-supplied arguments: request and response
    read_response_fsm(const request&amp;amp; req, response_handler_ref handler);

    // Yielded to signal that we should read from the server
    struct read_args { span&amp;lt;std::byte&amp;gt; buffer; };

    // Yielded to signal that we're done
    struct done_args { system::error_code result; };

    variant&amp;lt;read_args, done_args&amp;gt;
    resume(connection_state&amp;amp;, system::error_code io_result, std::size_t bytes_transferred);
};
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The idea is that higher-level code should call &lt;code&gt;resume&lt;/code&gt; until it returns
a &lt;code&gt;done_args&lt;/code&gt; value. This decouples the protocol logic from the underlying I/O runtime.&lt;/p&gt;
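
&lt;p&gt;A minimal sketch of that driver loop, with a toy FSM standing in for &lt;code&gt;read_response_fsm&lt;/code&gt; and the socket read faked out, might look like this:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &lt;algorithm&gt;
#include &lt;cassert&gt;
#include &lt;cstddef&gt;
#include &lt;span&gt;
#include &lt;variant&gt;

// Toy stand-in for read_response_fsm: consumes bytes until it has
// seen a fixed total, then reports completion.
class toy_fsm
{
public:
    struct read_args { std::span&lt;std::byte&gt; buffer; };
    struct done_args { int result; };

    explicit toy_fsm(std::size_t total) : remaining_(total) {}

    // Like resume() above: consume the last transfer, then yield
    // either another read request or completion
    std::variant&lt;read_args, done_args&gt; resume(std::size_t bytes_transferred)
    {
        remaining_ -= bytes_transferred;
        if (remaining_ == 0)
            return done_args{0};
        return read_args{std::span&lt;std::byte&gt;(buf_, std::min(remaining_, sizeof buf_))};
    }

private:
    std::size_t remaining_;
    std::byte buf_[64];
};

// Higher-level driver: call resume() until done_args is yielded.
// A real driver would perform an (async) socket read for each read_args.
int drive(toy_fsm&amp; fsm)
{
    std::size_t transferred = 0;
    for (;;)
    {
        auto step = fsm.resume(transferred);
        if (auto* done = std::get_if&lt;toy_fsm::done_args&gt;(&amp;step))
            return done-&gt;result;
        transferred = std::get&lt;toy_fsm::read_args&gt;(step).buffer.size(); // fake a full read
    }
}

int main()
{
    toy_fsm fsm(150); // pretend the response is 150 bytes long
    assert(drive(fsm) == 0);
}
&lt;/code&gt;&lt;/pre&gt;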

&lt;p&gt;Since NativePG targets C++20, I’m considering rewriting this as a coroutine.
Boost.Capy (currently under development - hopefully part of Boost soon)
could be a good candidate for this.&lt;/p&gt;

&lt;h2 id=&quot;putting-everything-together-the-asio-interface&quot;&gt;Putting everything together: the Asio interface&lt;/h2&gt;

&lt;p&gt;At the end of the day, most users just want a &lt;code&gt;connection&lt;/code&gt; object they can easily
use. Once all the sans-io parts are working, writing it is pretty straightforward.
This is what end-user code looks like:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;// Create a connection
connection conn{co_await asio::this_coro::executor};

// Connect
co_await conn.async_connect(
    {.hostname = &quot;localhost&quot;, .username = &quot;postgres&quot;, .password = &quot;&quot;, .database = &quot;postgres&quot;}
);
std::cout &amp;lt;&amp;lt; &quot;Startup complete\n&quot;;

// Compose our request and response
request req;
req.add_query(&quot;SELECT * FROM libs WHERE author = $1&quot;, {&quot;Ruben&quot;});
std::vector&amp;lt;library&amp;gt; libs;

// Run the request
co_await conn.async_exec(req, into(libs));
&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id=&quot;auto-batch-connections&quot;&gt;Auto-batch connections&lt;/h2&gt;

&lt;p&gt;While &lt;code&gt;connection&lt;/code&gt; is good, experience has shown me that it’s still
too low-level for most users:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Connection establishment is manual with &lt;code&gt;async_connect&lt;/code&gt;.&lt;/li&gt;
  &lt;li&gt;No built-in reconnection or health checks.&lt;/li&gt;
  &lt;li&gt;No built-in concurrent execution of requests.
That is, &lt;code&gt;async_exec&lt;/code&gt; first writes the request, then reads the response.
Other requests may not be executed during this period.
This limits the connection’s throughput.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For this reason, NativePG will provide some higher-level interfaces
that will make server communication easier and more efficient.
To get a feel for what we need, we should first understand
the two main usage patterns that we expect.&lt;/p&gt;

&lt;p&gt;Most of the time, connections are used in a &lt;strong&gt;stateless&lt;/strong&gt; way.
For example, consider querying data from the server:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;request req;
req.add_query(&quot;SELECT * FROM libs WHERE author = $1&quot;, {&quot;Ruben&quot;});
co_await conn.async_exec(req, res);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This query does not mutate connection state in any way.
Other queries could be inserted before or after it without
making any difference.&lt;/p&gt;

&lt;p&gt;I plan to add a higher-level connection type, similar to
&lt;code&gt;redis::connection&lt;/code&gt; in Boost.Redis, that automatically
batches concurrent requests and handles reconnection.
The key differences with &lt;code&gt;connection&lt;/code&gt; would be:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Several independent tasks can share an auto-batch connection.
Doing this with a plain &lt;code&gt;connection&lt;/code&gt; is an error.&lt;/li&gt;
  &lt;li&gt;If several requests are queued at the same time,
the connection may send them together to the server using a single system call.&lt;/li&gt;
  &lt;li&gt;There is no &lt;code&gt;async_connect&lt;/code&gt; in an auto-batch connection.
Reconnection is handled automatically.&lt;/li&gt;
&lt;/ul&gt;
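
&lt;p&gt;The batching idea can be sketched with a toy queue (none of this is NativePG API): requests that arrive while the connection is busy are coalesced and flushed to the server with a single write:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &lt;cassert&gt;
#include &lt;string&gt;
#include &lt;vector&gt;

// Toy auto-batcher: queued requests are coalesced into one buffer,
// so one flush corresponds to one system call in a real implementation
struct batcher
{
    std::vector&lt;std::string&gt; queue;
    int writes = 0;

    void enqueue(std::string wire_bytes) { queue.push_back(std::move(wire_bytes)); }

    std::string flush()
    {
        std::string out;
        for (auto&amp; r : queue)
            out += r;
        queue.clear();
        ++writes;
        return out;
    }
};

int main()
{
    batcher b;
    b.enqueue(&quot;Q1&quot;);
    b.enqueue(&quot;Q2&quot;);
    b.enqueue(&quot;Q3&quot;);
    auto wire = b.flush();
    assert(wire == &quot;Q1Q2Q3&quot;);
    assert(b.writes == 1); // three requests, one write
}
&lt;/code&gt;&lt;/pre&gt;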

&lt;p&gt;Note that this pattern is not exclusive to read-only or
individual queries. Transactions can work by using protocol features:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;request req;
req.set_autosync(false); // All subsequent queries are part of the same transaction
req.add_query(&quot;UPDATE table1 SET x = $1 WHERE y = 2&quot;, {42});
req.add_query(&quot;UPDATE table2 SET x = $1 WHERE y = 42&quot;, {2});
req.add_sync(); // The two updates run atomically
co_await conn.async_exec(req, res);
&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id=&quot;connection-pools&quot;&gt;Connection pools&lt;/h2&gt;

&lt;p&gt;I mentioned there were two main usage scenarios in the library.
Sometimes, connections must be used in a &lt;strong&gt;stateful&lt;/strong&gt; way:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;request req;
req.add_simple_query(&quot;BEGIN&quot;); // start a transaction manually
req.add_query(&quot;SELECT * FROM library WHERE author = $1 FOR UPDATE&quot;, {&quot;Ruben&quot;}); // lock rows
co_await conn.async_exec(req, lib);

// Do something in the client that depends on lib
if (lib.name == &quot;Boost.MySQL&quot;)
    co_return; // don't

// Now compose another request that depends on what we read from lib
req.clear();
req.add_query(&quot;UPDATE library SET status = 'deprecated' WHERE id = $1&quot;, {lib.id});
req.add_simple_query(&quot;COMMIT&quot;);
co_await conn.async_exec(req, ignore);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The key point here is that this pattern requires exclusive access to &lt;code&gt;conn&lt;/code&gt;.
No other requests should be interleaved between the first and the second
&lt;code&gt;async_exec&lt;/code&gt; invocations.&lt;/p&gt;

&lt;p&gt;The best way to solve this is by using a connection pool.
This is what client code could look like:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;co_await pool.async_exec([&amp;amp;] (connection&amp;amp; conn) -&amp;gt; asio::awaitable&amp;lt;system::error_code&amp;gt; {
    request req;
    req.add_simple_query(&quot;BEGIN&quot;);
    req.add_query(&quot;SELECT balance, status FROM accounts WHERE user_id = $1 FOR UPDATE&quot;, {user_id});

    account_info acc;
    co_await conn.async_exec(req, into(acc));

    // Check if account has sufficient funds and is active
    if (acc.balance &amp;lt; payment_amount || acc.status != &quot;active&quot;)
        co_return error::insufficient_funds;

    // Call external payment gateway API - this CANNOT be done in SQL
    auto result = co_await payment_gateway.process_charge(user_id, payment_amount);

    // Compose next request based on the external API response
    req.clear();
    if (result.success) {
        req.add_query(
            &quot;UPDATE accounts SET balance = balance - $1 WHERE user_id = $2&quot;,
            {payment_amount, user_id}
        );
        req.add_simple_query(&quot;COMMIT&quot;);
    }
    co_await conn.async_exec(req, ignore);

    // The connection is automatically returned to the pool when this coroutine completes
    co_return result.success ? error_code{} : error::payment_failed;
});
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I explicitly want to avoid having a &lt;code&gt;connection_pool::async_get_connection()&lt;/code&gt;
function like the one in Boost.MySQL. Such a function returns a proxy object that grants access
to a free connection. When destroyed, the connection is returned to the pool.
This pattern looks great on paper, but runs into severe complications in
multi-threaded code. The proxy object’s destructor needs to mutate the pool’s state,
thus needing at least an &lt;code&gt;asio::dispatch&lt;/code&gt; to the pool’s executor, which may or may not
be a strand. It is so easy to get wrong that Boost.MySQL added a &lt;code&gt;pool_params::thread_safe&lt;/code&gt; boolean
option to take care of this automatically, adding extra complexity. Definitely something to avoid.&lt;/p&gt;

&lt;h2 id=&quot;sql-formatting&quot;&gt;SQL formatting&lt;/h2&gt;

&lt;p&gt;As we’ve seen, the protocol has built-in support for adding
parameters to queries (see placeholders like &lt;code&gt;$1&lt;/code&gt;). These placeholders
are expanded securely on the server.&lt;/p&gt;

&lt;p&gt;While this covers most cases, sometimes we need to generate SQL
that is too dynamic to be handled by the server. For instance,
a website might allow multiple optional filters, translating into
&lt;code&gt;WHERE&lt;/code&gt; clauses that might or might not be present.&lt;/p&gt;

&lt;p&gt;These use cases require SQL generated in the client. To do so,
we need a way of formatting user-supplied values without
running into SQL injection vulnerabilities. The final piece
of the library is thus a &lt;code&gt;format_sql&lt;/code&gt; function akin to the
one in Boost.MySQL.&lt;/p&gt;
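
&lt;p&gt;At its core, client-side formatting is about escaping values correctly. Here is a minimal sketch for string literals, assuming &lt;code&gt;standard_conforming_strings&lt;/code&gt; is on (so only single quotes need doubling; a real &lt;code&gt;format_sql&lt;/code&gt; must also handle character encodings and non-string types):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &lt;cassert&gt;
#include &lt;string&gt;
#include &lt;string_view&gt;

// Quote a string as a SQL literal by doubling embedded single quotes,
// e.g. O'Brien becomes 'O''Brien'
std::string quote_literal(std::string_view value)
{
    std::string out = &quot;'&quot;;
    for (char c : value)
    {
        out += c;
        if (c == '\'')
            out += '\''; // double the quote to escape it
    }
    out += '\'';
    return out;
}

int main()
{
    std::string sql = &quot;SELECT * FROM libs WHERE author = &quot; + quote_literal(&quot;O'Brien&quot;);
    assert(sql == &quot;SELECT * FROM libs WHERE author = 'O''Brien'&quot;);
}
&lt;/code&gt;&lt;/pre&gt;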

&lt;h2 id=&quot;final-thoughts&quot;&gt;Final thoughts&lt;/h2&gt;

&lt;p&gt;While the plan is clear, there is still much to be done here.
There are dedicated APIs for high-throughput data copying and
push notifications that need to be implemented. Some of the described
APIs have a solid working implementation, while others still need
some work. All in all, I hope that this library can soon reach a state
where it can be useful to people.&lt;/p&gt;</content><author><name></name></author><category term="ruben" /><summary type="html">Do you know Boost.MySQL? If you’ve been reading my posts, you probably do. Many people have wondered ‘why not Postgres?’. Well, the time is now. TL;DR: I’m writing the equivalent of Boost.MySQL, but for PostgreSQL. You can find the code here. Since libPQ is already a good library, the NativePG project intends to be more ambitious than Boost.MySQL. In addition to the expected Asio interface, I intend to provide a sans-io API that exposes primitives like message serialization. Throughout this post, I will go into the intended library design and the rationales behind its design. The lowest level: message serialization PostgreSQL clients communicate with the server using a binary protocol on top of TCP, termed the frontend/backend protocol. The protocol defines a set of messages used for interactions. For example, when running a query, the following happens: ┌────────┐ ┌────────┐ │ Client │ │ Server │ └───┬────┘ └───┬────┘ │ │ │ Query │ │ ──────────────────────────────────────────&amp;gt; │ │ │ │ RowDescription │ │ &amp;lt;────────────────────────────────────────── │ │ │ │ DataRow │ │ &amp;lt;────────────────────────────────────────── │ │ │ │ CommandComplete │ │ &amp;lt;────────────────────────────────────────── │ │ │ │ ReadyForQuery │ │ &amp;lt;────────────────────────────────────────── │ │ │ In the lowest layer, this library provides functions to serialize and parse such messages. The goal here is being as efficient as possible. Parsing functions are non-allocating, and use an approach inspired by Boost.Url collections: Parsing database types The PostgreSQL type system is quite rich. In addition to the usual SQL built-in types, it supports advanced scalars like UUIDs, arrays and user-defined aggregates. When running a query, libPQ exposes retrieved data as either raw text or bytes. 
This is what the server sends in the DataRow packets shown above. To do something useful with the data, users likely need parsing and serializing such types. The next layer of NativePG is in charge of providing such functions. This will likely contain some extension points for users to plug in their types. This is the general form of such functions: system::error_code parse(span&amp;lt;const std::byte&amp;gt; from, T&amp;amp; to, const connection_state&amp;amp;); void serialize(const T&amp;amp; from, dynamic_buffer&amp;amp; to, const connection_state&amp;amp;); Note that some types might require access to session configuration. For instance, dates may be expressed using different wire formats depending on the connection’s runtime settings. At the time of writing, only ints and strings are supported, but this will be extended soon. Composing requests Efficiency in database communication is achieved with pipelining. A network round-trip with the server is worth a thousand allocations in the client. It is thus critical that: The protocol properly supports pipelining. This is the case with PostgreSQL. The client should expose an interface to it, and make it very easy to use. libPQ does the first, and NativePG intends to achieve the second. NativePG pipelines by default. In NativePG, a request object is always a pipeline: // Create a request request req; // These two queries will be executed as part of a pipeline req.add_query(&quot;SELECT * FROM libs WHERE author = $1&quot;, {&quot;Ruben&quot;}); req.add_query(&quot;DELETE FROM libs WHERE author &amp;lt;&amp;gt; $1&quot;, {&quot;Ruben&quot;}); Everything you may ask the server can be added to request. This includes preparing and executing statements, establishing pipeline synchronization points, and so on. It aims to be close enough to the protocol to be powerful, while also exposing high-level functions to make things easier. 
Reading responses Like request, the core response mechanism aims to be as close to the protocol as possible. Since use cases here are much more varied, there is no single response class, but a concept, instead. This is what a response_handler looks like: struct my_handler { // Check that the handler is compatible with the request, // and prepare any required data structures. Called once at the beginning handler_setup_result setup(const request&amp;amp; req, std::size_t pipeline_offset); // Called once for every message received from the server // (e.g. `RowDescription`, `DataRow`, `CommandComplete`) void on_message(const any_request_message&amp;amp; msg); // The overall result of the operation (error_code + diagnostic string). // Called after the operation has finished. const extended_error&amp;amp; result() const; }; Note that on_message is not allowed to report errors. Even if a handler encounters a problem with a message (imagine finding a NULL for a field where the user isn’t expecting one), this is a user error, rather than a protocol error. Subsequent steps in the pipeline must not be affected by this. This is powerful but very low-level. Using this mechanism, the library exposes an interface to parse the result of a query into a user-supplied struct, using Boost.Describe: struct library { std::int32_t id; std::string name; std::string cpp_version; }; BOOST_DESCRIBE_STRUCT(library, (), (id, name, cpp_version)) // ... std::vector&amp;lt;library&amp;gt; libs; auto handler = nativepg::into(libs); // this is a valid response_handler Network algorithms Given a user request and response handler, how do we send these to the server? We need a set of network algorithms to achieve this. Some of these are trivial: sending a request to the server is an asio::write on the request’s buffer. Others, however, are more involved: Reading a pipeline response needs to verify that the message sequence is what we expected, for security, and handle errors gracefully. 
The handshake algorithm, in charge of authentication when we connect to the server, needs to respond to server authentication challenges, which may come in different forms. Writing these using asio::async_compose is problematic because: They become tied to Boost.Asio. They are difficult to test. They result in long compile times and code bloat due to templating. At the moment, these are written as finite state machines, similar to how OpenSSL behaves in non-blocking mode: // Reads the response of a pipeline (simplified). // This is a hand-wired generator. class read_response_fsm { public: // User-supplied arguments: request and response read_response_fsm(const request&amp;amp; req, response_handler_ref handler); // Yielded to signal that we should read from the server struct read_args { span&amp;lt;std::byte&amp;gt; buffer; }; // Yielded to signal that we're done struct done_args { system::error_code result; }; variant&amp;lt;read_args, done_args&amp;gt; resume(connection_state&amp;amp;, system::error_code io_result, std::size_t bytes_transferred); }; The idea is that higher-level code should call resume until it returns a done_args value. This allows de-coupling from the underlying I/O runtime. Since NativePG targets C++20, I’m considering rewriting this as a coroutine. Boost.Capy (currently under development - hopefully part of Boost soon) could be a good candidate for this. Putting everything together: the Asio interface At the end of the day, most users just want a connection object they can easily use. Once all the sans-io parts are working, writing it is pretty straight-forward. 
This is what end user code looks like: // Create a connection connection conn{co_await asio::this_coro::executor}; // Connect co_await conn.async_connect( {.hostname = &quot;localhost&quot;, .username = &quot;postgres&quot;, .password = &quot;&quot;, .database = &quot;postgres&quot;} ); std::cout &amp;lt;&amp;lt; &quot;Startup complete\n&quot;; // Compose our request and response request req; req.add_query(&quot;SELECT * FROM libs WHERE author = $1&quot;, {&quot;Ruben&quot;}); std::vector&amp;lt;library&amp;gt; libs; // Run the request co_await conn.async_exec(req, into(libs)); Auto-batch connections While connection is good, experience has shown me that it’s still too low-level for most users: Connection establishment is manual with async_connect. No built-in reconnection or health checks. No built-in concurrent execution of requests. That is, async_exec first writes the request, then reads the response. Other requests may not be executed during this period. This limits the connection’s throughput. For this reason, NativePG will provide some higher-level interfaces that will make server communication easier and more efficient. To get a feel of what we need, we should first understand the two main usage patterns that we expect. Most of the time, connections are used in a stateless way. For example, consider querying data from the server: request req; req.add_query(&quot;SELECT * FROM libs WHERE author = $1&quot;, {&quot;Ruben&quot;}); co_await conn.async_exec(req, res); This query is not mutating connection state in any way. Other queries could be inserted before and after it without making any difference. I plan to add a higher-level connection type, similar to redis::connection in Boost.Redis, that automatically batches concurrent requests and handles reconnection. The key differences with connection would be: Several independent tasks can share an auto-batch connection. This is an error for connection. 
If several requests are queued at the same time, the connection may send them together to the server using a single system call. There is no async_connect in an auto-batch connection. Reconnection is handled automatically. Note that this pattern is not exclusive to read-only or individual queries. Transactions can work by using protocol features: request req; req.set_autosync(false); // All subsequent queries are part of the same transaction req.add_query(&quot;UPDATE table1 SET x = $1 WHERE y = 2&quot;, {42}); req.add_query(&quot;UPDATE table2 SET x = $1 WHERE y = 42&quot;, {2}); req.add_sync(); // The two updates run atomically co_await conn.async_exec(req, res); Connection pools I mentioned there were two main usage scenarios in the library. Sometimes, it is required to use connections in a stateful way: request req; req.add_simple_query(&quot;BEGIN&quot;); // start a transaction manually req.add_query(&quot;SELECT * FROM library WHERE author = $1 FOR UPDATE&quot;, {&quot;Ruben&quot;}); // lock rows co_await conn.async_exec(req, lib); // Do something in the client that depends on lib if (lib.id == &quot;Boost.MySQL&quot;) co_return; // don't // Now compose another request that depends on what we read from lib req.clear(); req.add_query(&quot;UPDATE library SET status = 'deprecated' WHERE id = $1&quot;, {lib.id}); req.add_simple_query(&quot;COMMIT&quot;); co_await conn.async_exec(req, ignore); The key point here is that this pattern requires exclusive access to conn. No other requests should be interleaved between the first and the second async_exec invocations. The best way to solve this is by using a connection pool. 
This is what client code could look like: co_await pool.async_exec([&amp;amp;] (connection&amp;amp; conn) -&amp;gt; asio::awaitable&amp;lt;system::error_code&amp;gt; { request req; req.add_simple_query(&quot;BEGIN&quot;); req.add_query(&quot;SELECT balance, status FROM accounts WHERE user_id = $1 FOR UPDATE&quot;, {user_id}); account_info acc; co_await conn.async_exec(req, into(acc)); // Check if account has sufficient funds and is active if (acc.balance &amp;lt; payment_amount || acc.status != &quot;active&quot;) co_return error::insufficient_funds; // Call external payment gateway API - this CANNOT be done in SQL auto result = co_await payment_gateway.process_charge(user_id, payment_amount); // Compose next request based on the external API response req.clear(); if (result.success) { req.add_query( &quot;UPDATE accounts SET balance = balance - $1 WHERE user_id = $2&quot;, {payment_amount, user_id} ); req.add_simple_query(&quot;COMMIT&quot;); } co_await conn.async_exec(req, ignore); // The connection is automatically returned to the pool when this coroutine completes co_return result.success ? error_code{} : error::payment_failed; }); I explicitly want to avoid having a connection_pool::async_get_connection() function, like in Boost.MySQL. This function returns a proxy object that grants access to a free connection. When destroyed, the connection is returned to the pool. This pattern looks great on paper, but runs into severe complications in multi-threaded code. The proxy object’s destructor needs to mutate the pool’s state, thus needing at least an asio::dispatch to the pool’s executor, which may or may not be a strand. It is so easy to get wrong that Boost.MySQL added a pool_params::thread_safe boolean option to take care of this automatically, adding extra complexity. Definitely something to avoid. SQL formatting As we’ve seen, the protocol has built-in support for adding parameters to queries (see placeholders like $1). 
These placeholders are expanded in the server securely. While this covers most cases, sometimes we need to generate SQL that is too dynamic to be handled by the server. For instance, a website might allow multiple optional filters, translating into WHERE clauses that might or might not be present. These use cases require SQL generated in the client. To do so, we need a way of formatting user-supplied values without running into SQL injection vulnerabilities. The final piece of the library becomes a format_sql function akin to the one in Boost.MySQL. Final thoughts While the plan is clear, there is still much to be done here. There are dedicated APIs for high-throughput data copying and push notifications that need to be implemented. Some of the described APIs have a solid working implementation, while others still need some work. All in all, I hope that this library can soon reach a state where it can be useful to people.</summary></entry><entry><title type="html">Systems, CI Updates Q4 2025</title><link href="http://cppalliance.org/sam/2026/01/22/SamsQ4Update.html" rel="alternate" type="text/html" title="Systems, CI Updates Q4 2025" /><published>2026-01-22T00:00:00+00:00</published><updated>2026-01-22T00:00:00+00:00</updated><id>http://cppalliance.org/sam/2026/01/22/SamsQ4Update</id><content type="html" xml:base="http://cppalliance.org/sam/2026/01/22/SamsQ4Update.html">&lt;h3 id=&quot;doc-previews-and-doc-builds&quot;&gt;Doc Previews and Doc Builds&lt;/h3&gt;

&lt;p&gt;The pull request to isomorphic-git, “Support git commands run in submodules”, was merged and released in the latest version (see the previous post for an explanation). The commit modified 153 files, covering all the git API commands and the tests applying to each one. The next step is for upstream Antora to adjust package.json to refer to the newer isomorphic-git so it will be distributed along with Antora. Since isomorphic-git is used far more widely than just in Antora, its userbase is already field-testing the new version.&lt;/p&gt;

&lt;p&gt;Created an Antora extension, https://github.com/cppalliance/antora-downloads-extension, that retries ui-bundle downloads. Boost Superproject builds sometimes fail because of Antora download failures. I am now rolling out this extension to all affected repositories; it must be included in each playbook that downloads the bundle as part of the build process.&lt;/p&gt;

&lt;p&gt;Adjusted doc previews to update the existing PR comments instead of posting many new ones, to reduce the email spam effect. The job modifies a timestamp in the PR comment, which lets developers see the most recent build time and whether the pages rebuilt successfully. Implementing this required solving some puzzles, since Jenkins jobs are usually stateless and don’t know whether they previously posted a comment, or which comment should be modified across subsequent job runs. It turns out there is a “Build with Parameters” feature, and properties/parameters can be saved in the job.&lt;/p&gt;

&lt;h3 id=&quot;boost-website-boostorgwebsite-v2&quot;&gt;Boost website boostorg/website-v2&lt;/h3&gt;

&lt;p&gt;Lowered the CPU threshold on the horizontal pod autoscaler to scale pods more rapidly when there is increased traffic.&lt;/p&gt;

&lt;p&gt;Set redirects to 301 “moved permanently” for web visitors who arrive at the wrong domain or URL, and reduced the number of redirect hops by sending visitors directly to the final URL, www.boost.org.&lt;/p&gt;

&lt;p&gt;Investigated a bug where PDF files were timing out and crashing the server; such files should not be parsed by Beautiful Soup or lxml.&lt;/p&gt;

&lt;p&gt;During this quarter we published Boost 1.90.0. I worked closely with the release managers to resolve problems during the release; for example, the boost.org website was not fully updating after importing the new version.&lt;/p&gt;

&lt;p&gt;Attended meetings about the CMS feature and other topics, along with many general discussions about website issues.&lt;/p&gt;

&lt;h3 id=&quot;mailman3&quot;&gt;Mailman3&lt;/h3&gt;

&lt;p&gt;When unmoderating a new user on Mailman 3, an administrator must click a drop-down and select “Default Processing” so that the subscriber may send emails directly to the list rather than continue to be moderated. I have started developing an enhancement in Postorius that offers one simple button, “Accept and Unmoderate”, streamlining the process. However, as often happens with new and radical ideas sent to the Mailman maintainers, they put up roadblocks. While I believe the new feature is promising because it lets administrators unmoderate users quickly, the future of the pull request is uncertain.&lt;/p&gt;

&lt;h3 id=&quot;boost-ci&quot;&gt;boost-ci&lt;/h3&gt;

&lt;p&gt;Created a Fastly CDN mirror of keyserver.ubuntu.com at keyserver.boost.org. If keyserver.ubuntu.com experiences occasional outages but keys are cached on the CDN mirror, then CI jobs will be able to proceed without difficulty. Configured both Drone and boost-ci to use the CDN at keyserver.boost.org.&lt;/p&gt;

&lt;h3 id=&quot;jenkins&quot;&gt;Jenkins&lt;/h3&gt;

&lt;p&gt;Set up Beast2 doc previews, Capy previews, JSON lcov jobs, and OpenMethod doc previews.&lt;/p&gt;

&lt;p&gt;Modified email notifications to send ‘recovery’-type messages when previously failed jobs succeed again, along with other enhancements to Jenkins jobs.&lt;/p&gt;

&lt;h3 id=&quot;boost-release-process-boostorgrelease-tools&quot;&gt;Boost release process boostorg/release-tools&lt;/h3&gt;

&lt;p&gt;When building releases with publish-release.py, generate “nodocs” copies of the Boost releases and upload them to archives.boost.io. The “nodocs” versions are now functional. Anyone who would like to accelerate their CI build process can set the target URL to a nodocs archive such as https://archives.boost.io/release/1.90.0/source-nodocs/boost_1_90_0.tar.gz.&lt;/p&gt;

&lt;p&gt;Migrated the release workstation instance from GCP to AWS so that during the next Boost release 1.91.0 the server will be closer to AWS S3, allowing file uploads to transfer faster. Designed a mechanism that resizes the server instance on a cron schedule via GHA. Most of the time it’s quite small, but during releases the server is allocated more CPU.&lt;/p&gt;

&lt;h3 id=&quot;drone&quot;&gt;Drone&lt;/h3&gt;

&lt;p&gt;Microsoft Windows - VS2026 container image.&lt;br /&gt;
Ubuntu 25.10 container image.&lt;/p&gt;

&lt;h3 id=&quot;gha&quot;&gt;GHA&lt;/h3&gt;

&lt;p&gt;Added CI jobs to build “documentation” in the boostorg/container repository. GHA will test doc builds, which helps when debugging modifications to documentation.&lt;/p&gt;

&lt;p&gt;Fil-C is a “fanatically compatible memory-safe implementation of C and C++.” https://github.com/pizlonator/fil-c  Upon request, I composed an example Fil-C GitHub Actions job at https://github.com/sdarwin/fil-c-demo which was then applied by developers in other Boost repositories.&lt;/p&gt;</content><author><name></name></author><category term="sam" /><summary type="html">Doc Previews and Doc Builds The pull request to isomorphic-git “Support git commands run in submodules” was merged, and released in the latest version. (See previous post for an explanation). The commit modified 153 files, all the git api commands, and tests applying to each one. The next step is for upstream Antora to adjust package.json and refer to the newer isomorphic-git so it will be distributed along with Antora. Since isomorphic-git is more widely used than just Antora, their userbase is already field testing the new version. Created an antora extension https://github.com/cppalliance/antora-downloads-extension that will retry ui-bundle downloads. The Boost Superproject builds sometimes fail because of Antora download failures. I am now in the process of rolling out this extension to all affected repositories. It must be included in each playbook if that playbook downloads the bundle as part of the build process. Adjusted doc previews to update the existing PR comments instead of posting many new ones, to reduce the email spam effect. The job will modify a timestamp in the PR comment which allows developers to see the most recent build time and if the pages rebuilt successfully. I needed to solve some puzzles to implement this, since usually Jenkins jobs are stateless and don’t know if they previously posted a comment, or which comment it was that should be modified across subsequent jobs runs. It turns out there is a feature “Build with Parameters”, and properties/parameters can be saved in the job. 
Boost website boostorg/website-v2 Lowered the CPU threshold on the horizontal pod autoscaler to scale pods more rapidly when there is increased traffic. When web visitors go to the wrong domain or URL, set the redirects to 301 “moved permanently”. Reduced the number of redirect hops by sending visitors directly to the final URL www.boost.org. Investigated a bug where PDF files were timing out and crashing the server. Those should not be parsed by beautiful soup or lxml. During this quarter we published boost 1.90.0. Worked closely with the release managers to resolve problems during the release. The boost.org website was not fully updating after importing the new version. Meetings about CMS feature, other topics. Many general discussions about website issues. Mailman3 When unmoderating a new user on mailman3 an administrator must click a drop-down and select “Default Processing” so this subscriber may send emails directly to the list and not continue to be moderated. I have started developing an enhancement in Postorius whereby there is one simple button “Accept and Unmoderate” thus streamlining the process. However as often happens with new and radical ideas sent to the Mailman maintainers, they put up roadblocks. While I believe the new feature is promising, and it is helpful to quickly unmoderate users, without skipping that step, the future of the pull request is uncertain. boost-ci Created a Fastly CDN mirror of keyserver.ubuntu.com at keyserver.boost.org. If keyserver.ubuntu.com experiences occasional outages but keys are cached on the CDN mirror, then CI jobs will be able to proceed without difficulty. Configured both Drone and boost-ci to use the CDN at keyserver.boost.org. Jenkins Beast2 doc previews. Capy previews. JSON lcov jobs. Openmethod doc previews. Modified email notifications to send ‘recovery’ type messages after failed jobs. Other enhancements to Jenkins jobs. 
Boost release process boostorg/release-tools When building releases with publish-release.py, generate “nodocs” copies of the Boost releases and upload them to archives.boost.io. The “nodocs” versions are now functional. If anyone would like to accelerate their CI build process, set the target URL to nodocs such as: https://archives.boost.io/release/1.90.0/source-nodocs/boost_1_90_0.tar.gz . Migrated the release workstation instance from GCP to AWS so that during the next Boost release 1.91.0 the server will be closer to AWS S3, allowing file uploads to transfer faster. Designed a mechanism that resizes the server instance on a cron schedule via GHA. Most of the time it’s quite small, but during releases the server is allocated more CPU. Drone Microsoft Windows - VS2026 container image. Ubuntu 25.10 container image. GHA Added CI jobs to build “documentation” in the boostorg/container repository. GHA will test doc builds, which helps when debugging modifications to documentation. Fil-C is a “fanatically compatible memory-safe implementation of C and C++.” https://github.com/pizlonator/fil-c Upon request, I composed an example Fil-C GitHub Actions job at https://github.com/sdarwin/fil-c-demo which was then applied by developers in other Boost repositories.</summary></entry><entry><title type="html">Containers galore</title><link href="http://cppalliance.org/joaquin/2026/01/18/Joaquins2025Q4Update.html" rel="alternate" type="text/html" title="Containers galore" /><published>2026-01-18T00:00:00+00:00</published><updated>2026-01-18T00:00:00+00:00</updated><id>http://cppalliance.org/joaquin/2026/01/18/Joaquins2025Q4Update</id><content type="html" xml:base="http://cppalliance.org/joaquin/2026/01/18/Joaquins2025Q4Update.html">&lt;p&gt;During Q4 2025, I’ve been working in the following areas:&lt;/p&gt;

&lt;h3 id=&quot;boostbloom&quot;&gt;Boost.Bloom&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Written &lt;a href=&quot;https://bannalia.blogspot.com/2025/10/bulk-operations-in-boostbloom.html&quot;&gt;an article&lt;/a&gt; explaining
the usage and implementation of the recently introduced bulk operations.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;boostunordered&quot;&gt;Boost.Unordered&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Written maintenance fixes
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/320&quot;&gt;PR#320&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/321&quot;&gt;PR#321&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/326&quot;&gt;PR#326&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/327&quot;&gt;PR#327&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/328&quot;&gt;PR#328&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/335&quot;&gt;PR#335&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;boostmultiindex&quot;&gt;Boost.MultiIndex&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Refactored the library to use Boost.Mp11 instead of Boost.MPL (&lt;a href=&quot;https://github.com/boostorg/multi_index/pull/87&quot;&gt;PR#87&lt;/a&gt;),
remove pre-C++11 variadic argument emulation (&lt;a href=&quot;https://github.com/boostorg/multi_index/pull/88&quot;&gt;PR#88&lt;/a&gt;)
and remove all sorts of pre-C++11 polyfills (&lt;a href=&quot;https://github.com/boostorg/multi_index/pull/90&quot;&gt;PR#90&lt;/a&gt;).
These changes are explained in &lt;a href=&quot;https://bannalia.blogspot.com/2025/12/boostmultiindex-refactored.html&quot;&gt;an article&lt;/a&gt;
and will be shipped in Boost 1.91. The transition is expected to be mostly backwards
compatible, though two Boost libraries needed adjustments, as they use MultiIndex
in rather advanced ways (see below).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;boostflyweight&quot;&gt;Boost.Flyweight&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Adapted the library to work with Boost.MultiIndex 1.91
(&lt;a href=&quot;https://github.com/boostorg/flyweight/pull/25&quot;&gt;PR#25&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;boostbimap&quot;&gt;Boost.Bimap&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Adapted the library to work with Boost.MultiIndex 1.91
(&lt;a href=&quot;https://github.com/boostorg/bimap/pull/50&quot;&gt;PR#50&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;other-boost-libraries&quot;&gt;Other Boost libraries&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Helped set up the Antora-based doc build chain for DynamicBitset
(&lt;a href=&quot;https://github.com/boostorg/dynamic_bitset/pull/96&quot;&gt;PR#96&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/dynamic_bitset/pull/97&quot;&gt;PR#97&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/dynamic_bitset/pull/98&quot;&gt;PR#98&lt;/a&gt;).&lt;/li&gt;
  &lt;li&gt;Same with OpenMethod
(&lt;a href=&quot;https://github.com/boostorg/openmethod/pull/40&quot;&gt;PR#40&lt;/a&gt;).&lt;/li&gt;
  &lt;li&gt;Fixed concept compliance of iterators provided by Spirit
(&lt;a href=&quot;https://github.com/boostorg/spirit/pull/840&quot;&gt;PR#840&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/spirit/pull/841&quot;&gt;PR#841&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;experiments-with-fil-c&quot;&gt;Experiments with Fil-C&lt;/h3&gt;

&lt;p&gt;&lt;a href=&quot;https://fil-c.org/&quot;&gt;Fil-C&lt;/a&gt; is a C and C++ compiler built on top of LLVM that adds run-time
memory-safety mechanisms preventing out-of-bounds and use-after-free accesses.
I’ve been experimenting with compiling the Boost.Unordered test suite with Fil-C and running
some benchmarks to measure the resulting degradation in execution times and memory usage.
The results follow:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Articles
    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;https://bannalia.blogspot.com/2025/11/some-experiments-with-boostunordered-on.html&quot;&gt;Some experiments with Boost.Unordered on Fil-C&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;https://bannalia.blogspot.com/2025/11/comparing-run-time-performance-of-fil-c.html&quot;&gt;Comparing the run-time performance of Fil-C and ASAN&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Repos
    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;https://github.com/joaquintides/fil-c_boost_unordered&quot;&gt;Compiling Boost.Unordered test suite with Fil-C&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/boost_unordered_benchmarks/tree/boost_unordered_flat_map_fil-c&quot;&gt;Benchmarks of Fil-C and ASAN against baseline&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/boost_unordered_benchmarks/tree/boost_unordered_flat_map_fil-c_memory&quot;&gt;Memory consumption of Fil-C and ASAN with respect to baseline&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;proof-of-concept-of-a-semistable-vector&quot;&gt;Proof of concept of a semistable vector&lt;/h3&gt;

&lt;p&gt;By “semistable vector” I mean that pointers to the elements may be invalidated
upon insertion and erasure (just like a regular &lt;code&gt;std::vector&lt;/code&gt;) but iterators
to non-erased elements remain valid throughout.
I’ve written a small &lt;a href=&quot;https://github.com/joaquintides/semistable_vector/&quot;&gt;proof of concept&lt;/a&gt;
of this idea and measured its performance against non-stable &lt;code&gt;std::vector&lt;/code&gt; and fully
stable &lt;code&gt;std::list&lt;/code&gt;. It is doubtful that such a container would be of interest for production
use, but the techniques explored are mildly interesting and could be adapted, for
instance, to write powerful safe-iterator facilities.&lt;/p&gt;

&lt;h3 id=&quot;teaser-exploring-the-stdhive-space&quot;&gt;Teaser: exploring the &lt;code&gt;std::hive&lt;/code&gt; space&lt;/h3&gt;

&lt;p&gt;In short, &lt;code&gt;std::hive&lt;/code&gt; (coming in C++26) is a container with stable references/iterators
and fast insertion and erasure. The &lt;a href=&quot;https://github.com/mattreecebentley/plf_hive&quot;&gt;reference implementation&lt;/a&gt;
for this container relies on a rather convoluted data structure, and I started to wonder
if something simpler could deliver superior performance. Expect to see the results of
my experiments in Q1 2026.&lt;/p&gt;

&lt;h3 id=&quot;website&quot;&gt;Website&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Filed issues
&lt;a href=&quot;https://github.com/boostorg/website-v2/issues/1936&quot;&gt;#1936&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/website-v2/issues/1937&quot;&gt;#1937&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/website-v2/issues/1984&quot;&gt;#1984&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;support-to-the-community&quot;&gt;Support to the community&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;I’ve been part of a task force with the C++ Alliance to review the entire
catalog of Boost libraries (170+) and categorize them according to their
maintenance status and relevance in light of additions to the C++
standard library over the years.&lt;/li&gt;
  &lt;li&gt;Supporting the community as a member of the Fiscal Sponsorship Committee (FSC).&lt;/li&gt;
&lt;/ul&gt;</content><author><name></name></author><category term="joaquin" /><summary type="html">During Q4 2025, I’ve been working in the following areas: Boost.Bloom Written an article explaining the usage and implementation of the recently introduced bulk operations. Boost.Unordered Written maintenance fixes PR#320, PR#321, PR#326, PR#327, PR#328, PR#335. Boost.MultiIndex Refactored the library to use Boost.Mp11 instead of Boost.MPL (PR#87), remove pre-C++11 variadic argument emulation (PR#88) and remove all sorts of pre-C++11 polyfills (PR#90). These changes are explained in an article and will be shipped in Boost 1.91. Transition is expected to be mostly backwards compatible, though two Boost libraries needed adjustments as they use MultiIndex in rather advanced ways (see below). Boost.Flyweight Adapted the library to work with Boost.MultiIndex 1.91 (PR#25). Boost.Bimap Adapted the library to work with Boost.MultiIndex 1.91 (PR#50). Other Boost libraries Helped set up the Antora-based doc build chain for DynamicBitset (PR#96, PR#97, PR#98). Same with OpenMethod (PR#40). Fixed concept compliance of iterators provided by Spirit (PR#840, PR#841). Experiments with Fil-C Fil-C is a C and C++ compiler built on top of LLVM that adds run-time memory-safety mechanisms preventing out-of-bounds and use-after-free accesses. I’ve been experimenting with compiling Boost.Unordered test suite with Fil-C and running some benchmarks to measure the resulting degradation in execution times and memory usage. 
Results follow: Articles Some experiments with Boost.Unordered on Fil-C Comparing the run-time performance of Fil-C and ASAN Repos Compiling Boost.Unordered test suite with Fil-C Benchmarks of Fil-C and ASAN against baseline Memory consumption of Fil-C and ASAN with respect to baseline Proof of concept of a semistable vector By “semistable vector” I mean that pointers to the elements may be invalidated upon insertion and erasure (just like a regular std::vector) but iterators to non-erased elements remain valid throughout. I’ve written a small proof of concept of this idea and measured its performance against non-stable std::vector and fully stable std::list. It is dubious that such container could be of interest for production use, but the techniques explored are mildly interesting and could be adapted, for instance, to write powerful safe iterator facilities. Teaser: exploring the std::hive space In short, std::hive (coming in C++26) is a container with stable references/iterators and fast insertion and erasure. The reference implementation for this container relies on a rather convoluted data structure, and I started to wonder if something simpler could deliver superior performance. Expect to see the results of my experiments in Q1 2026. Website Filed issues #1936, #1937, #1984. Support to the community I’ve been part of a task force with the C++ Alliance to review the entire catalog of Boost libraries (170+) and categorize them according to their maintainance status and relevance in light of additions to the C++ standard library over the years. 
Supporting the community as a member of the Fiscal Sponsorship Committee (FSC).</summary></entry><entry><title type="html">Decimal is Accepted and Next Steps</title><link href="http://cppalliance.org/matt/2026/01/15/Matts2025Q4Update.html" rel="alternate" type="text/html" title="Decimal is Accepted and Next Steps" /><published>2026-01-15T00:00:00+00:00</published><updated>2026-01-15T00:00:00+00:00</updated><id>http://cppalliance.org/matt/2026/01/15/Matts2025Q4Update</id><content type="html" xml:base="http://cppalliance.org/matt/2026/01/15/Matts2025Q4Update.html">&lt;p&gt;After two reviews the Decimal (&lt;a href=&quot;https://github.com/cppalliance/decimal&quot;&gt;https://github.com/cppalliance/decimal&lt;/a&gt;) library has been accepted into Boost.
Look for it to ship for the first time with Boost 1.91 in the Spring.
For current and prospective users, a new release series (v6) is available on the releases page of the library.
This major version change contains all of the bug fixes and addresses comments from the second review.
We have once again overhauled the documentation based on the review to include a significant increase in the number of examples.
Between the &lt;code&gt;Basic Usage&lt;/code&gt; and &lt;code&gt;Examples&lt;/code&gt; tabs on the website we believe there’s now enough information to quickly make good use of the library.
One big quality-of-life improvement worth highlighting for this version is that it ships with pretty printers for both GDB and LLDB.
It is a huge release (1108 commits with a diff stat of &amp;gt;50k LOC), but it is better than ever.
I expect that this is the last major version that will be released prior to moving to the Boost release cycle.&lt;/p&gt;

&lt;p&gt;Where to go from here?&lt;/p&gt;

&lt;p&gt;As I have mentioned in previous posts, the int128 (&lt;a href=&quot;https://github.com/cppalliance/int128&quot;&gt;https://github.com/cppalliance/int128&lt;/a&gt;) library started life as the backend for portable arithmetic and representation in the Decimal library.
It has since been expanded to include more of the standard library features that are unnecessary for its role as a back-end but useful to many people, such as &lt;code&gt;&amp;lt;format&amp;gt;&lt;/code&gt; support.
The last major update that I intend to make to the library prior to proposal for Boost is to add CUDA support.
This would not only add portability to another platform for many users, but would also open the door for Decimal to have CUDA support.
I will also be looking at a few of our performance measures as I think there are still places for improvement (such as signed 128-bit division).&lt;/p&gt;

&lt;p&gt;Lastly, towards the end of this quarter (March 5 - March 15), I will be serving as the review manager for Alfredo Correa’s Multi (&lt;a href=&quot;https://github.com/correaa/boost-multi&quot;&gt;https://github.com/correaa/boost-multi&lt;/a&gt;) library.
Multi is a modern C++ library that provides manipulation and access of data in multidimensional arrays for both CPU and GPU memory.
Feel free to give the library a go now and comment on what you find. 
This is a very high quality library which should have an exciting review.&lt;/p&gt;</content><author><name></name></author><category term="matt" /><summary type="html">After two reviews the Decimal (https://github.com/cppalliance/decimal) library has been accepted into Boost. Look for it to ship for the first time with Boost 1.91 in the Spring. For current and prospective users, a new release series (v6) is available on the releases page of the library. This major version change contains all of the bug fixes and addresses comments from the second review. We have once again overhauled the documentation based on the review to include a significant increase in the number of examples. Between the Basic Usage and Examples tabs on the website we believe there’s now enough information to quickly make good use of the library. One big quality of life worth highlighting for this version is that it ships with pretty printers for both GDB and LLDB. It is a huge release (1108 commits with a diff stat of &amp;gt;50k LOC), but is be better than ever. I expect that this is the last major version that will be released prior to moving to the Boost release cycle. Where to go from here? As I have mentioned in previous posts, the int128 (https://github.com/cppalliance/int128) library started life as the backend for portable arithmetic and representation in the Decimal library. It has since been expanded to include more of the standard library features that are unnecessary as a back-end, but useful to many people like &amp;lt;format&amp;gt; support. The last major update that I intend to make to the library prior to proposal for Boost is to add CUDA support. This would not only add portability to another platform for many users, it would open the door for Decimal to also have CUDA support. I will also be looking at a few of our performance measures as I think there are still places for improvement (such as signed 128-bit division). 
Lastly, towards the end of this quarter (March 5 - March 15), I will be serving as the review manager for Alfredo Correa’s Multi (https://github.com/correaa/boost-multi) library. Multi is a modern C++ library that provides manipulation and access of data in multidimensional arrays for both CPU and GPU memory. Feel free to give the library a go now and comment on what you find. This is a very high quality library which should have an exciting review.</summary></entry><entry><title type="html">From Prototype to Product: MrDocs in 2025</title><link href="http://cppalliance.org/alan/2025/10/28/Alan.html" rel="alternate" type="text/html" title="From Prototype to Product: MrDocs in 2025" /><published>2025-10-28T00:00:00+00:00</published><updated>2025-10-28T00:00:00+00:00</updated><id>http://cppalliance.org/alan/2025/10/28/Alan</id><content type="html" xml:base="http://cppalliance.org/alan/2025/10/28/Alan.html">&lt;p&gt;In 2024, the &lt;a href=&quot;https://www.mrdocs.com&quot;&gt;MrDocs&lt;/a&gt; project was a &lt;strong&gt;fragile prototype&lt;/strong&gt;. It documented Boost.URL, but the &lt;strong&gt;CLI&lt;/strong&gt;, &lt;strong&gt;configuration&lt;/strong&gt;, and &lt;strong&gt;build process&lt;/strong&gt; were unstable. Most users could not run it without direct help from the core group. That unstable baseline is the starting point for this report.&lt;/p&gt;

&lt;p&gt;In 2025, we moved the codebase to &lt;strong&gt;minimum-viable-product&lt;/strong&gt; shape. I led the releases that stabilized the pipeline, aligned the &lt;strong&gt;configuration model&lt;/strong&gt;, and documented the work in this report to support a smooth &lt;strong&gt;leadership transition&lt;/strong&gt;. This post summarizes the &lt;strong&gt;2024 gaps&lt;/strong&gt;, the &lt;strong&gt;2025 fixes&lt;/strong&gt;, and the &lt;strong&gt;recommended directions&lt;/strong&gt; for the next phase.&lt;/p&gt;

&lt;!-- prettier-ignore --&gt;
&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#system-overview&quot; id=&quot;markdown-toc-system-overview&quot;&gt;System Overview&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#2024-lessons-from-a-fragile-prototype&quot; id=&quot;markdown-toc-2024-lessons-from-a-fragile-prototype&quot;&gt;2024: Lessons from a Fragile Prototype&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#2025-from-prototype-to-mvp&quot; id=&quot;markdown-toc-2025-from-prototype-to-mvp&quot;&gt;2025: From Prototype to MVP&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#v003-enforcing-consistency&quot; id=&quot;markdown-toc-v003-enforcing-consistency&quot;&gt;v0.0.3: Enforcing Consistency&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#v004-establishing-the-foundation&quot; id=&quot;markdown-toc-v004-establishing-the-foundation&quot;&gt;v0.0.4: Establishing the Foundation&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#v005-stabilization-and-public-readiness&quot; id=&quot;markdown-toc-v005-stabilization-and-public-readiness&quot;&gt;v0.0.5: Stabilization and Public Readiness&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#2026-beyond-the-mvp&quot; id=&quot;markdown-toc-2026-beyond-the-mvp&quot;&gt;2026: Beyond the MVP&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#strategic-prioritization&quot; id=&quot;markdown-toc-strategic-prioritization&quot;&gt;Strategic Prioritization&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#reflection&quot; id=&quot;markdown-toc-reflection&quot;&gt;Reflection&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#metadata&quot; id=&quot;markdown-toc-metadata&quot;&gt;Metadata&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#extensions-and-plugins&quot; id=&quot;markdown-toc-extensions-and-plugins&quot;&gt;Extensions and Plugins&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#dependency-resilience&quot; id=&quot;markdown-toc-dependency-resilience&quot;&gt;Dependency Resilience&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#follow-up-issues-for-v006&quot; id=&quot;markdown-toc-follow-up-issues-for-v006&quot;&gt;Follow-up Issues for v0.0.6&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#acknowledgments&quot; id=&quot;markdown-toc-acknowledgments&quot;&gt;Acknowledgments&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#conclusion&quot; id=&quot;markdown-toc-conclusion&quot;&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;system-overview&quot;&gt;System Overview&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://www.mrdocs.com&quot;&gt;MrDocs&lt;/a&gt; is a C++ documentation generator built on &lt;strong&gt;Clang&lt;/strong&gt;. It parses source with full language fidelity, links declarations to their comments, and produces reference documentation that reflects real program structure—&lt;strong&gt;templates&lt;/strong&gt;, &lt;strong&gt;constraints&lt;/strong&gt;, and &lt;strong&gt;overloads&lt;/strong&gt; included.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Traditional tools often approximate the AST. MrDocs uses the AST directly, so documentation matches the code and modern C++ features render correctly.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Unlike single-purpose generators, MrDocs separates the &lt;strong&gt;corpus&lt;/strong&gt; (semantic data) from the &lt;strong&gt;presentation layer&lt;/strong&gt;. Projects can choose among multiple &lt;strong&gt;output formats&lt;/strong&gt; or extend the system entirely: supply &lt;strong&gt;custom Handlebars templates&lt;/strong&gt; or script new generators using the &lt;strong&gt;plugin system&lt;/strong&gt;. The corpus is represented in the generators as a &lt;strong&gt;rich JSON-like DOM&lt;/strong&gt;. With schema files, MrDocs enables integration with &lt;strong&gt;build systems&lt;/strong&gt;, &lt;strong&gt;documentation frameworks&lt;/strong&gt;, or &lt;strong&gt;IDEs&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;From the user’s perspective, MrDocs behaves like a &lt;strong&gt;well-engineered CLI utility&lt;/strong&gt;. It accepts &lt;strong&gt;configuration files&lt;/strong&gt;, supports &lt;strong&gt;relative paths&lt;/strong&gt;, accepts custom &lt;strong&gt;build options&lt;/strong&gt;, and reports &lt;strong&gt;warnings&lt;/strong&gt; in a controlled, &lt;strong&gt;compiler-like&lt;/strong&gt; fashion. For C++ teams transitioning from &lt;strong&gt;Doxygen&lt;/strong&gt;, the &lt;strong&gt;command structure&lt;/strong&gt; is somewhat familiar, but the &lt;strong&gt;internal model&lt;/strong&gt; is designed for &lt;strong&gt;reproducibility&lt;/strong&gt; and &lt;strong&gt;correctness&lt;/strong&gt;. Our goal is not just to render &lt;strong&gt;reference pages&lt;/strong&gt; but to provide a &lt;strong&gt;reliable pipeline&lt;/strong&gt; that any C++ project seeking &lt;strong&gt;modern documentation infrastructure&lt;/strong&gt; can adopt.&lt;/p&gt;
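&lt;p&gt;The “controlled, compiler-like” warning behavior can be sketched as follows: each diagnostic carries a location and a named group, so whole groups can be silenced or promoted to errors. The group names and option shapes below are hypothetical, not MrDocs’s actual warning flags.&lt;/p&gt;

```python
# Sketch of compiler-style diagnostics: location-prefixed messages,
# suppressible by named group, optionally promoted to errors.
# All group names here are hypothetical, not real MrDocs options.
def report(diagnostics, suppressed=frozenset(), as_errors=False):
    severity = "error" if as_errors else "warning"
    lines = []
    for file, line, group, message in diagnostics:
        if group in suppressed:
            continue
        lines.append(f"{file}:{line}: {severity}: {message} [-W{group}]")
    return lines

diags = [
    ("url.hpp", 120, "undocumented", "public symbol has no doc comment"),
    ("url.hpp", 245, "broken-ref", "reference to unknown symbol 'hosts'"),
]
for line in report(diags, suppressed={"undocumented"}):
    print(line)
```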

&lt;script src=&quot;https://cdn.jsdelivr.net/npm/mermaid@11.12.0/dist/mermaid.min.js&quot;&gt;&lt;/script&gt;
&lt;div class=&quot;mermaid&quot;&gt;
graph LR
  A[Source] --&amp;gt; B[Clang]
  B --&amp;gt; C[Corpus]
  C --&amp;gt; D{Plugin Layer}
  subgraph Generator
    E[HTML]
    F[AsciiDoc]
    G[XML]
    G2[...]
  end
  D --&amp;gt; E
  D --&amp;gt; F
  D --&amp;gt; G
  D --&amp;gt; G2
  E --&amp;gt; H{Plugin Layer}
  H --&amp;gt; H2[Published Docs]
  F --&amp;gt; H
  G --&amp;gt; H
  G2 --&amp;gt; H
  C --&amp;gt; I[Schema Export]
  I --&amp;gt; J[Integrations&lt;br /&gt;IDEs &amp;amp; Build Systems]
&lt;/div&gt;

&lt;h2 id=&quot;2024-lessons-from-a-fragile-prototype&quot;&gt;2024: Lessons from a Fragile Prototype&lt;/h2&gt;

&lt;p&gt;MrDocs entered 2024 as a proof of concept built for Boost.URL. It could document one or two curated codebases and produce AsciiDoc pages for Antora, but the workflow stopped there. The CLI exposed only the scenarios we needed. Configuration options lived in internal notes. The only dependable build path was the script sequence we used inside the Alliance. External users hit errors and missing options almost immediately.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stability was just as fragile:&lt;/strong&gt; We had no &lt;strong&gt;sanitizers&lt;/strong&gt;, no &lt;strong&gt;warnings-as-errors&lt;/strong&gt;, and inconsistent &lt;strong&gt;CI hardware&lt;/strong&gt;. The binaries crashed as soon as they saw unfamiliar code. The pipeline worked only when the input looked like Boost.URL. Point it at slightly different code patterns and it would segfault. Each feature landed as a custom patch, so logic duplicated across generators, and fixing one path broke another.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Early releases:&lt;/strong&gt; Release &lt;code&gt;v0.0.1&lt;/code&gt; captured that prototype: the early Handlebars engine, the HTML generator, the DOM refactor, and a list of APIs that only the core team could drive. &lt;code&gt;v0.0.2&lt;/code&gt; added structured configuration, automatic &lt;code&gt;compile_commands.json&lt;/code&gt; support, and better SFINAE handling, but the tool was still insider-only.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Leadership transition:&lt;/strong&gt; Late in 2024 I became project lead with two initial priorities: &lt;strong&gt;document the gaps&lt;/strong&gt; and describe the &lt;strong&gt;true limits&lt;/strong&gt; of the system. That set the 2025 baseline—a functional prototype that needed &lt;strong&gt;coherence&lt;/strong&gt;, &lt;strong&gt;reproducibility&lt;/strong&gt;, and &lt;strong&gt;trust&lt;/strong&gt; before it could call itself a product.&lt;/p&gt;

&lt;p&gt;These weaknesses set the agenda for 2025: configuration coherence, generator unification, schema validation, and even basic options were all missing. The CLI, configuration files, and code drifted apart. Generators evolved independently, with duplicated code and inconsistent naming. Editors had no schema to lean on. Extraction rules were ad hoc, which made the output incomplete. CI ran on an improvised matrix with no caching, sanitizers, or coverage, so regressions slipped through. That was the starting point.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Summary: 2024 produced a working demo, not a reproducible system. Each success exposed another weak link and clarified what had to change in 2025.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In short:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;2024 left us with a working prototype but no coherent architecture.&lt;/li&gt;
  &lt;li&gt;The system could demonstrate the concept, but not sustain or reproduce it.&lt;/li&gt;
  &lt;li&gt;Every improvement exposed another weak link, and every success demanded more structure than the system was built to handle.&lt;/li&gt;
  &lt;li&gt;It was a year of learning by exhaustion—and setting the stage for everything that came next.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Key 2024 checkpoints align with the timeline below:&lt;/p&gt;

&lt;div class=&quot;mermaid&quot;&gt;
%%{init: {&quot;theme&quot;: &quot;base&quot;, &quot;themeVariables&quot;: {&quot;primaryColor&quot;: &quot;#f7f9ff&quot;, &quot;primaryBorderColor&quot;: &quot;#9aa7e8&quot;, &quot;primaryTextColor&quot;: &quot;#1f2a44&quot;, &quot;lineColor&quot;: &quot;#b4bef2&quot;, &quot;secondaryColor&quot;: &quot;#fbf8ff&quot;, &quot;tertiaryColor&quot;: &quot;#ffffff&quot;, &quot;fontSize&quot;: &quot;14px&quot;}}}%%
timeline
  title Prototypes
  2024 Q1 : Boost.URL showcase
  2024 Q2 : CLI gaps
  2024 Q3 : Config + SFINAE fixes
  2024 Q4 : Leadership transition
&lt;/div&gt;

&lt;h1 id=&quot;2025-from-prototype-to-mvp&quot;&gt;2025: From Prototype to MVP&lt;/h1&gt;

&lt;p&gt;I started the year with a gap analysis that compared MrDocs to other C++ documentation pipelines. From that review I defined the minimum viable product and three priority tracks. &lt;strong&gt;Usability&lt;/strong&gt; covered workflows and surface area that make adoption simple. &lt;strong&gt;Stability&lt;/strong&gt; covered deterministic behavior, proper data structures, and CI discipline. &lt;strong&gt;Foundation&lt;/strong&gt; covered configuration and data models that keep code, flags, and documentation aligned. The 2025 releases followed those tracks and turned MrDocs from a proof of concept into a tool that other teams can adopt.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;v0.0.3 — Consistency.&lt;/strong&gt; We replaced ad-hoc behavior with a coherent system: a single source of truth for configuration kept CLI, config files, and docs in sync; generators and templates were unified so changes propagate by design; core semantic extraction (e.g., concepts, constraints, SFINAE) became reliable; and CI hardened around reproducible, tested outputs across HTML and Antora.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;v0.0.4 — Foundation.&lt;/strong&gt; We introduced precise warning controls and a family of &lt;code&gt;extract-*&lt;/code&gt; options to match established tooling, added a JSON Schema for configuration (enabling editor validation/autocomplete), delivered a robust reference system for documentation comments, brought initial inline formatting to generators, and simplified onboarding with a cross-platform bootstrap script. CI gained sanitizers, coverage checks, and modern compilers.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;v0.0.5 — Stabilization.&lt;/strong&gt; We redesigned documentation metadata to support recursive inline elements, enforced safer polymorphic types with optional references and non-nullable patterns, and added user-facing improvements (sorting, automatic compilation database detection, quick reference indices, improved namespace/overload grouping, LLDB formatters). The website and documentation UI were refreshed for accessibility and responsiveness, new demos (including self-documentation) were published, and CI was further tightened with stricter policies and cross-platform bootstrap enhancements.&lt;/li&gt;
&lt;/ul&gt;
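&lt;p&gt;The payoff of the &lt;code&gt;v0.0.4&lt;/code&gt; configuration schema is that typos and type errors are caught before a run, by the editor or by validation. The tiny checker and the option names below are illustrative only; MrDocs publishes a real JSON Schema for this purpose.&lt;/p&gt;

```python
# What a configuration schema buys: unknown keys and wrong types are
# reported up front. This hand-rolled checker and its option names
# ("verbose", "source-root", "warn-as-error") are illustrative only.
schema = {
    "verbose": bool,
    "source-root": str,
    "warn-as-error": bool,
}

def check(config):
    problems = []
    for key, value in config.items():
        if key not in schema:
            problems.append(f"unknown option: {key}")
        elif not isinstance(value, schema[key]):
            problems.append(f"{key}: expected {schema[key].__name__}")
    return problems

# A typo ("warn-as-errors") is flagged instead of being silently ignored.
print(check({"source-root": ".", "warn-as-errors": True}))
```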

&lt;p&gt;Together, these releases executed the roadmap derived from the initial gap analysis: they &lt;strong&gt;aligned&lt;/strong&gt; the moving parts, &lt;strong&gt;closed&lt;/strong&gt; the most important capability gaps, and delivered a &lt;strong&gt;stable foundation&lt;/strong&gt; that future work can extend without re-litigating fundamentals.&lt;/p&gt;

&lt;div class=&quot;mermaid&quot;&gt;
%%{init: {&quot;theme&quot;: &quot;base&quot;, &quot;themeVariables&quot;: {
  &quot;primaryColor&quot;: &quot;#e4eee8&quot;,
  &quot;primaryBorderColor&quot;: &quot;#affbd6&quot;,
  &quot;primaryTextColor&quot;: &quot;#000000&quot;,
  &quot;lineColor&quot;: &quot;#baf9d9&quot;,
  &quot;secondaryColor&quot;: &quot;#f0eae4&quot;,
  &quot;tertiaryColor&quot;: &quot;#ebeaf4&quot;,
  &quot;fontSize&quot;: &quot;14px&quot;
}}}%%
mindmap
  root((MVP Evolution))
    v0.0.3
      Config sync
      Shared templates
      CI discipline
    v0.0.4
      Warning controls
      Schema
      Bootstrap
    v0.0.5
      Recursive docs
      Nav refresh
      Tooling polish
&lt;/div&gt;

&lt;h2 id=&quot;v003-enforcing-consistency&quot;&gt;v0.0.3: Enforcing Consistency&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;v0.0.3&lt;/code&gt; is where MrDocs stopped being a collection of one-off special cases and became a coherent system. Before this release, features landed in a single generator and drifted from the others; extraction handled only the narrowly requested pattern and crashed on nearby ones; and options were inconsistent—some hard-coded, some missing from CLI/config, with no mechanism to keep code, docs, and flags aligned.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What changed:&lt;/strong&gt; The &lt;code&gt;v0.0.3&lt;/code&gt; release rebuilt that foundation. We introduced a single source of truth for &lt;strong&gt;configuration options&lt;/strong&gt; with TableGen-style metadata, so the docs, the config file, and the CLI always stay in sync. We added essential Doxygen-like options to make basic projects immediately usable and filled obvious gaps in symbols and doc comments.&lt;/p&gt;
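&lt;p&gt;The idea behind the single source of truth can be sketched in a few lines: one metadata table drives both the CLI help text and the documentation, so the two surfaces cannot drift apart. The table entries below are hypothetical, not MrDocs’s real option records.&lt;/p&gt;

```python
# Sketch of a "single source of truth" for options: every surface
# (CLI help, docs tables) is generated from the same metadata records.
# These entries are hypothetical, for illustration only.
OPTIONS = [
    {"name": "multipage", "type": "bool", "default": "false",
     "brief": "Generate one page per symbol."},
    {"name": "base-url", "type": "string", "default": "",
     "brief": "Base URL prepended to links."},
]

def cli_help():
    return [f"--{o['name']}  {o['brief']} (default: {o['default']!r})"
            for o in OPTIONS]

def doc_rows():
    return [(o["name"], o["type"], o["brief"]) for o in OPTIONS]

# Both surfaces come from the same records, so they agree by construction.
assert [r[0] for r in doc_rows()] == [h.split()[0][2:] for h in cli_help()]
```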

&lt;p&gt;We implemented metadata extraction for &lt;strong&gt;core symbol types&lt;/strong&gt; and their information—such as template constraints, &lt;strong&gt;concepts&lt;/strong&gt;, and &lt;strong&gt;automatic SFINAE&lt;/strong&gt; detection. We &lt;strong&gt;unified generators&lt;/strong&gt; and templates so changes propagate by design, added &lt;strong&gt;tagfile support&lt;/strong&gt; and “lightweight reflection” to documentation comments as &lt;strong&gt;lazy DOM objects&lt;/strong&gt; and arrays, and &lt;strong&gt;extended Handlebars&lt;/strong&gt; to power the new generators. These features allowed us to create the initial version of the &lt;strong&gt;website&lt;/strong&gt; and ensure the documentation is always in sync.&lt;/p&gt;
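&lt;p&gt;The “lazy DOM objects” mentioned above follow a standard pattern: a field is computed only when a template actually reads it, and the result is cached, which keeps large corpora cheap to expose. The sketch below shows the pattern in miniature; the class and its interface are illustrative (MrDocs implements this in C++).&lt;/p&gt;

```python
# Sketch of a lazy DOM object: each field has a loader that runs at most
# once, on first access. Illustrative design, not the MrDocs interface.
class LazyObject:
    def __init__(self, loaders):
        self._loaders = loaders      # field name to zero-argument loader
        self._cache = {}

    def get(self, name):
        if name not in self._cache:
            self._cache[name] = self._loaders[name]()
        return self._cache[name]

calls = []
def load_signature():
    calls.append("signature")        # record that real work happened
    return "void host()"

symbol = LazyObject({"signature": load_signature})
assert calls == []                   # nothing computed yet
assert symbol.get("signature") == "void host()"
assert symbol.get("signature") == "void host()"
assert calls == ["signature"]        # loader ran exactly once
```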

&lt;p&gt;&lt;strong&gt;Build and testing discipline:&lt;/strong&gt; CI, builds, and tests were hardened. All generators were now tested, the &lt;strong&gt;LLVM caching&lt;/strong&gt; systems improved, and we launched our first &lt;strong&gt;macOS release&lt;/strong&gt; (important for teams working on Antora UI bundles). This long tail of performance, correctness, and safety work turned “works on my machine” into repeatable, adoptable output across HTML and Antora.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;v0.0.3&lt;/code&gt; was the inflection point. For the first time, developers could depend on consistent configuration, &lt;strong&gt;shared templates&lt;/strong&gt;, and predictable behavior across generators. It aligned internal tools, eliminated duplicated effort, and replaced trial-and-error debugging with &lt;strong&gt;reproducible builds&lt;/strong&gt;. Every improvement in later versions built on this foundation.&lt;/p&gt;

&lt;details&gt;
  &lt;summary&gt;Categorized improvements for v0.0.3&lt;/summary&gt;

  &lt;ul&gt;
    &lt;li&gt;&lt;strong&gt;Configuration Options&lt;/strong&gt;: enforcing consistency, reproducible builds, and transparent reporting
      &lt;ul&gt;
        &lt;li&gt;Enforce configuration options are in sync with the JSON source of truth (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/a1fb8ec6f23ef0802626329d7ab1e5c4635c52a7&quot; title=&quot;refactor(generate-config-info): normalization via visitor&quot;&gt;a1fb8ec6&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/9daf71fe0539a3a6b926560a15e65fdbd6343356&quot; title=&quot;refactor: info nodes configuration file&quot;&gt;9daf71fe&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;File and symbol filters (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/1b67a847db83f329af6cb9f059da7fa071939593&quot; title=&quot;feat: file and symbol filters&quot;&gt;1b67a847&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/b352ba223db0ad0b3d5f7283072b5dffb95eab1e&quot; title=&quot;feat: symbol filters listed on docs&quot;&gt;b352ba22&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Reference and symbol configuration (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/a3e4477f699e1c5c4d489239ad559f9d51823272&quot; title=&quot;feat: reference, symbol options&quot;&gt;a3e4477f&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/30eaabc9a28aa3282bbe9e5b0c8b0e4a2c2c817f&quot; title=&quot;docs: reference, symbol options&quot;&gt;30eaabc9&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Extraction options (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/41411db2848e1fab628dc62ee2e1831628b5d4c7&quot; title=&quot;feat: extraction options&quot;&gt;41411db2&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/1214d94bcf3597bd69caacd5b2648f677d4d197d&quot; title=&quot;docs: extraction options&quot;&gt;1214d94b&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Reporting options (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/f994e47e318d852cc17cd026f7d7cdbcf3df0c5f&quot; title=&quot;feat: reporting options&quot;&gt;f994e47e&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/0dd9cb45cf0168dec028aeb276bd03a419ba3a12&quot; title=&quot;docs: reporting options&quot;&gt;0dd9cb45&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Configuration structure (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/c8662b35fc85dc142f0694f299bb000a0f8899be&quot; title=&quot;feat: use structured information for configuration&quot;&gt;c8662b35&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/dcf5beef5a4b8ea75b24364b9c8a8f2f56d5e6c8&quot; title=&quot;feat: generate config documentation&quot;&gt;dcf5beef&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/4bd3ea42420f20b6a45c545e7b61396567c3201f&quot; title=&quot;docs: configuration schema&quot;&gt;4bd3ea42&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;CLI workflows (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/a2dc4c7883917025f0b63b227be7476f3986fd1d&quot; title=&quot;feat: CLI orchestrator improvements&quot;&gt;a2dc4c78&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/3c0f90df53794a02d3c53d25aa4fa5c8a69fbaad&quot; title=&quot;docs: CLI quick reference&quot;&gt;3c0f90df&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Warnings (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/4eab1933ff58330fb2c6753a648a26fba3038118&quot; title=&quot;docs: warnings&quot;&gt;4eab1933&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/5e586f2b03dd7b1eb5a45e51c904d8cbf4f63661&quot; title=&quot;feat: warnings&quot;&gt;5e586f2b&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/0e2dd713ebde919bf0ebc231d9a5795eb99b0d25&quot; title=&quot;feat: warning when configuration references missing include directories&quot;&gt;0e2dd713&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;SettingsDB (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/225b2d50835485b746c766df8993e1bb66938d17&quot; title=&quot;feat: settings DB&quot;&gt;225b2d50&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/51639e77b629f00c02aa11afe41a01e12804ef63&quot; title=&quot;feat: settings db generator&quot;&gt;51639e77&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Deterministic configuration (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/b544974105efc225af0af7f9952ef96338fe4c44&quot; title=&quot;feat: deterministic configuration order&quot;&gt;b5449741&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Global configuration documentation (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/ec3dbf5c3d72b6a3cee6bea66f3002c59b398b80&quot; title=&quot;docs: global configuration reference&quot;&gt;ec3dbf5c&lt;/a&gt;)&lt;/li&gt;
      &lt;/ul&gt;
    &lt;/li&gt;
    &lt;li&gt;&lt;strong&gt;Generators&lt;/strong&gt;: unification, new features, and early refactoring
      &lt;ul&gt;
        &lt;li&gt;Antora/HTML generator consistency (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/e674182fd5b72a91f7acd74d2f93df13d1d604b3&quot; title=&quot;refactor: antora/HTML generator consistency&quot;&gt;e674182f&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/82e86a6cb1ced9c8aca8024f6314d1b4089f7cbd&quot; title=&quot;feat: unify Antora and HTML generation&quot;&gt;82e86a6c&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/9154b9c5957e4fa8aa4ad918b6d9e9cb61a2a08d&quot; title=&quot;feat: Antora generator templates&quot;&gt;9154b9c5&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;HTML generator improvements (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/a28cb2f7e2df935295b041b30c89ea2f0f7316a3&quot; title=&quot;feat: HTML generator improvements&quot;&gt;a28cb2f7&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/064ce55a568bf8adca76a56c16b918836147cee0&quot; title=&quot;feat(Handlebars): html generators&quot;&gt;064ce55a&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/5f6665d8f8c0c54f1a77a4a6d9447bb7a8c9e968&quot; title=&quot;feat: html nav helper&quot;&gt;5f6665d8&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Documentation for generators (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/2382e8cf095d8241d745e381042ec9cdb15f347d&quot; title=&quot;docs(generators): HTML and Antora&quot;&gt;2382e8cf&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/646a1e5bae94b295ffdbbe07d0a7de618f2ab422&quot; title=&quot;docs: Antora generator docs&quot;&gt;646a1e5b&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Supporting new output formats (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/58a79f748dcefc4a6d561755a60f012f921985fe&quot; title=&quot;feat: generator registry&quot;&gt;58a79f74&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/271dde577da0c48f19c6d7dce39ed7e827642850&quot; title=&quot;feat: xml generator&quot;&gt;271dde57&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/9d9f6652c8f247512c605bec097c1fd1f79afb57&quot; title=&quot;feat: xml generator docs&quot;&gt;9d9f6652&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Handlebars improvements (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/ebf4dbebc4b550321d0119b3372d856e56f5e41f&quot; title=&quot;feat: Handlebars improvements&quot;&gt;ebf4dbeb&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/be76fc073a95fdd2b4f69d0d68d03355e5caa0d1&quot; title=&quot;feat: handlebars helpers documentation&quot;&gt;be76fc07&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Generator tooling (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/00fc84cff9390743ecc1ff87f4d49d68e19698d7&quot; title=&quot;feat: generator tests&quot;&gt;00fc84cf&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/6a69747d86ea7117de64a559211a96d792f8f83a&quot; title=&quot;feat: generator harness&quot;&gt;6a69747d&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Navigation helpers (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/fdccad42c85358aed91c318ed3daa9d1113facde&quot; title=&quot;feat: navigation helpers&quot;&gt;fdccad42&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;DOM optimizations (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/9b41d2e44fcb17d383c8d926c9988ccc381315d7&quot; title=&quot;feat: DOM optimizations&quot;&gt;9b41d2e4&lt;/a&gt;)&lt;/li&gt;
      &lt;/ul&gt;
    &lt;/li&gt;
    &lt;li&gt;&lt;strong&gt;Libraries and metadata&lt;/strong&gt;: unification, fixes, and extraction enhancements
      &lt;ul&gt;
        &lt;li&gt;Info node visitor and traversal improvements (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/be86a08d4df00800004337b52844af1f8d76f9fb&quot; title=&quot;feat: info node visitor improvements&quot;&gt;be86a08d&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/58ab5a5ea28200bf26be8314ebb677cb5b87f106&quot; title=&quot;feat: traversal improvements&quot;&gt;58ab5a5e&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Metadata consistency (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/544ee37d11fa30537642abff3cf39e4beab8a7e2&quot; title=&quot;feat: metadata consistency&quot;&gt;544ee37d&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/62f8a2bd3f52eef902bb47e8106d3b8cf886fbac&quot; title=&quot;feat: metadata refactor&quot;&gt;62f8a2bd&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/bd9c704f87f40812d2b176143e7f24cc786ca7f0&quot; title=&quot;feat: metadata extraction&quot;&gt;bd9c704f&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Template and concept support (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/4b0b4a7198a270e21d73f6c024d0d3c6cf6f8bbf&quot; title=&quot;feat: concept extraction&quot;&gt;4b0b4a71&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/57cf74de0a87fd29496b8aa00f9b355a51443ed6&quot; title=&quot;feat: SFINAE detection improvements&quot;&gt;57cf74de&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/92aa76a4529919831e3e2b8802e9b47b68d5d447&quot; title=&quot;feat: template constraints extraction&quot;&gt;92aa76a4&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Symbol resolution and references (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/f64d4a06c17782fb8f75309cba3138ff9aa12f7d&quot; title=&quot;feat: symbol resolution improvements&quot;&gt;f64d4a06&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/aa9333d4c2eab4cc02c33ad4c7a0f8fb2c7cee25&quot; title=&quot;feat: reference handling improvements&quot;&gt;aa9333d4&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Documentation improvements (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/5d3f21c8c8d8235f57deeef78d9e4eab4607c6f9&quot; title=&quot;docs: metadata documentation&quot;&gt;5d3f21c8&lt;/a&gt;)&lt;/li&gt;
      &lt;/ul&gt;
    &lt;/li&gt;
    &lt;li&gt;&lt;strong&gt;Website and Documentation&lt;/strong&gt;: turning features into a showcase and simplifying workflows
      &lt;ul&gt;
        &lt;li&gt;Create website (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/05400c3c42c85c31a892d763cddcb2b562205c10&quot; title=&quot;docs: website landing page&quot;&gt;05400c3c&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/8fba2020cb971722fcb4c7942d11cd8f1cfcd866&quot; title=&quot;docs: landing page download link&quot;&gt;8fba2020&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Use the new features to create an HTML panel demos workflow (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/12ceadee834e3dbb133f6e5ed24f6d2aafacbdc3&quot; title=&quot;docs: website panels use embedded HTML&quot;&gt;12ceadee&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/d38d3e1a59983b09405b9accb47abb0f7d40a9d7&quot; title=&quot;docs(demos): enable HTML demos&quot;&gt;d38d3e1a&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/c46c4a9179abb7701a2f1c6f9446f29caae64350&quot; title=&quot;ci: enable html demos&quot;&gt;c46c4a91&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Unify Antora author mode playbook (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/999ea4f3468ba3ad920b0cb91b56b5227c48d5a2&quot; title=&quot;docs: unify author mode playbook&quot;&gt;999ea4f3&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Generator use cases and trade-offs (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/2307ca6aba463bc67417e929563d63fb037fe3b4&quot; title=&quot;docs(generators): use cases and trade-offs&quot;&gt;2307ca6a&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Correctness and simplification (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/4d884f43470596c69e500fe3ba55a2f504412056&quot; title=&quot;docs: simplify demos table&quot;&gt;4d884f43&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/55214d7242eaf4a5a8c5746d6b8779e82dbaeaf7&quot; title=&quot;docs: releases extension allows CI authentication and retries&quot;&gt;55214d72&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/b078beadd046bed6806604b96196f91a234e1140&quot; title=&quot;docs(Scope): include lookups in documentation&quot;&gt;b078bead&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/d8b7fcf4245e98d5935ebbb05c02d0aba62e3faa&quot; title=&quot;docs(usage): cmake example uses TMP_CPP_FILE&quot;&gt;d8b7fcf4&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/96484836ef67bba4b54df5f780c5caac3a255f68&quot; title=&quot;docs: libc++ compiler requirements&quot;&gt;96484836&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/62f361fb50ebb5c09e901b8a16b4cfa992bffcb1&quot; title=&quot;ci: remove info node support warnings&quot;&gt;62f361fb&lt;/a&gt;)&lt;/li&gt;
      &lt;/ul&gt;
    &lt;/li&gt;
    &lt;li&gt;&lt;strong&gt;Build, Testing, and Releases&lt;/strong&gt;: strengthening CI, improving LLVM caching workflow, and stabilizing releases
      &lt;ul&gt;
        &lt;li&gt;Templates are tested with golden tests (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/2bc09e65c916a0701ed3bf09ef11a7fb15d0abf1&quot; title=&quot;test: asciidoc golden tests&quot;&gt;2bc09e65&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/9eece731f3865f2ed50faf3ee36c8c308b1ff90&quot; title=&quot;test: html golden tests&quot;&gt;9eece731&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;LLVM caches and runners improvements (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/4c14e875b06ad995ed3206cd2979dea13f004bd6&quot; title=&quot;ci: no fallback for GHA LLVM cache&quot;&gt;4c14e875&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/bd54dc7c2562ec751e42fb161116468a4838cb6d&quot; title=&quot;ci: unify llvm parameters&quot;&gt;bd54dc7c&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/3d92071a351fb1ee59011d55ca90147762c62bb8&quot; title=&quot;ci: intermediary steps use actions&quot;&gt;3d92071a&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/8537d3dbc71878a3ea6e176b0d25af8d0d51e799&quot; title=&quot;ci: resolve llvm-root for cache@v4&quot;&gt;8537d3db&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/f3b33a473eb9b4d3abf6591e8aa49401efae7ba9&quot; title=&quot;ci(llvm-matrix): filter uses Node.js 20&quot;&gt;f3b33a47&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/5982cc7e8bcaeb67fe5287c507636302416a7613&quot; title=&quot;ci(llvm-releases): handle empty llvm releases matrix&quot;&gt;5982cc7e&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/93487669932e940115f9c6d827e301d41d2e9616&quot; title=&quot;ci(releases): test all releases&quot;&gt;93487669&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Enable macOS workflow (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/390159e34a91074627c333b6f0d09a25bf9d5452&quot; title=&quot;ci: enable macos&quot;&gt;390159e3&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Stabilize artifacts (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/5e0f628e5b1cdb06a3dd260e0e42069f12733353&quot; title=&quot;ci(releases): antora includes stacktraces&quot;&gt;5e0f628e&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/d1c3566ed55f0dfc2225d3f67224291367aa00f3&quot; title=&quot;ci: fix package asset uploads&quot;&gt;d1c3566e&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/62736e456f1e9e089822930e10a322ebadc89730&quot; title=&quot;ci: demos artifact path is relative&quot;&gt;62736e45&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Tests support individual file inputs, which improved local tests considerably (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/75b1bc52d35648890f21c397fcfbcfb570d43d97&quot; title=&quot;Support file inputs&quot;&gt;75b1bc52&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Performance, correctness, and safety (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/a820ad790d4fb943516d9f676bf8d96e9d7fd374&quot; title=&quot;ci(llvm-releases): ssh uses relative user paths&quot;&gt;a820ad79&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/43e5f2520462b9ab2fd5c9d6558d3c299c1a4b1a&quot; title=&quot;ci: prevent redundant builds&quot;&gt;43e5f252&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/a382820f3adba34ca9b6d6c48924ee72fb6291b0&quot; title=&quot;ci: release packaging improvements&quot;&gt;a382820f&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/fbcb5b2d445df1fa746aac1d4735d10d5451d70f&quot; title=&quot;ci: move sanitizer workflows&quot;&gt;fbcb5b2d&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/6a2290cbde99556ac06f94c0c1e1cd2ea9f29a44&quot; title=&quot;ci: enforce formatting on generators&quot;&gt;6a2290cb&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/49f4125ff42e0e1d80a55df6d49c6940700ebab7&quot; title=&quot;ci: disable failing llvm tests temporarily&quot;&gt;49f4125f&lt;/a&gt;)&lt;/li&gt;
      &lt;/ul&gt;
    &lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;h2 id=&quot;v004-establishing-the-foundation&quot;&gt;v0.0.4: Establishing the Foundation&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;v0.0.4&lt;/code&gt; completed the core capabilities we need for production. With the moving parts aligned in &lt;code&gt;v0.0.3&lt;/code&gt;, this release focused on the fundamentals. It added consistent &lt;strong&gt;warning options&lt;/strong&gt;, &lt;strong&gt;extraction controls&lt;/strong&gt; that match established tools, &lt;strong&gt;schema support&lt;/strong&gt; for IDE auto-completion, a complete &lt;strong&gt;reference system&lt;/strong&gt; for doc comments, and initial &lt;strong&gt;inline formatting&lt;/strong&gt; in the generators. The &lt;strong&gt;bootstrap script&lt;/strong&gt; became a one-step path to a working build. We also hardened the pipeline with modern &lt;strong&gt;CI&lt;/strong&gt; practices—sanitizers, coverage integration, and standardized presets.&lt;/p&gt;

&lt;details&gt;
  &lt;summary&gt;Categorized improvements for v0.0.4&lt;/summary&gt;

  &lt;ul&gt;
    &lt;li&gt;&lt;strong&gt;Configuration and Extraction&lt;/strong&gt;: structured configuration, extraction controls, and schema validation
      &lt;ul&gt;
        &lt;li&gt;Configuration schema (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/d9517e1d37c61b45a8df89d647abb12ca0582788&quot; title=&quot;feat: generate JSON schema for config&quot;&gt;d9517e1d&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/5f846c1c1d4be0aa18862b08d4f39b8a1c398058&quot; title=&quot;feat: config schema docs&quot;&gt;5f846c1c&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/ffa0d1a661cbf7ef6b49666f598d33490af65f05&quot; title=&quot;feat: schema validation&quot;&gt;ffa0d1a6&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Extraction filters (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/0a60bb989b1e60292b4e6fc8b5517fcd9e237ebd&quot; title=&quot;feat: extraction filter improvements&quot;&gt;0a60bb98&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/a7d7714db8268c6e1df4032ff889473f6d429847&quot; title=&quot;feat: extraction filters doc updates&quot;&gt;a7d7714d&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Reference configuration (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/d18a8ab3b0eeabac8d0a2ed880c1c1f196fedfbd&quot; title=&quot;feat: reference configuration updates&quot;&gt;d18a8ab3&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Documentation metadata (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/6676c1e8ed7d1f6d3828bcaf8b28577c88eb02e5&quot; title=&quot;feat: documentation metadata improvements&quot;&gt;6676c1e8&lt;/a&gt;)&lt;/li&gt;
      &lt;/ul&gt;
    &lt;/li&gt;
    &lt;li&gt;&lt;strong&gt;Warnings and Reporting&lt;/strong&gt;: consistent governance with CLI parity
      &lt;ul&gt;
        &lt;li&gt;Warning controls (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/2a29f0a04824c5b3d70755029766b1d19b8c5bcd&quot; title=&quot;feat: warning controls&quot;&gt;2a29f0a0&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/6d3c1f47d662d0ed9264f10dd3d9cc3229a48bc3&quot; title=&quot;docs: warning controls&quot;&gt;6d3c1f47&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Extract options (&lt;code&gt;extract-{public,protected,private,inline}&lt;/code&gt;) (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/aa5a6be3d1f9a87d2fd1941f0904ffa52c57d205&quot; title=&quot;feat: extract options align with Doxygen defaults&quot;&gt;aa5a6be3&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;CLI defaults (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/d85439c399c88113a69e01358fd9a63a64c6af38&quot; title=&quot;feat: CLI defaults and reporting updates&quot;&gt;d85439c3&lt;/a&gt;)&lt;/li&gt;
      &lt;/ul&gt;
    &lt;/li&gt;
    &lt;li&gt;&lt;strong&gt;Generators&lt;/strong&gt;: Javadoc, inline formatting, and reference improvements
      &lt;ul&gt;
        &lt;li&gt;Documentation reference system (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/4b430f9b1bd1c6b7df49bb004bb7961c6f215047&quot; title=&quot;feat: documentation reference system&quot;&gt;4b430f9b&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/73489e2b4be42d2b2c26cb013fe532d3fb4e9ff4&quot; title=&quot;docs: reference system docs&quot;&gt;73489e2b&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Javadoc metadata (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/8dd3af67bbbf0a0f1e57d9f351d10d160dfde0f4&quot; title=&quot;feat: Javadoc metadata extraction&quot;&gt;8dd3af67&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/f7e59d4c61d77c2587da9ba0fa808c5b1e366f3b&quot; title=&quot;docs: Javadoc metadata reference&quot;&gt;f7e59d4c&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Inline formatting (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/5c7490a3d5388551e68f6f021caa6e741d0f2f86&quot; title=&quot;feat: inline formatting support&quot;&gt;5c7490a3&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/d1d807456573e9350a13e01da27a8e8fc3d317fc&quot; title=&quot;fix: inline formatting edge cases&quot;&gt;d1d80745&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;XML generator alignment (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/9867e0d25fb16973109fec922dd068991de3d5af&quot; title=&quot;feat: XML generator schema alignment&quot;&gt;9867e0d2&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/0f890f2c1d2d471ffe9343d7b15b731afc93e8e2&quot; title=&quot;fix: XML generator synchronizes metadata&quot;&gt;0f890f2c&lt;/a&gt;)&lt;/li&gt;
      &lt;/ul&gt;
    &lt;/li&gt;
    &lt;li&gt;&lt;strong&gt;Build and CI&lt;/strong&gt;: sanitizers, coverage, and reproducible builds
      &lt;ul&gt;
        &lt;li&gt;Sanitizer integration (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/6257c74758f0f382d7c4d6cd430144bd7e7a1740&quot; title=&quot;ci: add asan clang Linux job&quot;&gt;6257c747&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/88954d7f00b1d7fb8de8824e422ddc8fd7081f39&quot; title=&quot;ci: add msan Linux job&quot;&gt;88954d7f&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Coverage reporting (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/bf195759192109cee82097cce91440d0155616b5&quot; title=&quot;ci: enable coverage validation for PRs&quot;&gt;bf195759&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Relocatable build (&lt;code&gt;std::format&lt;/code&gt;) (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/7b871032ae0fd34e69370e0ab45e910255f8f1c9&quot; title=&quot;feat: switch to std::format for relocatable build&quot;&gt;7b871032&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Bootstrap modernization (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/3eec9a48e7df379a43c2abaea65a74acc9bd733f&quot; title=&quot;build(bootstrap): find_tool also looks at prefixes&quot;&gt;3eec9a48&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/71afb87b3e3c397d0681da961f754cdfb50d4aad&quot; title=&quot;build(bootstrap): run configurations create paths with path.join&quot;&gt;71afb87b&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/524e7923750f2dd8e8e19d11cc468fa8dd49f70a&quot; title=&quot;build(bootstrap): visual studio run configurations and tasks&quot;&gt;524e7923&lt;/a&gt;)&lt;/li&gt;
      &lt;/ul&gt;
    &lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;h2 id=&quot;v005-stabilization-and-public-readiness&quot;&gt;v0.0.5: Stabilization and Public Readiness&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;v0.0.5&lt;/code&gt; marked the transition toward a &lt;strong&gt;sustained development model&lt;/strong&gt; and prepared the project for &lt;strong&gt;handoff&lt;/strong&gt;. This release focused on &lt;strong&gt;presentation&lt;/strong&gt;, &lt;strong&gt;polish&lt;/strong&gt;, and &lt;strong&gt;reliability&lt;/strong&gt;—ensuring that MrDocs was ready not only for internal use but for public visibility. During this period, we expanded the set of &lt;strong&gt;public demos&lt;/strong&gt;, refined the &lt;strong&gt;website and documentation&lt;/strong&gt;, and stabilized the &lt;strong&gt;infrastructure&lt;/strong&gt; to support a growing user base. The goal was to leave the project in a state where it could continue evolving smoothly, with a stable core, clear development practices, and a professional public face.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Community and visibility&lt;/strong&gt;: Beyond the commits, this release reflected broader &lt;strong&gt;activity around the project&lt;/strong&gt;. We generated and published several &lt;strong&gt;new demos&lt;/strong&gt;, many of which revealed &lt;strong&gt;integration issues&lt;/strong&gt; that were subsequently fixed. As more external users began adopting MrDocs, the &lt;strong&gt;feedback loop accelerated&lt;/strong&gt;: bug reports, feature requests, and real-world &lt;strong&gt;edge cases&lt;/strong&gt; guided much of the work. New contributors joined the team, collaboration became more distributed, and visibility increased. Around the same time, I introduced MrDocs to developers at &lt;strong&gt;CppCon 2025&lt;/strong&gt;, where it received strong feedback from library authors testing it on their own projects. The tool was beginning to gain recognition as a &lt;strong&gt;viable, modern alternative to Doxygen&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical progress&lt;/strong&gt;: On the technical side, the work centered on correctness. We redesigned the documentation comment data structures to support &lt;strong&gt;recursive inline elements&lt;/strong&gt; and render &lt;strong&gt;Markdown and HTML-style formatting&lt;/strong&gt; correctly. We moved to &lt;strong&gt;non-nullable polymorphic types&lt;/strong&gt; and &lt;strong&gt;optional references&lt;/strong&gt; so that invariants fail at compile time rather than at runtime. User-facing updates included new &lt;strong&gt;sorting options&lt;/strong&gt;, &lt;strong&gt;automatic compilation database detection&lt;/strong&gt;, a &lt;strong&gt;quick reference index&lt;/strong&gt;, broader namespace and overload grouping, and &lt;strong&gt;LLDB formatters&lt;/strong&gt; for Clang and MrDocs symbols. We &lt;strong&gt;refreshed the website and documentation UI&lt;/strong&gt; for accessibility and responsiveness, added new &lt;strong&gt;demos&lt;/strong&gt; (including the MrDocs self-reference), and tightened CI with more sanitizers, stricter warning policies, and cross-platform bootstrap improvements.&lt;/p&gt;

&lt;p&gt;Together, these improvements completed the transition from a &lt;strong&gt;developing prototype&lt;/strong&gt; to a &lt;strong&gt;dependable product&lt;/strong&gt;. &lt;code&gt;v0.0.5&lt;/code&gt; established a &lt;strong&gt;stable foundation&lt;/strong&gt; for others to build on—&lt;strong&gt;polished&lt;/strong&gt;, &lt;strong&gt;documented&lt;/strong&gt;, and &lt;strong&gt;resilient&lt;/strong&gt;—so future releases could focus on extending capabilities rather than consolidating them. With this release, the project reached a point where the &lt;strong&gt;handoff could occur naturally&lt;/strong&gt;, closing one chapter and opening another.&lt;/p&gt;

&lt;details&gt;
  &lt;summary&gt;Categorized improvements for v0.0.5&lt;/summary&gt;

  &lt;ul&gt;
    &lt;li&gt;&lt;strong&gt;Metadata&lt;/strong&gt;: documentation inlines and safety improvements
      &lt;ul&gt;
        &lt;li&gt;Recursive documentation inlines (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/51e2b655af43f36bc2fd3e9c369dbd48046d2de6&quot; title=&quot;feat(metadata): support recursive inline elements in documentation&quot;&gt;51e2b655&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Consistent sorting options for members and namespaces (&lt;code&gt;sort-members-by&lt;/code&gt;, &lt;code&gt;sort-namespace-members-by&lt;/code&gt;) (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/f0ba28dd3526144e8053aa01eb1bbe5e90b7a4f3&quot; title=&quot;feat: `sort-members-by` option&quot;&gt;f0ba28dd&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/a0f694dcf6c7d4fd0249f42f91592f65a5d78afd&quot; title=&quot;feat: `sort-namespace-members-by` option&quot;&gt;a0f694dc&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Non-nullable polymorphic types and optional references (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/c9f9ba132627696b2140a62e078ed128edb2ea31&quot; title=&quot;feat(lib): optional nullable traits&quot;&gt;c9f9ba13&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/8ef3ffaf8628f6c1c4109f2600061c7fb3778577&quot; title=&quot;feat(lib): optional references&quot;&gt;8ef3ffaf&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/bd3e1217e60f949c2bbf692917750fac3d9fad11&quot; title=&quot;refactor(lib): use mrdocs::Optional in public API&quot;&gt;bd3e1217&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/afa558a6dd834c10ba4153828d16340304d75c2c&quot; title=&quot;refactor(Corpus): enforce non-optional polymorphic types&quot;&gt;afa558a6&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/6ba8ef6bdc5dcbb60c7b09344d3839bd39e49325&quot; title=&quot;refactor(Corpus): valueless_after_move is asserted&quot;&gt;6ba8ef6b&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Consistent metadata class family hierarchy pattern (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/6d4954975bba75c184393b5d93f3f9f040311ed0&quot;&gt;6d495497&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;MrDocsSettings includes automatic compilation database support (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/9afededbfe293f2e47fa2d7266b80772b0d0cb04&quot; title=&quot;feat: MrDocsSettings compilation database&quot;&gt;9afededb&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/a1f289de6719d8004e11ebd066c3d2a49c4d28d4&quot; title=&quot;fix: use a distinct include guard in MrDocsSettingsDB.hpp&quot;&gt;a1f289de&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Quick reference index (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/68e029c17c51711c982c6e049510c8e47f5e4f66&quot; title=&quot;feat: quick reference index page&quot;&gt;68e029c1&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/940c33f47062b6d8f915bd5e92a3ce6f6e60d774&quot; title=&quot;feat: add close button to docs nav (#1033)&quot;&gt;940c33f4&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Namespace/using/overloads grouping includes using declarations and overloads as shadows (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/69e1c3bcd9607bc2037a50f865ecea976a72f5a6&quot; title=&quot;feat: namespace tranches include using declarations&quot;&gt;69e1c3bc&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/d722b7d09ee20479fcd06726f95376589c39cc85&quot; title=&quot;feat(handlebars): using declaration page includes shadows and briefs&quot;&gt;d722b7d0&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/2b59269cbe74bce8ee261552dd35a72cfb240b20&quot; title=&quot;feat: overload sets as shadow declarations&quot;&gt;2b59269c&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Conditional &lt;code&gt;explicit&lt;/code&gt; clauses in templated methods (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/2bff4e2fbf93e35a5eeb31e0505c0bde9bcf7c6d&quot; title=&quot;feat: conditionally explicit clauses in templated methods&quot;&gt;2bff4e2f&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Destructor overloads supported in class templates (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/336ad3190fac18a69481b166d72b2d647db129c9&quot; title=&quot;feat: destructor overloads in class templates&quot;&gt;336ad319&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Using declarations include all shadow variants (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/88a1cebf1e62b87551cf2fd6ec5e1705d3a4e34a&quot; title=&quot;test: test cases for all using declaration variants&quot;&gt;88a1cebf&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/9253fd8f228208d17d94a5dc34a75c8c6c5c542d&quot; title=&quot;test: using declaration shadows only include previous declarations&quot;&gt;9253fd8f&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/a7d5cf6a00874addd313f74b7e833e0df6df1aaa&quot; title=&quot;test: using declaration with mixed shadows&quot;&gt;a7d5cf6a&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;&lt;code&gt;show-enum-constants&lt;/code&gt; option (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/07b69e1c92eee1b8d4176a4076161c10759d8aaf&quot; title=&quot;feat: show-enum-constants option&quot;&gt;07b69e1c&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Custom LLDB formatters for Clang and MrDocs symbols (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/069bd8f4f6aa85f24c5d938542e42791ee91c46a&quot; title=&quot;feat(lldb): LLDB data formatters&quot;&gt;069bd8f4&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/f83eca17b1ca0ef593fd55373d48f48d101ec2cd&quot; title=&quot;fix(lldb): only handle Info types directly in mrdocs namespace&quot;&gt;f83eca17&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/1b39fdd76abb2d531ab28f46ee086571dd745e44&quot; title=&quot;fix(lldb): clang ast formatters&quot;&gt;1b39fdd7&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/aefc53c7b016d43a46c75b150d41fec2f82f00b4&quot; title=&quot;fix(lldb): consistent &amp;lt;unnamed&amp;gt; clang summary&quot;&gt;aefc53c7&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Performance, correctness, and safety (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/d1788049ffa6b8412869af319820328a05a24536&quot; title=&quot;feat: templates receive config via reflection&quot;&gt;d1788049&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/3bd94cff54039b60217ec14767f359bc54f168d1&quot; title=&quot;refactor(Config): config dom object update function&quot;&gt;3bd94cff&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/8a8115602137641f5dab378f292843bc9ad56f37&quot; title=&quot;fix: overloads finalizer preemptively emplaces members&quot;&gt;8a811560&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/3ff37448d207b89e83a27e3ff58e6401a76eaee3&quot; title=&quot;fix: legible names handle using declarations as shadow&quot;&gt;3ff37448&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/ad1e7baa611ba05f02759347d114b4cdb464a3c4&quot; title=&quot;Remove duplicate template argument list for excluded class template specialization&quot;&gt;ad1e7baa&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/b10b8aa3bd2917e35e480390e2ce47d5b8dc9d48&quot; title=&quot;fix: symbol shadows table has a single column&quot;&gt;b10b8aa3&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/482c0be836921577afa08e62a7ed1d1829fafc9a&quot; title=&quot;refactor: xml generator use config values directly&quot;&gt;482c0be8&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/d66da796fe9877bbece0bec7983b8c25bc16d1f5&quot; title=&quot;fix(handlebars): html code blocks start on the first line&quot;&gt;d66da796&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/ec8daa11085ba58b52628c468f5591ffc0340208&quot; title=&quot;fix(handlebars): starts_with helper validates arguments&quot;&gt;ec8daa11&lt;/a&gt;, &lt;a 
href=&quot;https://github.com/cppalliance/mrdocs/commit/5234b67cd0745048408705c50b2108cf4f09aedd&quot; title=&quot;fix(handlebars): recursively traversed namespaces do not include description&quot;&gt;5234b67c&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/5e879b102c76f416d0a40cae87ef16226ddc1431&quot; title=&quot;fix(handlebars): records include protected base classes&quot;&gt;5e879b10&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/35e14c93f27d323dea675bb83eb78d7077c8ad9d&quot; title=&quot;fix(ci,style): improve asset copying and enhance UI contrast for docs site (#979)&quot;&gt;35e14c93&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/d5a28a8973ef3b6d01f2d48b229c9e666e093d7d&quot; title=&quot;feat(handlebars): final specifier&quot;&gt;d5a28a89&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/6878c199920260f6ccca2f9be0f933bf08318398&quot; title=&quot;fix: `using` synopsis uses the nameinfo only&quot;&gt;6878c199&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/21ce3e74db3ff5fa9f7b0180530b95a3ef32a1d3&quot; title=&quot;fix: std::formatter for clang::mrdocs::SymbolID&quot;&gt;21ce3e74&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/2da2081b0b0724309fc7a68c071e830e7faa2da9&quot; title=&quot;fix: remove an unused `else if` in record.hbs&quot;&gt;2da2081b&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/b528ae11a46f6b8fda4e47a30e32bc8868cc9555&quot; title=&quot;fix: simplify the logic about base classes in record.hbs&quot;&gt;b528ae11&lt;/a&gt;)&lt;/li&gt;
      &lt;/ul&gt;
    &lt;/li&gt;
    &lt;li&gt;&lt;strong&gt;Website and Documentation&lt;/strong&gt;: new demos and a new website
      &lt;ul&gt;
        &lt;li&gt;New demos (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/cfa9eb7d1c7770ba6e1b6d12bf7322cb81afa4d2&quot; title=&quot;docs: fmt demo&quot;&gt;cfa9eb7d&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/1b930b863a7a7a763ef1349f51a1813769a84e41&quot; title=&quot;docs: fmt demo&quot;&gt;1b930b86&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/c18be83e355a1a6bdea95f21f911080869267a07&quot; title=&quot;docs: nlohmann.json demo&quot;&gt;c18be83e&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/177fae4a79f6d8d4665026f25aa2ce2482c59a09&quot; title=&quot;docs: extension sorts demos by release&quot;&gt;177fae4a&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/33275050025921c6aa6c241268899920f456e652&quot; title=&quot;docs: add range-v3 demo&quot;&gt;33275050&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Website and documentation refresh (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/35e14c93f27d323dea675bb83eb78d7077c8ad9d&quot; title=&quot;fix(ci,style): improve asset copying and enhance UI contrast for docs site (#979)&quot;&gt;35e14c93&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/a643774216d553b7f0f16c3e9b7380c17da7f0c1&quot; title=&quot;docs: redesign landing page&quot;&gt;a6437742&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Self-documentation (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/f2a5f77eb9d2273a15329f3d5c9963c1f48d9952&quot; title=&quot;docs: MrDocs documents itself&quot;&gt;f2a5f77e&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Antora enhancements (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/5ed0f48fda415df1d3f67bff4c8072921bffeb29&quot; title=&quot;docs: Antora enhancements&quot;&gt;5ed0f48f&lt;/a&gt;)&lt;/li&gt;
      &lt;/ul&gt;
    &lt;/li&gt;
    &lt;li&gt;&lt;strong&gt;Build, Testing, and Releases&lt;/strong&gt;: improvements and hardening CI
      &lt;ul&gt;
        &lt;li&gt;Toolchain and CI hardening (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/6257c74758f0f382d7c4d6cd430144bd7e7a1740&quot; title=&quot;ci: add asan clang Linux job&quot;&gt;6257c747&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/88954d7f00b1d7fb8de8824e422ddc8fd7081f39&quot; title=&quot;ci: add msan Linux job&quot;&gt;88954d7f&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/bf195759192109cee82097cce91440d0155616b5&quot; title=&quot;ci: enable coverage validation for PRs&quot;&gt;bf195759&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/ba0dcfd37dee134f363cd0365d435b39fd6b766b&quot; title=&quot;ci: treat warnings as errors&quot;&gt;ba0dcfd3&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Bootstrap improvements (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/3eec9a48e7df379a43c2abaea65a74acc9bd733f&quot; title=&quot;build(bootstrap): find_tool also looks at prefixes&quot;&gt;3eec9a48&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/71afb87b3e3c397d0681da961f754cdfb50d4aad&quot; title=&quot;build(bootstrap): run configurations create paths with path.join&quot;&gt;71afb87b&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/524e7923750f2dd8e8e19d11cc468fa8dd49f70a&quot; title=&quot;build(bootstrap): visual studio run configurations and tasks&quot;&gt;524e7923&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/4b79ef4136fabd8673d63361a0ba0412ed94330f&quot; title=&quot;build(bootstrap): probe vcvarsall environment&quot;&gt;4b79ef41&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/7d27204ee78255d557a384e4031688fe51a58779&quot; title=&quot;build(bootstrap): Boost documentation run configuration folder&quot;&gt;7d27204e&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/988e9ebc576690c8885def76ee8ec4796764703&quot; title=&quot;build(bootstrap): config info for docs&quot;&gt;988e9ebc&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/94a5b799543e7b62802c8a18ca26ec156086ad24&quot; title=&quot;build(bootstrap): remove dependency build directories after installation&quot;&gt;94a5b799&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/be7332cf2a9b727fc8b4913c8b4303842505caa2&quot; title=&quot;build: presets use optimizeddebug to match bootstrap&quot;&gt;be7332cf&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/4d705c96be5daa974f0fc3417383b86eb3a9608d&quot; title=&quot;build(bootstrap): ensure git symlinks&quot;&gt;4d705c96&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/f48bbd2fc9ee9e77120ed374997ba3ded4a6963d&quot; 
title=&quot;build: bootstrap enables libcxx hardening mode&quot;&gt;f48bbd2f&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/f93634610e131c0ab9ec6c45d4644eed4a16186d&quot; title=&quot;fix: bootstrap uses latest clang include directory&quot;&gt;f9363461&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Performance, correctness, and safety (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/5aa714b21e11dbc64e51f81d7097adda59cd7cb4&quot; title=&quot;build: custom target to test all generators&quot;&gt;5aa714b2&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/469f41ee79957525e5fd52e1e3838624d03458f1&quot; title=&quot;remove_bad_files script does not rely on mapfile&quot;&gt;469f41ee&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/629f184895a04117b057602df9785cd23661f139&quot; title=&quot;build: quote genexp for target_include_directories&quot;&gt;629f1848&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/2f0dd8c1c4dfd1a01e9543049ad00ca2bc9df984&quot; title=&quot;ci: antora workflow uses full clone&quot;&gt;2f0dd8c1&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/acf7c10709a1f3a4436101522d799718415ebad8&quot; title=&quot;ci: debug level for antora generation and copy&quot;&gt;acf7c107&lt;/a&gt;)&lt;/li&gt;
      &lt;/ul&gt;
    &lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;h1 id=&quot;2026-beyond-the-mvp&quot;&gt;2026: Beyond the MVP&lt;/h1&gt;

&lt;p&gt;MrDocs now ships a working MVP, but significant &lt;strong&gt;foundational work&lt;/strong&gt; remains. The priority framework is the same: start with &lt;strong&gt;gap analysis&lt;/strong&gt;, shape an &lt;strong&gt;MVP&lt;/strong&gt; (or now just a viable product), and rank follow-on work against that baseline. In 2025 we invested in &lt;strong&gt;presentation&lt;/strong&gt; earlier than &lt;strong&gt;infrastructure&lt;/strong&gt;. That inversion still raises costs: each foundational change forces rework across user-facing pieces.&lt;/p&gt;

&lt;p&gt;I do not know how the leadership model will evolve in 2026. The team might keep a single coordinator or move to shared stewardship. Regardless, the project only succeeds if we continue investing in &lt;strong&gt;foundational capabilities&lt;/strong&gt;. The steps below outline the &lt;strong&gt;recommendations&lt;/strong&gt; I believe will help keep MrDocs &lt;strong&gt;sustainable over the long term&lt;/strong&gt;.&lt;/p&gt;

&lt;script src=&quot;https://cdn.jsdelivr.net/npm/mermaid@11.12.0/dist/mermaid.min.js&quot;&gt;&lt;/script&gt;
&lt;div class=&quot;mermaid&quot;&gt;
%%{init: {&quot;theme&quot;: &quot;base&quot;, &quot;themeVariables&quot;: {
  &quot;primaryColor&quot;: &quot;#f2eadf&quot;,
  &quot;primaryBorderColor&quot;: &quot;#ffe8c6&quot;,
  &quot;primaryTextColor&quot;: &quot;#000000&quot;,
  &quot;lineColor&quot;: &quot;#ffe8c8&quot;,
  &quot;secondaryColor&quot;: &quot;#e8ebf3&quot;,
  &quot;tertiaryColor&quot;: &quot;#eceaf4&quot;,
  &quot;fontSize&quot;: &quot;14px&quot;
}}}%%
mindmap
  root((2026 Priorities))
    Reflection
      Describe symbols
      Shared walkers
    Metadata
      Recursive docs
      Stable names
      Typed expressions
    Extensions
      Script helpers
      Plugin ABI
    Dependencies
      Curated toolchain
      Opt-in stubs
    Community
      Integration demos
      Outreach cadence
&lt;/div&gt;

&lt;h2 id=&quot;strategic-prioritization&quot;&gt;Strategic Prioritization&lt;/h2&gt;

&lt;p&gt;Aligning &lt;strong&gt;priorities&lt;/strong&gt; is itself the highest priority. At the start of my tenure as project lead we followed a strict sequence—&lt;strong&gt;gap analysis&lt;/strong&gt;, then an &lt;strong&gt;MVP&lt;/strong&gt;, then a set of &lt;strong&gt;priorities&lt;/strong&gt;—but that model exposed limitations once work began to land. The &lt;strong&gt;issue tracker&lt;/strong&gt; does not reflect how priorities relate to each other, and as individual tickets close the priority stack does not adjust automatically. The project’s &lt;strong&gt;complexity&lt;/strong&gt; now amplifies the risk: without a clear view of &lt;strong&gt;dependencies&lt;/strong&gt; we can assign a high-value engineer to a task that drags several teammates into the same bottleneck, resulting in net-negative progress. Defining priorities therefore includes understanding the team’s &lt;strong&gt;skills&lt;/strong&gt;, mapping how they &lt;strong&gt;collaborate&lt;/strong&gt;, and making sure no one becomes a &lt;strong&gt;sink&lt;/strong&gt; that blocks everyone else. &lt;strong&gt;Alignment&lt;/strong&gt; across roles remains essential so the plan reflects the people who actually execute it.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;tooling&lt;/strong&gt; already exists to put this into practice. &lt;strong&gt;GitHub&lt;/strong&gt; now lets us mark issues as &lt;strong&gt;blocked by&lt;/strong&gt; or &lt;strong&gt;blocking&lt;/strong&gt; others and model &lt;strong&gt;parent/child relationships&lt;/strong&gt;. We can use those relationships to &lt;strong&gt;reorganize the priorities programmatically&lt;/strong&gt;. Once the relationships are encoded, &lt;strong&gt;priorities gain semantic meaning&lt;/strong&gt; because we can explain why a small ticket matters in the larger story. Priorities become the &lt;strong&gt;byproduct of higher-level goals&lt;/strong&gt;—narratives about the product—rather than a short-term &lt;strong&gt;static wish list&lt;/strong&gt; of individual features.&lt;/p&gt;

&lt;p&gt;We also need to strengthen the &lt;strong&gt;operational tools&lt;/strong&gt; that keep the team coordinated. &lt;strong&gt;Coverage&lt;/strong&gt; in CI is still far below our other C++ Alliance projects, and the gap shows up as crashes whenever a new library explores an untested path in the codebase. Improving coverage is a priority in its own right. We can pair that effort with &lt;strong&gt;automation&lt;/strong&gt; and &lt;strong&gt;analysis tools&lt;/strong&gt; like &lt;strong&gt;ReviewDog&lt;/strong&gt; to accelerate code-review feedback, &lt;strong&gt;Danger.js&lt;/strong&gt; to enforce pull-request policies, &lt;strong&gt;CodeClimate&lt;/strong&gt; or similar services for &lt;strong&gt;static analysis&lt;/strong&gt;, and &lt;strong&gt;clang-tidy&lt;/strong&gt; checks to catch issues earlier. Finally, we can invite other collaborators to revisit the &lt;strong&gt;gap analysis&lt;/strong&gt; and &lt;strong&gt;MVP&lt;/strong&gt;, including C++ Alliance colleagues who specialize in &lt;strong&gt;marketing&lt;/strong&gt;. Their perspective will help us assign priorities that reflect both &lt;strong&gt;technical dependencies&lt;/strong&gt; and the project’s &lt;strong&gt;broader positioning&lt;/strong&gt;.&lt;/p&gt;

&lt;h2 id=&quot;reflection&quot;&gt;Reflection&lt;/h2&gt;

&lt;p&gt;The corpus keeps drifting out of sync because every important path in MrDocs duplicates representation by hand. Almost every subsystem reflects data from one format to another, and almost every internal operation traverses those structures. Each time we adjust a field we have to edit dozens of call sites, and even small mistakes create inconsistent state—different copies of the “truth” that evolve independently. Reflection eliminates this churn. If we can describe the corpus once and let the code iterate over those descriptions, the boilerplate disappears, the traversals remain correct, and we stop fighting the same battle.&lt;/p&gt;

&lt;p&gt;A lightweight option would be to describe the corpus in JSON the way we treat configuration, but the volume of metadata in the AST makes that impractical. Instead, we lean on &lt;strong&gt;compile-time reflection utilities&lt;/strong&gt; such as &lt;strong&gt;Boost.Describe&lt;/strong&gt; and &lt;strong&gt;Boost.mp11&lt;/strong&gt;. With those libraries we can convert the corpus to any representation, and each generator—including future &lt;strong&gt;binary&lt;/strong&gt; or &lt;strong&gt;JSON&lt;/strong&gt; targets—sees the same schema automatically. MrDocs can even emit the schema that powers each generator, keeping the schema, DOM, and documentation in sync. This approach also fixes the long-standing lag in the &lt;strong&gt;XML generator&lt;/strong&gt;, where updates have historically been manual and error-prone.&lt;/p&gt;

&lt;p&gt;The following sequence diagram illustrates how reflection consolidates data flow without duplicating logic:&lt;/p&gt;

&lt;script src=&quot;https://cdn.jsdelivr.net/npm/mermaid@11.12.0/dist/mermaid.min.js&quot;&gt;&lt;/script&gt;
&lt;div class=&quot;mermaid&quot;&gt;
sequenceDiagram
  participant AST as Clang AST
  participant Corpus as Typed Corpus
  participant Traits as Reflect Traits
  participant DOM as Corpus DOM
  participant Generators as Generators
  participant Clients as Integrations
  AST-&amp;gt;&amp;gt;Corpus: Extract symbols
  Corpus-&amp;gt;&amp;gt;Traits: Publish descriptors
  Traits-&amp;gt;&amp;gt;DOM: Build type-erased nodes
  DOM-&amp;gt;&amp;gt;Generators: Supply normalized schema
  Generators-&amp;gt;&amp;gt;Clients: Deliver outputs
  Clients-&amp;gt;&amp;gt;Generators: Provide feedback
  Generators-&amp;gt;&amp;gt;Traits: Request updates
&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Process:&lt;/strong&gt; We can start by describing the &lt;strong&gt;Symbols&lt;/strong&gt;, &lt;strong&gt;Javadoc&lt;/strong&gt;, and related classes, shipping each refactor as a dedicated PR so reviews stay contained. Each description removes custom specializations, reverts to &lt;code&gt;= default&lt;/code&gt; where possible, and replaces old logic with &lt;strong&gt;static asserts&lt;/strong&gt; that enforce invariants. We generalize the main merge logic first, then update callers such as the &lt;strong&gt;AST visitor&lt;/strong&gt; that walks &lt;code&gt;RecordTranche&lt;/code&gt;, ensuring the &lt;strong&gt;comments data structure&lt;/strong&gt; matches the new descriptions. A &lt;code&gt;MRDOCS_DESCRIBE_DERIVED&lt;/code&gt; helper can enumerate derived classes so every visit routine becomes generic. Once the C++ side is described, we rebuild the lazy DOM objects on top of Describe so their types mirror the DOM layout directly.&lt;/p&gt;
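
&lt;p&gt;A stdlib-only sketch of that last step, with hypothetical type names standing in for the real Info classes: once the derived kinds are enumerated in a single place, every visit routine can be written generically. The proposed &lt;code&gt;MRDOCS_DESCRIBE_DERIVED&lt;/code&gt; helper would generate such a list from the class declarations:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &amp;lt;cassert&amp;gt;
#include &amp;lt;iostream&amp;gt;
#include &amp;lt;string&amp;gt;
#include &amp;lt;utility&amp;gt;
#include &amp;lt;variant&amp;gt;
#include &amp;lt;vector&amp;gt;

// Hypothetical stand-ins for the Info class family; not the real MrDocs types.
struct NamespaceInfo { std::string name; };
struct RecordInfo    { std::string name; };
struct FunctionInfo  { std::string name; };

// Enumerate the derived kinds in exactly one place. A helper like the
// proposed MRDOCS_DESCRIBE_DERIVED would generate this list from the
// class declarations instead of maintaining it by hand.
using AnyInfo = std::variant&amp;lt;NamespaceInfo, RecordInfo, FunctionInfo&amp;gt;;

// One generic visit routine replaces a hand-written switch per kind.
template &amp;lt;class F&amp;gt;
decltype(auto) visitInfo(AnyInfo const&amp;amp; info, F&amp;amp;&amp;amp; f) {
    return std::visit(std::forward&amp;lt;F&amp;gt;(f), info);
}

int main() {
    std::vector&amp;lt;AnyInfo&amp;gt; corpus{
        NamespaceInfo{&quot;boost&quot;}, RecordInfo{&quot;url_view&quot;},
        FunctionInfo{&quot;parse_uri&quot;}};
    std::string names;
    for (auto const&amp;amp; info : corpus)
        visitInfo(info, [&amp;amp;](auto const&amp;amp; i) { names += i.name + ' '; });
    std::cout &amp;lt;&amp;lt; names &amp;lt;&amp;lt; '\n';
    assert(names == &quot;boost url_view parse_uri &quot;);
    return 0;
}
&lt;/code&gt;&lt;/pre&gt;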

&lt;p&gt;&lt;strong&gt;Use cases:&lt;/strong&gt; Redundant non-member functions like &lt;code&gt;tag_invoke&lt;/code&gt;, &lt;code&gt;operator&amp;lt;=&amp;gt;&lt;/code&gt;, &lt;code&gt;toString&lt;/code&gt;, and &lt;code&gt;merge&lt;/code&gt; collapse into &lt;strong&gt;shared implementations&lt;/strong&gt; that use traits unless real customization is required. New generators—binary, JSON, or otherwise—drop in with minimal code because the schema and traversal logic already exist. The XML generator stops maintaining a private representation and simply reads the described elements. We can finally standardize &lt;strong&gt;naming conventions&lt;/strong&gt; (kebab-case or camelCase) because the schema enforces them. Generating the &lt;strong&gt;Relax NG Compact&lt;/strong&gt; file becomes just another output produced from the same description. A metadata walker can then discover auxiliary objects and emit &lt;strong&gt;DOM documentation automatically&lt;/strong&gt;. As a side effect of integrating Boost.mp11, we can extend the &lt;code&gt;tag_invoke&lt;/code&gt; context protocol with tuple-based helpers for &lt;code&gt;mrdocs::FromValue&lt;/code&gt;, further narrowing the gap between concrete and DOM objects.&lt;/p&gt;

&lt;h2 id=&quot;metadata&quot;&gt;Metadata&lt;/h2&gt;

&lt;p&gt;MrDocs still carries metadata gaps that are too large to ignore. The subsections below highlight the three extraction areas that demand sustained effort; each of them blocks the rest of the system from staying consistent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recursive blocks and inlines.&lt;/strong&gt; Release 0.0.5 introduced the data structures for recursive Javadoc elements, but we still do not parse all of those structures. The fix is straightforward in concept—extend the CommonMark-based parser so every block and inline variant becomes a first-class node—but the implementation is long because there are many element types. We can ship this incrementally by opening issues and sub-issues, tackling one structure per PR, and starting with block elements before moving to inlines. The existing post-process documentation finalizer already contains the mechanics; we just need to wire each rule into the new documentation nodes.&lt;/p&gt;
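
&lt;p&gt;The target shape is easy to state even though the parser work is long. As a sketch (the node layout below is illustrative, not the shipped hierarchy), each block or inline holds children of the same node type, so nesting such as italics inside bold inside a paragraph falls out of one recursion:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &amp;lt;cassert&amp;gt;
#include &amp;lt;iostream&amp;gt;
#include &amp;lt;string&amp;gt;
#include &amp;lt;vector&amp;gt;

// Hypothetical sketch of a recursive documentation node, not the shipped
// MrDocs hierarchy: every block or inline may hold inline children.
struct DocNode {
    std::string kind;              // e.g. &quot;paragraph&quot;, &quot;bold&quot;, &quot;text&quot;
    std::string text;              // leaf content; empty for containers
    std::vector&amp;lt;DocNode&amp;gt; children; // recursion gives nesting for free
};

// A generic renderer walks the recursion once; each output format only
// customizes how a kind is printed.
std::string render(DocNode const&amp;amp; n) {
    std::string out = n.text;
    for (auto const&amp;amp; c : n.children)
        out += render(c);
    if (n.kind == &quot;bold&quot;) return &quot;&amp;lt;b&amp;gt;&quot; + out + &quot;&amp;lt;/b&amp;gt;&quot;;
    if (n.kind == &quot;italic&quot;) return &quot;&amp;lt;i&amp;gt;&quot; + out + &quot;&amp;lt;/i&amp;gt;&quot;;
    return out;
}

int main() {
    DocNode para{&quot;paragraph&quot;, &quot;&quot;, {
        {&quot;text&quot;, &quot;see &quot;, {}},
        {&quot;bold&quot;, &quot;&quot;, {{&quot;text&quot;, &quot;the &quot;, {}},
                      {&quot;italic&quot;, &quot;corpus&quot;, {}}}}}};
    std::cout &amp;lt;&amp;lt; render(para) &amp;lt;&amp;lt; '\n';
    assert(render(para) == &quot;see &amp;lt;b&amp;gt;the &amp;lt;i&amp;gt;corpus&amp;lt;/i&amp;gt;&amp;lt;/b&amp;gt;&quot;);
    return 0;
}
&lt;/code&gt;&lt;/pre&gt;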

&lt;p&gt;&lt;strong&gt;Legible names.&lt;/strong&gt; The current name generator appends hash fragments to differentiate symbols lazily, which makes references unstable and awkward. We need a stable allocator that remembers which symbols claimed which names. The highest-priority symbol should receive the base name, and suffixes should cascade to less critical overloads so the visible entries stay predictable. Moving the generator into the extraction phase and storing the assignments there ensures anchors remain stable, lets us update artifacts such as the Boost.URL tagfile, and produces names that actually read well.&lt;/p&gt;
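
&lt;p&gt;A sketch of the allocator idea, assuming a hypothetical &lt;code&gt;NameAllocator&lt;/code&gt; interface rather than the real MrDocs generator: symbols claim names in priority order, the first claimant keeps the base name, and deterministic suffixes cascade to the rest:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &amp;lt;cassert&amp;gt;
#include &amp;lt;iostream&amp;gt;
#include &amp;lt;string&amp;gt;
#include &amp;lt;unordered_map&amp;gt;

// Sketch of a stable name allocator; the interface is hypothetical, not
// the MrDocs API. The first, highest-priority claimant keeps the base
// name; later claimants receive deterministic numeric suffixes instead
// of hash fragments.
class NameAllocator {
    std::unordered_map&amp;lt;std::string, int&amp;gt; used_;

public:
    // Call in priority order: the most important overload first.
    std::string claim(std::string const&amp;amp; base) {
        int&amp;amp; n = used_[base];
        ++n;
        if (n == 1)
            return base;                        // base name stays readable
        return base + '-' + std::to_string(n);  // suffixes cascade
    }
};

int main() {
    NameAllocator names;
    // Assignments are remembered, so anchors stay stable across runs.
    assert(names.claim(&quot;parse&quot;) == &quot;parse&quot;);
    assert(names.claim(&quot;parse&quot;) == &quot;parse-2&quot;);
    assert(names.claim(&quot;parse&quot;) == &quot;parse-3&quot;);
    assert(names.claim(&quot;resolve&quot;) == &quot;resolve&quot;);
    std::cout &amp;lt;&amp;lt; &quot;ok&quot; &amp;lt;&amp;lt; '\n';
    return 0;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Running the allocator during extraction, and persisting its table, is what would keep anchors identical from one documentation build to the next.&lt;/p&gt;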

&lt;p&gt;&lt;strong&gt;Populate expressions.&lt;/strong&gt; Whenever the extractor fails to recognize an expression, it falls back to the raw source string. That shortcut prevents us from applying the usual transformations, especially inside requires-expressions where implementation-defined symbols appear. We should introduce typed representations for the constructs we already understand and continue to store strings for the expressions we have not modeled yet. As coverage grows, more expressions flow through the structured pipeline, and the remaining string-based nodes shrink to the truly unknown cases.&lt;/p&gt;
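
&lt;p&gt;A minimal sketch of the fallback design (names are illustrative): each expression node is either a construct we have modeled or the raw source string, and transformations apply only to the structured side:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &amp;lt;cassert&amp;gt;
#include &amp;lt;iostream&amp;gt;
#include &amp;lt;string&amp;gt;
#include &amp;lt;variant&amp;gt;

// Hypothetical sketch: an expression is either a construct we have modeled
// or, as a fallback, the raw source string the extractor saw.
struct Ref { std::string symbol; };           // a modeled construct
using Expr = std::variant&amp;lt;Ref, std::string&amp;gt;;  // string = not yet modeled

// Transformations apply only to structured nodes; raw strings pass through
// verbatim, so nothing breaks while coverage grows.
std::string renderExpr(Expr const&amp;amp; e) {
    if (auto const* r = std::get_if&amp;lt;Ref&amp;gt;(&amp;amp;e))
        return &quot;link:&quot; + r-&amp;gt;symbol;           // e.g. emit a cross-reference
    return std::get&amp;lt;std::string&amp;gt;(e);          // verbatim fallback
}

int main() {
    assert(renderExpr(Ref{&quot;std::integral&quot;}) == &quot;link:std::integral&quot;);
    assert(renderExpr(std::string{&quot;sizeof(T) &amp;gt; 4&quot;}) == &quot;sizeof(T) &amp;gt; 4&quot;);
    std::cout &amp;lt;&amp;lt; &quot;ok&quot; &amp;lt;&amp;lt; '\n';
    return 0;
}
&lt;/code&gt;&lt;/pre&gt;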

&lt;h2 id=&quot;extensions-and-plugins&quot;&gt;Extensions and Plugins&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Extensions&lt;/strong&gt; and &lt;strong&gt;plugins&lt;/strong&gt; aim at the same outcome—letting projects &lt;strong&gt;customize MrDocs&lt;/strong&gt;—but they operate at different layers. Extensions run &lt;strong&gt;inside the application&lt;/strong&gt;, usually through &lt;strong&gt;interpreters&lt;/strong&gt; we bundle. We already ship &lt;strong&gt;Lua&lt;/strong&gt; and &lt;strong&gt;Duktape&lt;/strong&gt;, yet today they only power a handful of &lt;strong&gt;Handlebars helpers&lt;/strong&gt;. The plan is to widen that surface: add more interpreters where it makes sense, extend helper support so extensions can participate in &lt;strong&gt;escaping&lt;/strong&gt; and &lt;strong&gt;formatting&lt;/strong&gt;, and give extensions the ability to &lt;strong&gt;consume the entire corpus&lt;/strong&gt;. With that access, an extension can list every symbol, emit metadata in formats we do not yet support, or transform the corpus before it reaches a native generator. The same mechanism enables &lt;strong&gt;quality-of-life utilities&lt;/strong&gt;, such as a generator extension that checks whether a library’s public API changed according to a policy defined in code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Plugins&lt;/strong&gt;, by contrast, are &lt;strong&gt;compiled artifacts&lt;/strong&gt;. They unlock similar customization goals, but their &lt;strong&gt;ABI must stay stable&lt;/strong&gt;, and platform differences mean a plugin built on one system will not run on another. To keep the surface manageable we should expose a &lt;strong&gt;narrow wrapper&lt;/strong&gt;: pass plugins a set of &lt;strong&gt;DOM proxies&lt;/strong&gt; so they never depend on the underlying &lt;strong&gt;Info classes&lt;/strong&gt;, use &lt;strong&gt;traits&lt;/strong&gt; or &lt;strong&gt;versioned interfaces&lt;/strong&gt; to handle incompatibilities, and &lt;strong&gt;plan the API carefully&lt;/strong&gt; before release.&lt;/p&gt;
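
&lt;p&gt;One way such a wrapper could look, sketched with hypothetical names and no claim about the eventual ABI: plugins receive abstract DOM proxies plus an explicit interface version that the host checks before any other call:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &amp;lt;cassert&amp;gt;
#include &amp;lt;cstdint&amp;gt;
#include &amp;lt;iostream&amp;gt;
#include &amp;lt;string&amp;gt;

// Sketch of a narrow, versioned plugin surface; all names are hypothetical.
// Plugins see an abstract DOM proxy, never the concrete Info classes, and
// negotiate an interface version before any other call.
struct DomProxy {
    virtual ~DomProxy() = default;
    virtual std::string name() const = 0;
};

struct PluginApiV1 {
    std::uint32_t version = 1;  // bumped on every breaking change
    std::string (*describe)(DomProxy const&amp;amp;) = nullptr;
};

struct RecordProxy final : DomProxy {
    std::string name() const override { return &quot;url_view&quot;; }
};

std::string describeSymbol(DomProxy const&amp;amp; d) { return &quot;symbol: &quot; + d.name(); }

int main() {
    PluginApiV1 api{1, &amp;amp;describeSymbol};
    // The host would reject a plugin built against a different version.
    assert(api.version == 1);
    RecordProxy r;
    assert(api.describe(r) == &quot;symbol: url_view&quot;);
    std::cout &amp;lt;&amp;lt; &quot;ok&quot; &amp;lt;&amp;lt; '\n';
    return 0;
}
&lt;/code&gt;&lt;/pre&gt;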

&lt;h2 id=&quot;dependency-resilience&quot;&gt;Dependency Resilience&lt;/h2&gt;

&lt;p&gt;Working with &lt;strong&gt;dependent libraries&lt;/strong&gt; is still the most fragile part of the MrDocs workflow. &lt;strong&gt;Environments drift&lt;/strong&gt;, &lt;strong&gt;transitive dependencies change&lt;/strong&gt; without notice, and heavyweight projects force us to install &lt;strong&gt;toolchains&lt;/strong&gt; we do not actually need. In &lt;strong&gt;Boost.URL&lt;/strong&gt; alone we watch upstream Boost libraries evolve every few weeks; sometimes the code truly breaks, but just as often a new release exercises an untested path in MrDocs and triggers a crash because our &lt;strong&gt;coverage&lt;/strong&gt; is still thin. Other ecosystems push the cost even higher: documenting a library that depends on &lt;strong&gt;LLVM&lt;/strong&gt; can turn a three-second render into an hours-long process because the transitive LLVM &lt;strong&gt;headers&lt;/strong&gt; MrDocs needs are generated at build time, so we must compile and install LLVM merely to obtain include files. &lt;strong&gt;CI environments&lt;/strong&gt; regularly fail for the same reason.&lt;/p&gt;

&lt;p&gt;We already experimented with &lt;strong&gt;mitigation strategies&lt;/strong&gt; and should refine them rather than abandon the ideas. Shipping a &lt;strong&gt;curated standard library&lt;/strong&gt; with MrDocs removes one entire category of instability. The option will soon be disabled by default, but users can still enable it, or even combine it with the system library, when &lt;strong&gt;reproducibility&lt;/strong&gt; matters more than fidelity to the host environment. This mirrors how &lt;strong&gt;Clang&lt;/strong&gt; ships &lt;strong&gt;libc++&lt;/strong&gt;: it does not allow invalid code; it simply guarantees a known baseline.&lt;/p&gt;

&lt;p&gt;On top of that, we have preliminary support for &lt;strong&gt;user-defined stubs&lt;/strong&gt;. &lt;strong&gt;Configuration files&lt;/strong&gt; can provide short descriptions of expected symbols from hard-to-build dependencies, and MrDocs can &lt;strong&gt;inject those during extraction&lt;/strong&gt;. For predictable patterns we can &lt;strong&gt;auto-generate stubs&lt;/strong&gt; when the user opts in, synthesizing symbols rather than failing immediately. None of this accepts invalid code—the compiler still diagnoses real errors—but it shields projects from breakage when a &lt;strong&gt;transitive dependency&lt;/strong&gt; tweaks implementation details or when generated headers are unavailable. The features remain &lt;strong&gt;optional&lt;/strong&gt;, so teams can disable synthesis to debug the underlying issue and still benefit from the faster path when schedules are tight. Even if the project moves in another direction we should &lt;strong&gt;document the proposal&lt;/strong&gt; and remove the existing stub hooks deliberately rather than letting them linger undocumented.&lt;/p&gt;
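
&lt;p&gt;A configuration fragment can make the stub idea concrete. The keys below are purely illustrative, not the shipped MrDocs option surface; the point is only that a project could declare symbols from a hard-to-build dependency so extraction can synthesize them:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-yaml&quot;&gt;# Hypothetical sketch only: key names are not real MrDocs options.
stubs:
  - symbol: llvm::StringRef
    kind: class
    brief: Stand-in for a transitive LLVM type we never render.
  - symbol: llvm::raw_ostream
    kind: class
    brief: Generated headers unavailable; synthesize during extraction.
&lt;/code&gt;&lt;/pre&gt;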

&lt;p&gt;The payoffs are clear. &lt;strong&gt;Boost libraries&lt;/strong&gt; could generate documentation without cloning the entire super-project, relying on &lt;strong&gt;SettingsDB&lt;/strong&gt; to produce a &lt;strong&gt;compilation database&lt;/strong&gt; and skipping &lt;strong&gt;CMake&lt;/strong&gt; entirely. MrDocs itself could publish reference docs without building &lt;strong&gt;LLVM&lt;/strong&gt; because the required symbols would come from stubs. &lt;strong&gt;Releases&lt;/strong&gt; would stop breaking every time a transitive dependency changes, and developers would regain hours currently spent firefighting. These are the &lt;strong&gt;stability&lt;/strong&gt; and &lt;strong&gt;reproducibility&lt;/strong&gt; gains we need if we want MrDocs to be the &lt;strong&gt;default tooling&lt;/strong&gt; for large C++ ecosystems.&lt;/p&gt;

&lt;h2 id=&quot;follow-up-issues-for-v006&quot;&gt;Follow-up Issues for v0.0.6&lt;/h2&gt;

&lt;p&gt;To keep this post focused on the big-picture transition, I spun the tactical tasks into GitHub issues for the 0.0.6 milestone. They’re queued up and ready for execution whenever the team circles back to implementation.&lt;/p&gt;

&lt;details&gt;
  &lt;summary&gt;List of follow-up issues for v0.0.6&lt;/summary&gt;

  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1081&quot;&gt;#1081&lt;/a&gt; Support custom stylesheets in the HTML generator&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1082&quot;&gt;#1082&lt;/a&gt; Format-agnostic Handlebars generator extension&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1083&quot;&gt;#1083&lt;/a&gt; Allow SettingsDB to describe a single source file&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1084&quot;&gt;#1084&lt;/a&gt; Guard against invalid source links&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1085&quot;&gt;#1085&lt;/a&gt; Complete tests for all using declaration forms&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1086&quot;&gt;#1086&lt;/a&gt; Explore a recursive project layout&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1087&quot;&gt;#1087&lt;/a&gt; Convert ConfigOptions.json into a schema file&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1088&quot;&gt;#1088&lt;/a&gt; Separate parent context and parent page&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1089&quot;&gt;#1089&lt;/a&gt; List deduction guides on the record page&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1090&quot;&gt;#1090&lt;/a&gt; Expand coverage for Friends&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1091&quot;&gt;#1091&lt;/a&gt; Remove dependency symbols after finalization&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1092&quot;&gt;#1092&lt;/a&gt; Review Bash Commands Parser&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1093&quot;&gt;#1093&lt;/a&gt; Review NameInfoVisitor&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1094&quot;&gt;#1094&lt;/a&gt; Improve overload-set documentation&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1095&quot;&gt;#1095&lt;/a&gt; CI uses the bootstrap script&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1096&quot;&gt;#1096&lt;/a&gt; Connect Antora extensions&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1097&quot;&gt;#1097&lt;/a&gt; Handlebars: optimize render state&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1098&quot;&gt;#1098&lt;/a&gt; Handlebars: explore template compilation&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1099&quot;&gt;#1099&lt;/a&gt; Handlebars: investigate incremental rendering&lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;h1 id=&quot;acknowledgments&quot;&gt;Acknowledgments&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Matheus Izvekov&lt;/strong&gt; and &lt;strong&gt;Krystian Stasiowski&lt;/strong&gt; kept the Clang integration moving. Their expertise cleared issues that would have stalled us. &lt;strong&gt;Gennaro Prota&lt;/strong&gt; and &lt;strong&gt;Fernando Pelliccioni&lt;/strong&gt; handled the maintenance load that kept the project on schedule. They took on the long tasks and followed them through.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Robert Beeston&lt;/strong&gt; and &lt;strong&gt;Julio Estrada&lt;/strong&gt; delivered the public face of MrDocs. The site we ship today exists because they turned open-ended goals into a complete experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vinnie Falco&lt;/strong&gt;, &lt;strong&gt;Louis Tatta&lt;/strong&gt;, and &lt;strong&gt;Sam Darwin&lt;/strong&gt; formed the backbone of my daily support. &lt;strong&gt;Vinnie&lt;/strong&gt; trusted the direction and backed the plan when decisions were difficult. &lt;strong&gt;Louis&lt;/strong&gt; made sure I had space to return after setbacks. &lt;strong&gt;Sam&lt;/strong&gt; kept the Alliance infrastructure running so the team always had what it needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ruben Perez&lt;/strong&gt;, &lt;strong&gt;Klemens Morgenstern&lt;/strong&gt;, &lt;strong&gt;Peter Dimov&lt;/strong&gt;, and &lt;strong&gt;Peter Turcan&lt;/strong&gt; offered honest feedback whenever we needed another perspective. Their observations sharpened the product and kept collaboration positive.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Joaquín M López Muñoz&lt;/strong&gt; and &lt;strong&gt;Arnaud Bachelier&lt;/strong&gt; guided me through the people side of leadership. Their advice turned complex situations into workable plans.&lt;/p&gt;

&lt;p&gt;Working alongside everyone listed here has been a privilege. Their contributions made this year possible.&lt;/p&gt;

&lt;h1 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h1&gt;

&lt;p&gt;The 2025 releases unified the generators, locked the configuration model, added sanitizers and coverage to CI, and introduced features that make the tool usable outside Boost.URL. The project is ready for new contributors because they can extend the code without rebuilding the basics, and downstream teams can run the CLI on large codebases and expect predictable output.&lt;/p&gt;

&lt;p&gt;While we delivered those releases, I learned that engineering progress depends on steady communication. Remote discussions often sound negative even when people agree on the goals, so I schedule short check-ins, add light signals like emojis, and keep space for conversations that are not task-driven. I also protect time to listen and ask for help when the workload gets heavy; if I lose that time, every deadline slips anyway.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Reflections&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Technical conversations start negative by default, so add clear signals when you agree or appreciate the work.&lt;/li&gt;
  &lt;li&gt;Assume terse feedback comes from the medium, not the person, and respond with patience.&lt;/li&gt;
  &lt;li&gt;Keep informal connection habits—buddy calls, breaks, or quick chats—to maintain trust.&lt;/li&gt;
  &lt;li&gt;Look after your own health and use outside support when needed.&lt;/li&gt;
  &lt;li&gt;Never allow the schedule to block real listening time; reset your calendar when that happens.&lt;/li&gt;
&lt;/ul&gt;</content><author><name></name></author><category term="alan" /><summary type="html">In 2024, the MrDocs project was a fragile prototype. It documented Boost.URL, but the CLI, configuration, and build process were unstable. Most users could not run it without direct help from the core group. That unstable baseline is the starting point for this report. In 2025, we moved the codebase to minimum-viable-product shape. I led the releases that stabilized the pipeline, aligned the configuration model, and documented the work in this report to support a smooth leadership transition. This post summarizes the 2024 gaps, the 2025 fixes, and the recommended directions for the next phase. System Overview 2024: Lessons from a Fragile Prototype 2025: From Prototype to MVP v0.0.3: Enforcing Consistency v0.0.4: Establishing the Foundation v0.0.5: Stabilization and Public Readiness 2026: Beyond the MVP Strategic Prioritization Reflection Metadata Extensions and Plugins Dependency Resilience Follow-up Issues for v0.0.6 Acknowledgments Conclusion System Overview MrDocs is a C++ documentation generator built on Clang. It parses source with full language fidelity, links declarations to their comments, and produces reference documentation that reflects real program structure—templates, constraints, and overloads included. Traditional tools often approximate the AST. MrDocs uses the AST directly, so documentation matches the code and modern C++ features render correctly. Unlike single-purpose generators, MrDocs separates the corpus (semantic data) from the presentation layer. Projects can choose among multiple output formats or extend the system entirely: supply custom Handlebars templates or script new generators using the plugin system. The corpus is represented in the generators as a rich JSON-like DOM. With schema files, MrDocs enables integration with build systems, documentation frameworks, or IDEs. 
From the user’s perspective, MrDocs behaves like a well-engineered CLI utility. It accepts configuration files, supports relative paths, accepts custom build options, and reports warnings in a controlled, compiler-like fashion. For C++ teams transitioning from Doxygen, the command structure is somewhat familiar, but the internal model is designed for reproducibility and correctness. Our goal is not just to render reference pages but to provide a reliable pipeline that any C++ project seeking modern documentation infrastructure can adopt. graph LR A[Source] --&amp;gt; B[Clang] B --&amp;gt; C[Corpus] C --&amp;gt; D{Plugin Layer} subgraph Generator E[HTML] F[AsciiDoc] G[XML] G2[...] end D --&amp;gt; E D --&amp;gt; F D --&amp;gt; G D --&amp;gt; G2 E --&amp;gt; H{Plugin Layer} H --&amp;gt; H2[Published Docs] F --&amp;gt; H G --&amp;gt; H G2 --&amp;gt; H C --&amp;gt; I[Schema Export] I --&amp;gt; J[IntegrationsIDEs &amp;amp; Build Systems] 2024: Lessons from a Fragile Prototype MrDocs entered 2024 as a proof-of-concept built for Boost.URL. It could document one or two curated codebases and produce asciidoc pages for Antora, but the workflow stopped there. The CLI exposed only the scenarios we needed. Configuration options lived in internal notes. The only dependable build path was the script sequence we used inside the Alliance. External users hit errors and missing options almost immediately. Stability was just as fragile: We had no sanitizers, no warnings-as-errors, and inconsistent CI hardware. The binaries crashed as soon as they saw unfamiliar code. The pipeline worked only when the input looked like Boost.URL. Point it at slightly different code patterns and it would segfault. Each feature landed as a custom patch, so logic duplicated across generators, and fixing one path broke another. Early releases: Release v0.0.1 captured that prototype: the early Handlebars engine, the HTML generator, the DOM refactor, and a list of APIs that only the core team could drive. 
v0.0.2 added structured configuration, automatic compile_commands.json, and better SFINAE handling, but the tool was still insider-only. Leadership transition: Late in 2024 I became project lead with two initial priorities: document the gaps and describe the true limits of the system. That set the 2025 baseline—a functional prototype that needed coherence, reproducibility, and trust before it could call itself a product. What 2025 later fixed were the weaknesses we saw here: configuration coherence, generator unification, schema validation, and basic options were all missing. The CLI, configuration files, and code drifted apart. Generators evolved independently with duplicated code and inconsistent naming. Editors had no schema to lean on. Extraction rules were ad hoc, which made the output incomplete. CI ran on an improvised matrix with no caching, sanitizers, or coverage, so regressions slipped through. That was the starting point. Summary: 2024 produced a working demo, not a reproducible system. Each success exposed another weak link and clarified what had to change in 2025. In short: 2024 left us with a working prototype but no coherent architecture. The system could demonstrate the concept, but not sustain or reproduce it. Every improvement exposed another weak link, and every success demanded more structure than the system was built to handle. It was a year of learning by exhaustion—and setting the stage for everything that came next. 
Key 2024 checkpoints align with the timeline below: %%{init: {&quot;theme&quot;: &quot;base&quot;, &quot;themeVariables&quot;: {&quot;primaryColor&quot;: &quot;#f7f9ff&quot;, &quot;primaryBorderColor&quot;: &quot;#9aa7e8&quot;, &quot;primaryTextColor&quot;: &quot;#1f2a44&quot;, &quot;lineColor&quot;: &quot;#b4bef2&quot;, &quot;secondaryColor&quot;: &quot;#fbf8ff&quot;, &quot;tertiaryColor&quot;: &quot;#ffffff&quot;, &quot;fontSize&quot;: &quot;14px&quot;}}}%% timeline title Prototypes 2024 Q1 : Boost.URL showcase 2024 Q2 : CLI gaps 2024 Q3 : Config + SFINAE fixes 2024 Q4 : Leadership transition 2025: From Prototype to MVP I started the year with a gap analysis that compared MrDocs to other C++ documentation pipelines. From that review I defined the minimum viable product and three priority tracks. Usability covered workflows and surface area that make adoption simple. Stability covered deterministic behavior, proper data structures, and CI discipline. Foundation covered configuration and data models that keep code, flags, and documentation aligned. The 2025 releases followed those tracks and turned MrDocs from a proof of concept into a tool that other teams can adopt. v0.0.3 — Consistency. We replaced ad-hoc behavior with a coherent system: a single source of truth for configuration kept CLI, config files, and docs in sync; generators and templates were unified so changes propagate by design; core semantic extraction (e.g., concepts, constraints, SFINAE) became reliable; and CI hardened around reproducible, tested outputs across HTML and Antora. v0.0.4 — Foundation. We introduced precise warning controls and a family of extract-* options to match established tooling, added a JSON Schema for configuration (enabling editor validation/autocomplete), delivered a robust reference system for documentation comments, brought initial inline formatting to generators, and simplified onboarding with a cross-platform bootstrap script. 
CI gained sanitizers, coverage checks, and modern compilers. v0.0.5 — Stabilization. We redesigned documentation metadata to support recursive inline elements, enforced safer polymorphic types with optional references and non-nullable patterns, and added user-facing improvements (sorting, automatic compilation database detection, quick reference indices, improved namespace/overload grouping, LLDB formatters). The website and documentation UI were refreshed for accessibility and responsiveness, new demos (including self-documentation) were published, and CI was further tightened with stricter policies and cross-platform bootstrap enhancements. Together, these releases executed the roadmap derived from the initial gap analysis: they aligned the moving parts, closed the most important capability gaps, and delivered a stable foundation that future work can extend without re-litigating fundamentals. %%{init: {&quot;theme&quot;: &quot;base&quot;, &quot;themeVariables&quot;: { &quot;primaryColor&quot;: &quot;#e4eee8&quot;, &quot;primaryBorderColor&quot;: &quot;#affbd6&quot;, &quot;primaryTextColor&quot;: &quot;#000000&quot;, &quot;lineColor&quot;: &quot;#baf9d9&quot;, &quot;secondaryColor&quot;: &quot;#f0eae4&quot;, &quot;tertiaryColor&quot;: &quot;#ebeaf4&quot;, &quot;fontSize&quot;: &quot;14px&quot; }}}%% mindmap root((MVP Evolution)) v0.0.3 Config sync Shared templates CI discipline v0.0.4 Warning controls Schema Bootstrap v0.0.5 Recursive docs Nav refresh Tooling polish v0.0.3: Enforcing Consistency v0.0.3 is where MrDocs stopped being a collection of one-off special cases and became a coherent system. Before this release, features landed in a single generator and drifted from the others; extraction handled only the narrowly requested pattern and crashed on nearby ones; and options were inconsistent—some hard-coded, some missing from CLI/config, with no mechanism to keep code, docs, and flags aligned. What changed: The v0.0.3 release fixes this foundation. 
We introduced a single source of truth for configuration options with TableGen-style metadata: docs, the config file, and the CLI always stay in sync. We added essential Doxygen-like options to make basic projects immediately usable and filled obvious gaps in symbols and doc comments. We implemented metadata extraction for core symbol types and their information—such as template constraints, concepts, and automatic SFINAE detection. We unified generators and templates so changes propagate by design, added tagfile support and “lightweight reflection” to documentation comments as lazy DOM objects and arrays, and extended Handlebars to power the new generators. These features allowed us to create the initial version of the website and ensure the documentation is always in sync. Build and testing discipline: CI, builds, and tests were hardened. All generators were now tested, LLVM caching systems improved, and we launched our first macOS release (important for teams working on Antora UI bundles). All of this long tail of performance, correctness, and safety work turned “works on my machine” into repeatable, adoptable output across HTML and Antora. v0.0.3 was the inflection point. For the first time, developers could depend on consistent configuration, shared templates, and predictable behavior across generators. It aligned internal tools, eliminated duplicated effort, and replaced trial-and-error debugging with reproducible builds. Every improvement in later versions built on this foundation. 
Categorized improvements for v0.0.3 Configuration Options: enforcing consistency, reproducible builds, and transparent reporting Enforce configuration options are in sync with the JSON source of truth (a1fb8ec6, 9daf71fe) File and symbol filters (1b67a847, b352ba22) Reference and symbol configuration (a3e4477f, 30eaabc9) Extraction options (41411db2, 1214d94b) Reporting options (f994e47e, 0dd9cb45) Configuration structure (c8662b35, dcf5beef, 4bd3ea42) CLI workflows (a2dc4c78, 3c0f90df) Warnings (4eab1933, 5e586f2b, 0e2dd713) SettingsDB (225b2d50, 51639e77) Deterministic configuration (b5449741) Global configuration documentation (ec3dbf5c) Generators: unification, new features, and early refactoring Antora/HTML generator consistency (e674182f, 82e86a6c, 9154b9c5) HTML generator improvements (a28cb2f7, 064ce55a, 5f6665d8) Documentation for generators (2382e8cf, 646a1e5b) Supporting new output formats (58a79f74, 271dde57, 9d9f6652) Handlebars improvements (ebf4dbeb, be76fc07) Generator tooling (00fc84cf, 6a69747d) Navigation helpers (fdccad42) DOM optimizations (9b41d2e4) Libraries and metadata: unification, fixes, and extraction enhancements Info node visitor and traversal improvements (be86a08d, 58ab5a5e) Metadata consistency (544ee37d, 62f8a2bd, bd9c704f) Template and concept support (4b0b4a71, 57cf74de, 92aa76a4) Symbol resolution and references (f64d4a06, aa9333d4) Documentation improvements (5d3f21c8) Website and Documentation: turning features into a showcase and simplifying workflows Create website (05400c3c, 8fba2020) Use the new features to create an HTML panel demos workflow (12ceadee, d38d3e1a, c46c4a91) Unify Antora author mode playbook (999ea4f3) Generator use cases and trade-offs (2307ca6a) Correctness and simplification (4d884f43, 55214d72, b078bead, d8b7fcf4, 96484836, 62f361fb) Build, Testing, and Releases: strengthening CI, improving LLVM caching workflow, and stabilizing releases Templates are tested with golden tests (2bc09e65, 9eece731) LLVM 
caches and runners improvements (4c14e875, bd54dc7c, 3d92071a, 8537d3db, f3b33a47, 5982cc7e, 93487669) Enable macOS workflow (390159e3) Stabilize artifacts (5e0f628e, d1c3566e, 62736e45) Tests support individual file inputs, which improved local tests considerably (75b1bc52) Performance, correctness, and safety (a820ad79, 43e5f252, a382820f, fbcb5b2d, 6a2290cb, 49f4125f) v0.0.4: Establishing the Foundation v0.0.4 completed the core capabilities we need for production. With the moving parts aligned in v0.0.3, this release focused on the fundamentals. It added consistent warning options, extraction controls that match established tools, schema support for IDE auto-completion, a complete reference system for doc comments, and initial inline formatting in the generators. The bootstrap script became a one-step path to a working build. We also hardened the pipeline with modern CI practices—sanitizers, coverage integration, and standardized presets. Categorized improvements for v0.0.4 Configuration and Extraction: structured configuration, extraction controls, and schema validation Configuration schema (d9517e1d, 5f846c1c, ffa0d1a6) Extraction filters (0a60bb98, a7d7714d) Reference configuration (d18a8ab3) Documentation metadata (6676c1e8) Warnings and Reporting: consistent governance with CLI parity Warning controls (2a29f0a0, 6d3c1f47) Extract options (extract-{public,protected,private,inline}) (aa5a6be3) CLI defaults (d85439c3) Generators: Javadoc, inline formatting, and reference improvements Documentation reference system (4b430f9b, 73489e2b) Javadoc metadata (8dd3af67, f7e59d4c) Inline formatting (5c7490a3, d1d80745) XML generator alignment (9867e0d2, 0f890f2c) Build and CI: sanitizers, coverage, and reproducible builds Sanitizer integration (6257c747, 88954d7f) Coverage reporting (bf195759) Relocatable build (std::format) (7b871032) Bootstrap modernization (3eec9a48, 71afb87b, 524e7923) v0.0.5: Stabilization and Public Readiness v0.0.5 marked the transition toward 
a sustained development model and prepared the project for handoff. This release focused on presentation, polish, and reliability—ensuring that MrDocs was ready not only for internal use but for public visibility. During this period, we expanded the set of public demos, refined the website and documentation, and stabilized the infrastructure to support a growing user base. The goal was to leave the project in a state where it could continue evolving smoothly, with a stable core, clear development practices, and a professional public face. Community and visibility: Beyond the commits, this release reflected broader activity around the project. We generated and published several new demos, many of which revealed integration issues that were subsequently fixed. As more external users began adopting MrDocs, the feedback loop accelerated: bug reports, feature requests, and real-world edge cases guided much of the work. New contributors joined the team, collaboration became more distributed, and visibility increased. Around the same time, I introduced MrDocs to developers at CppCon 2025, where it received strong feedback from library authors testing it on their own projects. The tool was beginning to gain recognition as a viable, modern alternative to Doxygen. Technical progress: This release focused on correctness. We redesigned the documentation comment data structures to support recursive inline elements and render Markdown and HTML-style formatting correctly. We moved to non-nullable polymorphic types and optional references so that invariants fail at compile time rather than at runtime. User-facing updates included new sorting options, automatic compilation database detection, a quick reference index, broader namespace and overload grouping, and LLDB formatters for Clang and MrDocs symbols. 
We refreshed the website and documentation UI for accessibility and responsiveness, added new demos (including the MrDocs self-reference), and tightened CI with more sanitizers, stricter warning policies, and cross-platform bootstrap improvements. Together, these improvements completed the transition from a developing prototype to a dependable product. v0.0.5 established a stable foundation for others to build on—polished, documented, and resilient—so future releases could focus on extending capabilities rather than consolidating them. With this release, the project reached a point where the handoff could occur naturally, closing one chapter and opening another. Categorized improvements for v0.0.5 Metadata: documentation inlines and safety improvements Recursive documentation inlines (51e2b655) Consistent sorting options for members and namespaces (sort-members-by, sort-namespace-members-by) (f0ba28dd, a0f694dc) Non-nullable polymorphic types and optional references (c9f9ba13, 8ef3ffaf, bd3e1217, afa558a6, 6ba8ef6b) Consistent metadata class family hierarchy pattern (6d495497) MrDocsSettings includes automatic compilation database support (9afededb, a1f289de) Quick reference index (68e029c1, 940c33f4) Namespace/using/overloads grouping includes using declarations and overloads as shadows (69e1c3bc, d722b7d0, 2b59269c) Conditional explicit clauses in templated methods (2bff4e2f) Destructor overloads supported in class templates (336ad319) Using declarations include all shadow variants (88a1cebf, 9253fd8f, a7d5cf6a) show-enum-constants option (07b69e1c) Custom LLDB formatters for Clang and MrDocs symbols (069bd8f4, f83eca17, 1b39fdd7, aefc53c7) Performance, correctness, and safety (d1788049, 3bd94cff, 8a811560, 3ff37448, ad1e7baa, b10b8aa3, 482c0be8, d66da796, ec8daa11, 5234b67c, 5e879b10, 35e14c93, d5a28a89, 6878c199, 21ce3e74, 2da2081b, b528ae11) Website and Documentation: new demos and a new website New demos (cfa9eb7d, 1b930b86, c18be83e, 177fae4a, 33275050) 
Website and documentation refresh (35e14c93, a6437742) Self-documentation (f2a5f77e) Antora enhancements (5ed0f48f) Build, Testing, and Releases: improvements and hardening CI Toolchain and CI hardening (6257c747, 88954d7f, bf195759, ba0dcfd3) Bootstrap improvements (3eec9a48, 71afb87b, 524e7923, 4b79ef41, 7d27204e, 988e9ebc, 94a5b799, be7332cf, 4d705c96, f48bbd2f, f9363461) Performance, correctness, and safety (5aa714b2, 469f41ee, 629f1848, 2f0dd8c1, acf7c107) 2026: Beyond the MVP MrDocs now ships a working MVP, but significant foundational work remains. The priority framework is the same: start with gap analysis, shape an MVP (or now just a viable product), and rank follow-on work against that baseline. In 2025 we invested in presentation earlier than infrastructure. That inversion still raises costs: each foundational change forces rework across user-facing pieces. I do not know how the leadership model will evolve in 2026. The team might keep a single coordinator or move to shared stewardship. Regardless, the project only succeeds if we continue investing in foundational capabilities. The steps below outline the recommendations I believe will help keep MrDocs sustainable over the long term. %%{init: {&quot;theme&quot;: &quot;base&quot;, &quot;themeVariables&quot;: { &quot;primaryColor&quot;: &quot;#f2eadf&quot;, &quot;primaryBorderColor&quot;: &quot;#ffe8c6&quot;, &quot;primaryTextColor&quot;: &quot;#000000&quot;, &quot;lineColor&quot;: &quot;#ffe8c8&quot;, &quot;secondaryColor&quot;: &quot;#e8ebf3&quot;, &quot;tertiaryColor&quot;: &quot;#eceaf4&quot;, &quot;fontSize&quot;: &quot;14px&quot; }}}%% mindmap root((2026 Priorities)) Reflection Describe symbols Shared walkers Metadata Recursive docs Stable names Typed expressions Extensions Script helpers Plugin ABI Dependencies Curated toolchain Opt-in stubs Community Integration demos Outreach cadence Strategic Prioritization Aligning priorities is itself the highest priority. 
At the start of my tenure as project lead we followed a strict sequence—gap analysis, then an MVP, then a set of priorities—but that model exposed limitations once work began to land. The issue tracker does not reflect how priorities relate to each other, and as individual tickets close the priority stack does not adjust automatically. The project’s complexity now amplifies the risk: without a clear view of dependencies we can assign a high-value engineer to a task that drags several teammates into the same bottleneck, resulting in net-negative progress. Defining priorities therefore includes understanding the team’s skills, mapping how they collaborate, and making sure no one becomes a sink that blocks everyone else. Alignment across roles remains essential so the plan reflects the people who actually execute it. The tooling already exists to put this into practice. GitHub now lets us mark issues as blocked by or blocking others and to model parent/child relationships. We can use those relationships to reorganize the priorities programmatically. Once the relationships are encoded, priorities gain semantic meaning because we can explain why a small ticket matters in the larger story. Priorities become the byproduct of higher-level goals— narratives about the product—rather than a short-term static wish list of individual features. We also need to strengthen the operational tools that keep the team coordinated. Coverage in CI is still far below our other C++ Alliance projects, and the gap shows up as crashes whenever a new library explores an untested path in the codebase. Improving coverage is a priority in its own right. We can pair that effort with automation and analysis tools like ReviewDog to accelerate code-review feedback, Danger.js to enforce pull-request policies, CodeClimate or similar services for static analysis, and clang-tidy checks to catch issues earlier. 
Finally, we can invite other collaborators to revisit the gap analysis and MVP, including C++ Alliance colleagues who specialize in marketing. Their perspective will help us assign priorities that reflect both technical dependencies and the project’s broader positioning. Reflection The corpus keeps drifting out of sync because every important path in MrDocs duplicates representation by hand. Almost every subsystem reflects data from one format to another, and almost every internal operation traverses those structures. Each time we adjust a field we have to edit dozens of call sites, and even small mistakes create inconsistent state—different copies of the “truth” that evolve independently. Reflection eliminates this churn. If we can describe the corpus once and let the code iterate over those descriptions, the boilerplate disappears, the traversals remain correct, and we stop fighting the same battle. A lightweight option would be to describe the corpus in JSON the way we treat configuration, but the volume of metadata in the AST makes that impractical. Instead, we lean on compile-time reflection utilities such as Boost.Describe and Boost.mp11. With those libraries we can convert the corpus to any representation, and each generator—including future binary or JSON targets—sees the same schema automatically. MrDocs can even emit the schema that powers each generator, keeping the schema, DOM, and documentation in sync. This approach also fixes the long-standing lag in the XML generator, where updates have historically been manual and error-prone. 
The following sequence diagram illustrates how reflection consolidates data flow without duplicating logic: sequenceDiagram participant AST as Clang AST participant Corpus as Typed Corpus participant Traits as Reflect Traits participant DOM as Corpus DOM participant Generators as Generators participant Clients as Integrations AST-&amp;gt;&amp;gt;Corpus: Extract symbols Corpus-&amp;gt;&amp;gt;Traits: Publish descriptors Traits-&amp;gt;&amp;gt;DOM: Build type-erased nodes DOM-&amp;gt;&amp;gt;Generators: Supply normalized schema Generators-&amp;gt;&amp;gt;Clients: Deliver outputs Clients-&amp;gt;&amp;gt;Generators: Provide feedback Generators-&amp;gt;&amp;gt;Traits: Request updates Process: We can start by describing the Symbols, Javadoc, and related classes, shipping each refactor as a dedicated PR so reviews stay contained. Each description removes custom specializations, reverts to = default where possible, and replaces old logic with static asserts that enforce invariants. We generalize the main merge logic first, then update callers such as the AST visitor that walks RecordTranche, ensuring the comments data structure matches the new descriptions. A MRDOCS_DESCRIBE_DERIVED helper can enumerate derived classes so every visit routine becomes generic. Once the C++ side is described, we rebuild the lazy DOM objects on top of Describe so their types mirror the DOM layout directly. Use cases: Redundant non-member functions like tag_invoke, operator&amp;lt;=&amp;gt;, toString, and merge collapse into shared implementations that use traits unless real customization is required. New generators—binary, JSON, or otherwise—drop in with minimal code because the schema and traversal logic already exist. The XML generator stops maintaining a private representation and simply reads the described elements. We can finally standardize naming conventions (kebab-case or camelCase) because the schema enforces them. 
Generating the Relax NG Compact file becomes just another output produced from the same description. A metadata walker can then discover auxiliary objects and emit DOM documentation automatically. As a side effect of integrating Boost.mp11, we can extend the tag_invoke context protocol with tuple-based helpers for mrdocs::FromValue, further narrowing the gap between concrete and DOM objects. Metadata MrDocs still carries metadata gaps that are too large to ignore. The subsections below highlight the three extraction areas that demand sustained effort; each of them blocks the rest of the system from staying consistent. Recursive blocks and inlines. Release 0.0.5 introduced the data structures for recursive Javadoc elements, but we still do not parse all of those structures. The fix is straightforward in concept—extend the CommonMark-based parser so every block and inline variant becomes a first-class node—but the implementation is long because there are many element types. We can ship this incrementally by opening issues and sub-issues, tackling one structure per PR, and starting with block elements before moving to inlines. The existing post-process documentation finalizer already contains the mechanics; we just need to wire each rule into the new documentation nodes. Legible names. The current name generator appends hash fragments to differentiate symbols lazily, which makes references unstable and awkward. We need a stable allocator that remembers which symbols claimed which names. The highest-priority symbol should receive the base name, and suffixes should cascade to less critical overloads so the visible entries stay predictable. Moving the generator into the extraction phase and storing the assignments there ensures anchors remain stable, lets us update artifacts such as the Boost.URL tagfile, and produces names that actually read well. Populate expressions. Whenever the extractor fails to recognize an expression, it falls back to the raw source string. 
That shortcut prevents us from applying the usual transformations, especially inside requires-expressions where implementation-defined symbols appear. We should introduce typed representations for the constructs we already understand and continue to store strings for the expressions we have not modeled yet. As coverage grows, more expressions flow through the structured pipeline, and the remaining string-based nodes shrink to the truly unknown cases. Extensions and Plugins Extensions and plugins aim at the same outcome—letting projects customize MrDocs—but they operate at different layers. Extensions run inside the application, usually through interpreters we bundle. We already ship Lua and Duktape, yet today they only power a handful of Handlebars helpers. The plan is to widen that surface: add more interpreters where it makes sense, extend helper support so extensions can participate in escaping and formatting, and give extensions the ability to consume the entire corpus. With that access, an extension can list every symbol, emit metadata in formats we do not yet support, or transform the corpus before it reaches a native generator. The same mechanism enables quality-of-life utilities, such as a generator extension that checks whether a library’s public API changed according to a policy defined in code. Plugins, by contrast, are compiled artifacts. They unlock similar customization goals, but their ABI must stay stable, and platform differences mean a plugin built on one system will not run on another. To keep the surface manageable we should expose a narrow wrapper: pass plugins a set of DOM proxies so they never depend on the underlying Info classes, use traits or versioned interfaces to handle incompatibilities, and plan the API carefully before release. Dependency Resilience Working with dependent libraries is still the most fragile part of the MrDocs workflow. 
Environments drift, transitive dependencies change without notice, and heavyweight projects force us to install toolchains we do not actually need. In Boost.URL alone we watch upstream Boost libraries evolve every few weeks; sometimes the code truly breaks, but just as often a new release exercises an untested path in MrDocs and triggers a crash because our coverage is still thin. Other ecosystems push the cost even higher: documenting a library that depends on LLVM can turn a three-second render into an hours-long process because the transitive LLVM headers MrDocs needs are generated at build time, so we must compile and install LLVM merely to obtain include files. CI environments regularly fail for the same reason. We already experimented with mitigation strategies and should refine them rather than abandon the ideas. Shipping a curated standard library with MrDocs removes one entire category of instability. The option will soon be disabled by default, but users can still enable it or even combine it with the system library when reproducibility matters more than access to system libraries. This mirrors how Clang ships libc++; it does not allow invalid code, it simply guarantees a known baseline. On top of that, we have preliminary support for user-defined stubs. Configuration files can provide short descriptions of expected symbols from hard-to-build dependencies, and MrDocs can inject those during extraction. For predictable patterns we can auto-generate stubs when the user opts in, synthesizing symbols rather than failing immediately. None of this accepts invalid code—the compiler still diagnoses real errors—but it shields projects from breakage when a transitive dependency tweaks implementation details or when generated headers are unavailable. The features remain optional, so teams can disable synthesis to debug the underlying issue and still benefit from the faster path when schedules are tight. 
Even if the project moves in another direction we should document the proposal and remove the existing stub hooks deliberately rather than letting them linger undocumented. The payoffs are clear. Boost libraries could generate documentation without cloning the entire super-project, relying on SettingsDB to produce a compilation database and skipping CMake entirely. MrDocs itself could publish reference docs without building LLVM because the required symbols would come from stubs. Releases would stop breaking every time a transitive dependency changes, and developers would regain hours currently spent firefighting. These are the stability and reproducibility gains we need if we want MrDocs to be the default tooling for large C++ ecosystems. Follow-up Issues for v0.0.6 To keep this post focused on the big-picture transition, I spun the tactical tasks into GitHub issues for the 0.0.6 milestone. They’re queued up and ready for execution whenever the team circles back to implementation. List of follow-up issues for v0.0.6 #1081 Support custom stylesheets in the HTML generator #1082 Format-agnostic Handlebars generator extension #1083 Allow SettingsDB to describe a single source file #1084 Guard against invalid source links #1085 Complete tests for all using declaration forms #1086 Explore a recursive project layout #1087 Convert ConfigOptions.json into a schema file #1088 Separate parent context and parent page #1089 List deduction guides on the record page #1090 Expand coverage for Friends #1091 Remove dependency symbols after finalization #1092 Review Bash Commands Parser #1093 Review NameInfoVisitor #1094 Improve overload-set documentation #1095 CI uses the bootstrap script #1096 Connect Antora extensions #1097 Handlebars: optimize render state #1098 Handlebars: explore template compilation #1099 Handlebars: investigate incremental rendering Acknowledgments Matheus Izvekov and Krystian Stasiowski kept the Clang integration moving. 
Their expertise cleared issues that would have stalled us. Gennaro Prota and Fernando Pelliccioni handled the maintenance load that kept the project on schedule. They took on the long tasks and followed them through. Robert Beeston and Julio Estrada delivered the public face of MrDocs. The site we ship today exists because they turned open-ended goals into a complete experience. Vinnie Falco, Louis Tatta, and Sam Darwin formed the backbone of my daily support. Vinnie trusted the direction and backed the plan when decisions were difficult. Louis made sure I had space to return after setbacks. Sam kept the Alliance infrastructure running so the team always had what it needed. Ruben Perez, Klemens Morgenstern, Peter Dimov, and Peter Turcan offered honest feedback whenever we needed another perspective. Their observations sharpened the product and kept collaboration positive. Joaquín M López Muñoz and Arnaud Bachelier guided me through the people side of leadership. Their advice turned complex situations into workable plans. Working alongside everyone listed here has been a privilege. Their contributions made this year possible. Conclusion The 2025 releases unified the generators, locked the configuration model, added sanitizers and coverage to CI, and introduced features that make the tool usable outside Boost.URL. The project is ready for new contributors because they can extend the code without rebuilding the basics, and downstream teams can run the CLI on large codebases and expect predictable output. While we delivered those releases, I learned that engineering progress depends on steady communication. Remote discussions often sound negative even when people agree on the goals, so I schedule short check-ins, add light signals like emojis, and keep space for conversations that are not task-driven. I also protect time to listen and ask for help when the workload gets heavy; if I lose that time, every deadline slips anyway. 
Final Reflections Technical conversations start negative by default, so add clear signals when you agree or appreciate the work. Assume terse feedback comes from the medium, not the person, and respond with patience. Keep informal connection habits—buddy calls, breaks, or quick chats—to maintain trust. Look after your own health and use outside support when needed. Never allow the schedule to block real listening time; reset your calendar when that happens.</summary></entry><entry><title type="html">Making the Clang AST Leaner and Faster</title><link href="http://cppalliance.org/mizvekov,/clang/2025/10/20/Making-Clang-AST-Leaner-Faster.html" rel="alternate" type="text/html" title="Making the Clang AST Leaner and Faster" /><published>2025-10-20T00:00:00+00:00</published><updated>2025-10-20T00:00:00+00:00</updated><id>http://cppalliance.org/mizvekov,/clang/2025/10/20/Making-Clang-AST-Leaner-Faster</id><content type="html" xml:base="http://cppalliance.org/mizvekov,/clang/2025/10/20/Making-Clang-AST-Leaner-Faster.html">&lt;p&gt;Modern C++ codebases — from browsers to GPU frameworks — rely heavily on templates, and that often means &lt;em&gt;massive&lt;/em&gt; abstract syntax trees. Even small inefficiencies in Clang’s AST representation can add up to noticeable compile-time overhead.&lt;/p&gt;

&lt;p&gt;This post walks through a set of structural improvements I recently made to Clang’s AST that make type representation smaller, simpler, and faster to create — leading to measurable build-time gains in real-world projects.&lt;/p&gt;

&lt;hr /&gt;

&lt;p&gt;A couple of months ago, I landed &lt;a href=&quot;https://github.com/llvm/llvm-project/pull/147835&quot;&gt;a large patch&lt;/a&gt; in Clang that brought substantial compile-time improvements for heavily templated C++ code.&lt;/p&gt;

&lt;p&gt;For example, in &lt;a href=&quot;https://github.com/NVIDIA/stdexec&quot;&gt;stdexec&lt;/a&gt; — the reference implementation of the &lt;code&gt;std::execution&lt;/code&gt; &lt;a href=&quot;http://wg21.link/p2300&quot;&gt;feature slated for C++26&lt;/a&gt; — the slowest test (&lt;a href=&quot;https://github.com/NVIDIA/stdexec/blob/main/test/stdexec/algos/adaptors/test_on2.cpp&quot;&gt;&lt;code&gt;test_on2.cpp&lt;/code&gt;&lt;/a&gt;) saw a &lt;strong&gt;7% reduction in build time&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Also the &lt;a href=&quot;https://www.chromium.org/Home/&quot;&gt;Chromium&lt;/a&gt; build showed a &lt;strong&gt;5% improvement&lt;/strong&gt; (&lt;a href=&quot;https://github.com/llvm/llvm-project/pull/147835#issuecomment-3278893447&quot;&gt;source&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;At a high level, the patch makes the Clang AST &lt;em&gt;leaner&lt;/em&gt;: it reduces the memory footprint of type representations and lowers the cost of creating and uniquing them.&lt;/p&gt;

&lt;p&gt;These improvements will ship with &lt;strong&gt;Clang 22&lt;/strong&gt;, expected in the next few months.&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;how-elaboration-and-qualified-names-used-to-work&quot;&gt;How elaboration and qualified names used to work&lt;/h2&gt;

&lt;p&gt;Consider this simple snippet:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;namespace NS {
  struct A {};
}
using T = struct NS::A;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The type of &lt;code&gt;T&lt;/code&gt; (&lt;code&gt;struct NS::A&lt;/code&gt;) carries two pieces of information:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;It’s &lt;em&gt;elaborated&lt;/em&gt; — the &lt;code&gt;struct&lt;/code&gt; keyword appears.&lt;/li&gt;
  &lt;li&gt;It’s &lt;em&gt;qualified&lt;/em&gt; — &lt;code&gt;NS::&lt;/code&gt; acts as a &lt;a href=&quot;https://eel.is/c++draft/expr.prim.id.qual#:nested-name-specifier&quot;&gt;&lt;em&gt;nested-name-specifier&lt;/em&gt;&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here’s how the &lt;a href=&quot;https://compiler-explorer.com/z/WEWc4817x&quot;&gt;AST dump&lt;/a&gt; looked before this patch:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;ElaboratedType 'struct NS::A' sugar
`-RecordType 'test::NS::A'
  `-CXXRecord 'A'
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The &lt;code&gt;RecordType&lt;/code&gt; represents a direct reference to the previously declared &lt;code&gt;struct A&lt;/code&gt; — a kind of &lt;em&gt;canonical&lt;/em&gt; view of the type, stripped of syntactic details like &lt;code&gt;struct&lt;/code&gt; or namespace qualifiers.&lt;/p&gt;

&lt;p&gt;Those syntactic details were stored separately in an &lt;code&gt;ElaboratedType&lt;/code&gt; node that wrapped the &lt;code&gt;RecordType&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Interestingly, an &lt;code&gt;ElaboratedType&lt;/code&gt; node existed even when no elaboration or qualification appeared in the source (&lt;a href=&quot;https://compiler-explorer.com/z/ncW5bzWrc&quot;&gt;example&lt;/a&gt;). This was needed to distinguish between an explicitly unqualified type and one that lost its qualifiers through template substitution.&lt;/p&gt;

&lt;p&gt;However, this design was expensive: every &lt;code&gt;ElaboratedType&lt;/code&gt; node consumed &lt;strong&gt;48 bytes&lt;/strong&gt;, and creating one required extra work to uniquify it — an important step for Clang’s fast type comparisons.&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;a-more-compact-representation&quot;&gt;A more compact representation&lt;/h2&gt;

&lt;p&gt;The new approach removes &lt;code&gt;ElaboratedType&lt;/code&gt; entirely. Instead, elaboration and qualifiers are now stored &lt;strong&gt;directly inside &lt;code&gt;RecordType&lt;/code&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;a href=&quot;https://compiler-explorer.com/z/asz5q5YGj&quot;&gt;new AST dump&lt;/a&gt; for the same example looks like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;RecordType 'struct NS::A' struct
|-NestedNameSpecifier Namespace 'NS'
`-CXXRecord 'A'
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The &lt;code&gt;struct&lt;/code&gt; elaboration now fits into previously unused bits within &lt;code&gt;RecordType&lt;/code&gt;, while the qualifier is &lt;em&gt;tail-allocated&lt;/em&gt; when present — making the node variably sized.&lt;/p&gt;

&lt;p&gt;This change both shrinks the memory footprint and eliminates one level of indirection when traversing the AST.&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;representing-nestednamespecifier&quot;&gt;Representing &lt;code&gt;NestedNameSpecifier&lt;/code&gt;&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;NestedNameSpecifier&lt;/code&gt; is Clang’s internal representation for name qualifiers.&lt;/p&gt;

&lt;p&gt;Before this patch, it was represented by a pointer (&lt;code&gt;NestedNameSpecifier*&lt;/code&gt;) to a uniqued structure that could describe:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The global namespace (&lt;code&gt;::&lt;/code&gt;)&lt;/li&gt;
  &lt;li&gt;A named namespace (including aliases)&lt;/li&gt;
  &lt;li&gt;A type&lt;/li&gt;
  &lt;li&gt;An identifier naming an unknown entity&lt;/li&gt;
  &lt;li&gt;A &lt;code&gt;__super&lt;/code&gt; reference (Microsoft extension)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For all but cases (1) and (5), each &lt;code&gt;NestedNameSpecifier&lt;/code&gt; also held a &lt;em&gt;prefix&lt;/em&gt; — the qualifier to its left.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;Namespace::Class::NestedClassTemplate&amp;lt;T&amp;gt;::XX
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This would be stored as a linked list:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;[id: XX] -&amp;gt; [type: NestedClassTemplate&amp;lt;T&amp;gt;] -&amp;gt; [type: Class] -&amp;gt; [namespace: Namespace]
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Internally, that meant &lt;strong&gt;seven allocations&lt;/strong&gt; totaling around &lt;strong&gt;160 bytes&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;code&gt;NestedNameSpecifier&lt;/code&gt; (identifier) – 16 bytes&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;NestedNameSpecifier&lt;/code&gt; (type) – 16 bytes&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;TemplateSpecializationType&lt;/code&gt; – 48 bytes&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;QualifiedTemplateName&lt;/code&gt; – 16 bytes&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;NestedNameSpecifier&lt;/code&gt; (type) – 16 bytes&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;RecordType&lt;/code&gt; – 32 bytes&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;NestedNameSpecifier&lt;/code&gt; (namespace) – 16 bytes&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The real problem wasn’t just size — it was the &lt;em&gt;uniquing cost&lt;/em&gt;: every prospective node had to be looked up in a hash table to check for a pre-existing instance.&lt;/p&gt;

&lt;p&gt;To make matters worse, &lt;code&gt;ElaboratedType&lt;/code&gt; nodes sometimes leaked into these chains, which wasn’t supposed to happen and led to &lt;a href=&quot;https://github.com/llvm/llvm-project/issues/43179&quot;&gt;several&lt;/a&gt; &lt;a href=&quot;https://github.com/llvm/llvm-project/issues/68670&quot;&gt;long-standing&lt;/a&gt; &lt;a href=&quot;https://github.com/llvm/llvm-project/issues/92757&quot;&gt;bugs&lt;/a&gt;.&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;a-new-smarter-nestednamespecifier&quot;&gt;A new, smarter &lt;code&gt;NestedNameSpecifier&lt;/code&gt;&lt;/h2&gt;

&lt;p&gt;After this patch, &lt;code&gt;NestedNameSpecifier&lt;/code&gt; becomes a &lt;strong&gt;compact, tagged pointer&lt;/strong&gt; — just one machine word wide.&lt;/p&gt;

&lt;p&gt;The pointer uses 8-byte alignment, leaving three spare bits. Two bits are used for kind discrimination, and one remains available for arbitrary use.&lt;/p&gt;

&lt;p&gt;When non-null, the tag bits encode:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;A type&lt;/li&gt;
  &lt;li&gt;A declaration (either a &lt;code&gt;__super&lt;/code&gt; class or a namespace)&lt;/li&gt;
  &lt;li&gt;A namespace prefixed by the global scope (&lt;code&gt;::Namespace&lt;/code&gt;)&lt;/li&gt;
  &lt;li&gt;A special object combining a namespace with its prefix&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When null, the tag bits instead encode:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;An empty nested name (the terminator)&lt;/li&gt;
  &lt;li&gt;The global name&lt;/li&gt;
  &lt;li&gt;An invalid/tombstone entry (for hash tables)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Other changes include:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The “unknown identifier” case is now represented by a &lt;code&gt;DependentNameType&lt;/code&gt;.&lt;/li&gt;
  &lt;li&gt;Type prefixes are handled directly in the type hierarchy.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Revisiting the earlier example, after the patch its AST dump becomes:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;DependentNameType 'Namespace::Class::NestedClassTemplate&amp;lt;T&amp;gt;::XX' dependent
`-NestedNameSpecifier TemplateSpecializationType 'Namespace::Class::NestedClassTemplate&amp;lt;T&amp;gt;' dependent
  `-name: 'Namespace::Class::NestedClassTemplate' qualified
    |-NestedNameSpecifier RecordType 'Namespace::Class'
    | |-NestedNameSpecifier Namespace 'Namespace'
    | `-CXXRecord 'Class'
    `-ClassTemplate NestedClassTemplate
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This representation now requires only &lt;strong&gt;four allocations (152 bytes total):&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;code&gt;DependentNameType&lt;/code&gt; – 48 bytes&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;TemplateSpecializationType&lt;/code&gt; – 48 bytes&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;QualifiedTemplateName&lt;/code&gt; – 16 bytes&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;RecordType&lt;/code&gt; – 40 bytes&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That’s almost half the number of nodes.&lt;/p&gt;

&lt;p&gt;While &lt;code&gt;DependentNameType&lt;/code&gt; is larger than the previous 16-byte “identifier” node, the additional space isn’t wasted — it holds cached answers to common queries such as “does this type reference a template parameter?” or “what is its canonical form?”.&lt;/p&gt;

&lt;p&gt;These caches make those operations significantly cheaper, further improving performance.&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;wrapping-up&quot;&gt;Wrapping up&lt;/h2&gt;

&lt;p&gt;There’s more in the patch than what I’ve covered here, including:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code&gt;RecordType&lt;/code&gt; now points directly to the declaration found at creation, enriching the AST without measurable overhead.&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;RecordType&lt;/code&gt; nodes are now created lazily.&lt;/li&gt;
  &lt;li&gt;The redesigned &lt;code&gt;NestedNameSpecifier&lt;/code&gt; simplified several template instantiation transforms.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each of these could warrant its own write-up, but even this high-level overview shows how careful structural changes in the AST can lead to tangible compile-time wins.&lt;/p&gt;

&lt;p&gt;I hope you found this deep dive into Clang’s internals interesting — and that it gives a glimpse of the kind of small, structural optimizations that add up to real performance improvements in large C++ builds.&lt;/p&gt;</content><author><name></name></author><category term="mizvekov," /><category term="clang" /><summary type="html">Modern C++ codebases — from browsers to GPU frameworks — rely heavily on templates, and that often means massive abstract syntax trees. Even small inefficiencies in Clang’s AST representation can add up to noticeable compile-time overhead. This post walks through a set of structural improvements I recently made to Clang’s AST that make type representation smaller, simpler, and faster to create — leading to measurable build-time gains in real-world projects. A couple of months ago, I landed a large patch in Clang that brought substantial compile-time improvements for heavily templated C++ code. For example, in stdexec — the reference implementation of the std::execution feature slated for C++26 — the slowest test (test_on2.cpp) saw a 7% reduction in build time. Also the Chromium build showed a 5% improvement (source). At a high level, the patch makes the Clang AST leaner: it reduces the memory footprint of type representations and lowers the cost of creating and uniquing them. These improvements will ship with Clang 22, expected in the next few months. How elaboration and qualified names used to work Consider this simple snippet: namespace NS { struct A {}; } using T = struct NS::A; The type of T (struct NS::A) carries two pieces of information: It’s elaborated — the struct keyword appears. It’s qualified — NS:: acts as a nested-name-specifier. 
Here’s how the AST dump looked before this patch: ElaboratedType 'struct NS::A' sugar `-RecordType 'test::NS::A' `-CXXRecord 'A' The RecordType represents a direct reference to the previously declared struct A — a kind of canonical view of the type, stripped of syntactic details like struct or namespace qualifiers. Those syntactic details were stored separately in an ElaboratedType node that wrapped the RecordType. Interestingly, an ElaboratedType node existed even when no elaboration or qualification appeared in the source (example). This was needed to distinguish between an explicitly unqualified type and one that lost its qualifiers through template substitution. However, this design was expensive: every ElaboratedType node consumed 48 bytes, and creating one required extra work to uniquify it — an important step for Clang’s fast type comparisons. A more compact representation The new approach removes ElaboratedType entirely. Instead, elaboration and qualifiers are now stored directly inside RecordType. The new AST dump for the same example looks like this: RecordType 'struct NS::A' struct |-NestedNameSpecifier Namespace 'NS' `-CXXRecord 'A' The struct elaboration now fits into previously unused bits within RecordType, while the qualifier is tail-allocated when present — making the node variably sized. This change both shrinks the memory footprint and eliminates one level of indirection when traversing the AST. Representing NestedNameSpecifier NestedNameSpecifier is Clang’s internal representation for name qualifiers. Before this patch, it was represented by a pointer (NestedNameSpecifier*) to a uniqued structure that could describe: The global namespace (::) A named namespace (including aliases) A type An identifier naming an unknown entity A __super reference (Microsoft extension) For all but cases (1) and (5), each NestedNameSpecifier also held a prefix — the qualifier to its left. 
For example: Namespace::Class::NestedClassTemplate&amp;lt;T&amp;gt;::XX This would be stored as a linked list: [id: XX] -&amp;gt; [type: NestedClassTemplate&amp;lt;T&amp;gt;] -&amp;gt; [type: Class] -&amp;gt; [namespace: Namespace] Internally, that meant seven allocations totaling around 160 bytes: NestedNameSpecifier (identifier) – 16 bytes NestedNameSpecifier (type) – 16 bytes TemplateSpecializationType – 48 bytes QualifiedTemplateName – 16 bytes NestedNameSpecifier (type) – 16 bytes RecordType – 32 bytes NestedNameSpecifier (namespace) – 16 bytes The real problem wasn’t just size — it was the uniquing cost. Every prospective node has to be looked up in a hash table for a pre-existing instance. To make matters worse, ElaboratedType nodes sometimes leaked into these chains, which wasn’t supposed to happen and led to several long-standing bugs. A new, smarter NestedNameSpecifier After this patch, NestedNameSpecifier becomes a compact, tagged pointer — just one machine word wide. The pointer uses 8-byte alignment, leaving three spare bits. Two bits are used for kind discrimination, and one remains available for arbitrary use. When non-null, the tag bits encode: A type A declaration (either a __super class or a namespace) A namespace prefixed by the global scope (::Namespace) A special object combining a namespace with its prefix When null, the tag bits instead encode: An empty nested name (the terminator) The global name An invalid/tombstone entry (for hash tables) Other changes include: The “unknown identifier” case is now represented by a DependentNameType. Type prefixes are handled directly in the type hierarchy. 
Revisiting the earlier example, after the patch its AST dump becomes: DependentNameType 'Namespace::Class::NestedClassTemplate&amp;lt;T&amp;gt;::XX' dependent `-NestedNameSpecifier TemplateSpecializationType 'Namespace::Class::NestedClassTemplate&amp;lt;T&amp;gt;' dependent `-name: 'Namespace::Class::NestedClassTemplate' qualified |-NestedNameSpecifier RecordType 'Namespace::Class' | |-NestedNameSpecifier Namespace 'Namespace' | `-CXXRecord 'Class' `-ClassTemplate NestedClassTemplate This representation now requires only four allocations (156 bytes total): DependentNameType – 48 bytes TemplateSpecializationType – 48 bytes QualifiedTemplateName – 16 bytes RecordType – 40 bytes That’s almost half the number of nodes. While DependentNameType is larger than the previous 16-byte “identifier” node, the additional space isn’t wasted — it holds cached answers to common queries such as “does this type reference a template parameter?” or “what is its canonical form?”. These caches make those operations significantly cheaper, further improving performance. Wrapping up There’s more in the patch than what I’ve covered here, including: RecordType now points directly to the declaration found at creation, enriching the AST without measurable overhead. RecordType nodes are now created lazily. The redesigned NestedNameSpecifier simplified several template instantiation transforms. Each of these could warrant its own write-up, but even this high-level overview shows how careful structural changes in the AST can lead to tangible compile-time wins. 
I hope you found this deep dive into Clang’s internals interesting — and that it gives a glimpse of the kind of small, structural optimizations that add up to real performance improvements in large C++ builds.</summary></entry><entry><title type="html">Conan Packages for Boost</title><link href="http://cppalliance.org/dmitry/2025/10/16/dmitrys-q3-update.html" rel="alternate" type="text/html" title="Conan Packages for Boost" /><published>2025-10-16T00:00:00+00:00</published><updated>2025-10-16T00:00:00+00:00</updated><id>http://cppalliance.org/dmitry/2025/10/16/dmitrys-q3-update</id><content type="html" xml:base="http://cppalliance.org/dmitry/2025/10/16/dmitrys-q3-update.html">&lt;p&gt;Back in April my former colleague Christian Mazakas &lt;a href=&quot;https://lists.boost.org/archives/list/boost@lists.boost.org/message/SW4QNUPFHJPT46Y3OY2CFCR3F73QKLRW/&quot;&gt;has
announced&lt;/a&gt;
his work on a &lt;a href=&quot;https://github.com/cmazakas/vcpkg-registry-test&quot;&gt;registry of nightly Boost packages for
vcpkg&lt;/a&gt;. That same month,
&lt;a href=&quot;https://conan.io&quot;&gt;Conan&lt;/a&gt; developers &lt;a href=&quot;https://blog.conan.io/2024/04/23/Introducing-local-recipes-index-remote.html&quot;&gt;introduced a new
feature&lt;/a&gt;
that significantly simplifies providing an alternative Conan package source.
These two events gave me an idea to create an index of nightly Boost packages
for Conan.&lt;/p&gt;

&lt;h2 id=&quot;conan-remotes&quot;&gt;Conan Remotes&lt;/h2&gt;

&lt;p&gt;Conan installs packages from a &lt;em&gt;remote&lt;/em&gt;, which is usually a web server. When
you request a package in a particular version range, the remote determines if
it has a version that satisfies that range, and then sends you the package
recipe and, if possible, compatible binaries for the package.&lt;/p&gt;

&lt;p&gt;A local-recipes-index is a new kind of Conan remote that is not actually a
remote server at all, but simply a local directory hierarchy of this form:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;recipes
├── pkg1
│   ├── all
│   │   ├── conandata.yml
│   │   ├── conanfile.py
│   │   └── test_package
│   │       └── ...
│   └── config.yml
└── pkg2
    ├── all
    │   ├── conandata.yml
    │   ├── conanfile.py
    │   └── test_package
    │       └── ...
    └── config.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The directory structure is based on the Conan Center’s &lt;a href=&quot;https://github.com/conan-io/conan-center-index&quot;&gt;underlying GitHub
project&lt;/a&gt;. In practice, only
the &lt;code&gt;config.yml&lt;/code&gt; and &lt;code&gt;conanfile.py&lt;/code&gt; files are necessary. The former tells Conan
where to find the package recipe for each version (and hence determines the
set of available versions); the latter is the package recipe itself. In theory there
could be many subdirectories for different versions, but in practice most if not
all packages simply push all version differences into data files like
&lt;code&gt;conandata.yml&lt;/code&gt; and select the corresponding data in the recipe script.&lt;/p&gt;
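
&lt;p&gt;As a concrete (hypothetical) example of what the generator might emit, a &lt;code&gt;config.yml&lt;/code&gt; for a package with a single generated snapshot version could look like this — each version key maps to the subdirectory holding the recipe:&lt;/p&gt;

```yaml
# Hypothetical config.yml produced by the generator script: one entry per
# available version, each pointing at the folder that contains conanfile.py.
versions:
  "1.90.0-a.m+25.08.15.12.15":
    folder: all
```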

&lt;p&gt;In a nutshell, my idea was to set up a scheduled CI job that each day runs a
script which takes the Boost superproject’s latest commits from the &lt;code&gt;develop&lt;/code&gt; and
&lt;code&gt;master&lt;/code&gt; branches and generates a local-recipes-index directory hierarchy. The
recipe directories produced from the two branches are then merged together, and
the result is merged with the output of the previous run. Thus, over time an
index of daily Boost snapshots accumulates.&lt;/p&gt;

&lt;h2 id=&quot;modular-boost&quot;&gt;Modular Boost&lt;/h2&gt;

&lt;p&gt;The project would have been fairly simple if my goal had been to &lt;em&gt;just&lt;/em&gt; provide
nightly packages for Boost: simply take the recipe from the Conan Center
project and replace getting the sources from a release archive with getting
them from GitHub. But I also wanted to package every Boost library separately. This
is generally known as modular Boost packages (not to be confused with Boost C++
modules). There is an apparent demand for such packages, and in fact this is
exactly how vcpkg users consume Boost libraries.&lt;/p&gt;

&lt;p&gt;In addition to the direct results—the Conan packages for Boost
libraries—such a project is a great test of the &lt;em&gt;modularity&lt;/em&gt; of Boost: whether
each library properly spells out all of its dependencies, whether there’s
enough associated metadata that describes the library, whether the project’s
build files are usable without the superproject, and so on. Conan Center (the
default Conan remote) does not currently provide modular Boost packages, only
packages for monolithic Boost (although it provides options to disable building
of specific libraries). Because of that, I decided to generate package recipes not
only for nightly builds, but for tagged releases too.&lt;/p&gt;

&lt;p&gt;Given that, the core element of the project is the script that creates the
index from a Boost superproject &lt;em&gt;Git ref&lt;/em&gt; (branch name or tag). Each library is
a git submodule of the superproject. Every superproject commit contains
references to specific commits in submodules’ projects. The script checks out
each such commit, determines the library’s dependencies and other properties
important for Conan, and outputs &lt;code&gt;config.yml&lt;/code&gt;, &lt;code&gt;conanfile.py&lt;/code&gt;, &lt;code&gt;conandata.yml&lt;/code&gt;,
and &lt;code&gt;test_package&lt;/code&gt; contents.&lt;/p&gt;

&lt;h2 id=&quot;versions&quot;&gt;Versions&lt;/h2&gt;

&lt;p&gt;As previously mentioned, &lt;code&gt;config.yml&lt;/code&gt; contains a list of supported versions.
After one runs the generator script that file will contain exactly one version.
You might ask, what exactly is that version? After some research I ended up
with the scheme &lt;code&gt;MAJOR.MINOR.0-a.B+YY.MM.DD.HH.mm&lt;/code&gt;, where:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code&gt;MAJOR.MINOR.0&lt;/code&gt; is the &lt;em&gt;next&lt;/em&gt; Boost release version;&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;a&lt;/code&gt; implies an alpha-version pre-release;&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;B&lt;/code&gt; is &lt;code&gt;m&lt;/code&gt; for the &lt;code&gt;master&lt;/code&gt; branch and &lt;code&gt;d&lt;/code&gt; for the &lt;code&gt;develop&lt;/code&gt; branch;&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;YY.MM.DD.HH.mm&lt;/code&gt; is the authorship date and time of the source commit.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example, a commit authored at 12:15 on 15th of August 2025 taken from the
&lt;code&gt;master&lt;/code&gt; branch before Boost 1.90.0 was released would be represented by the
version &lt;code&gt;1.90.0-a.m+25.08.15.12.15&lt;/code&gt;. The scheme is an example of &lt;a href=&quot;https://semver.org&quot;&gt;semantic
versioning&lt;/a&gt;. The part between the hyphen and the plus
specifies a pre-release, and the part following the plus identifies a specific
build. All parts of the version contribute to how versions are ordered.
Importantly, pre-releases are ordered &lt;em&gt;before&lt;/em&gt; the release they predate, which
makes sense, but isn’t obvious at first glance.&lt;/p&gt;
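
&lt;p&gt;The ordering described above can be sketched in a few lines of Python (an illustration only — this is not Conan’s actual comparator, and the &lt;code&gt;order_key&lt;/code&gt; helper is invented for the example): the pre-release part sorts before the bare release, and the build metadata breaks ties between snapshots of the same branch.&lt;/p&gt;

```python
def order_key(version):
    # Illustrative only, not Conan's real comparator: split MAJOR.MINOR.PATCH,
    # the pre-release part (after '-'), and the build metadata (after '+').
    core, _, build = version.partition('+')
    release, _, pre = core.partition('-')
    nums = tuple(int(p) for p in release.split('.'))
    # '~' sorts after any letter, so a version with no pre-release sorts last.
    pre_key = tuple(pre.split('.')) if pre else ('~',)
    return (nums, pre_key, build)

snapshots = [
    '1.90.0',
    '1.90.0-a.m+25.08.15.12.15',
    '1.90.0-a.m+25.08.14.23.59',
    '1.90.0-a.d+25.08.15.12.15',
]
ordered = sorted(snapshots, key=order_key)
assert ordered[-1] == '1.90.0'                       # the release sorts last
assert ordered[0] == '1.90.0-a.d+25.08.15.12.15'     # develop before master
assert ordered[1] == '1.90.0-a.m+25.08.14.23.59'     # older snapshot first
```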

&lt;p&gt;I originally did not plan to put commit time into the version scheme, as the
scheduled CI job only runs once a day. But while working on the project, I also
had the package index updated on pushes into the &lt;code&gt;master&lt;/code&gt; branch, which
overwrote previously indexed versions, and that was never the intention. Also,
originally the pre-release part was just the name of the branch, which was good
enough to sort &lt;code&gt;master&lt;/code&gt; and &lt;code&gt;develop&lt;/code&gt;. But with the scope of the project
including actual Boost releases and betas, I needed beta versions to sort
after &lt;code&gt;master&lt;/code&gt; and &lt;code&gt;develop&lt;/code&gt; versions, but before releases, hence I made them
alpha versions explicitly.&lt;/p&gt;

&lt;p&gt;One may ask, why do I even care about betas? By having specific beta versions
I want to encourage more people to check out Boost libraries in beta state and
find the bugs early on. I hope that if obtaining a beta version is as easy as
simply changing one string in a configuration file, more people will try them,
which would reduce the number of bugs shipped in Boost libraries.&lt;/p&gt;

&lt;h2 id=&quot;conan-generators&quot;&gt;Conan Generators&lt;/h2&gt;

&lt;p&gt;One of the most important Conan features, in my opinion, is its support for any
build system rather than a limited selection of them. This is done via
&lt;em&gt;generators&lt;/em&gt;—utilities that convert platform descriptions and dependency data
into configuration files for build systems. In Conan 2.x the usual approach
is to have a pair of generators for a given build system.&lt;/p&gt;

&lt;p&gt;The main one is a dependencies generator, which creates files that tell the
build system how to find dependencies. For example, if you are familiar with
CMake, the &lt;code&gt;CMakeDependencies&lt;/code&gt; generator creates &lt;a href=&quot;https://cmake.org/cmake/help/latest/manual/cmake-packages.7.html#package-configuration-file&quot;&gt;config
modules&lt;/a&gt;
for every dependency.&lt;/p&gt;

&lt;p&gt;The other one is a toolchain generator. Those convert platform information into
build system configuration files which determine the compiler, computer
architecture, OS, and so on. Using CMake as an example again, the
&lt;code&gt;CMakeToolchain&lt;/code&gt; generator creates a &lt;a href=&quot;https://cmake.org/cmake/help/latest/manual/cmake-toolchains.7.html&quot;&gt;toolchain
file&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The reason for the split into two generators is that there are cases where you
use only one of them. For example, if you don’t have any dependencies, you
don’t need a dependencies generator. And when you are working on a project,
you might already have the necessary build system configuration files, so you
don’t need a toolchain generator.&lt;/p&gt;

&lt;p&gt;For my project I needed both for Boost’s main build system,
&lt;a href=&quot;https://www.bfgroup.xyz/b2&quot;&gt;b2&lt;/a&gt;. Boost can also be built with CMake, but
that’s still not officially supported, and is tested less rigorously.
Unfortunately, Conan 2.x doesn’t currently have built-in support for b2. Conan
1.x had it, but with the major version bump most of the old generators were
removed, and the PR to add b2 support back did not go anywhere. So I had to
implement those two generators for b2 myself. Luckily, Conan supports putting such
extensions into packages, so the package index generation script now also
creates a package with the b2 generators.&lt;/p&gt;

&lt;h2 id=&quot;the-current-state-and-lessons-learned&quot;&gt;The Current State and Lessons Learned&lt;/h2&gt;

&lt;p&gt;The work is still in its early stage, but the project is in a somewhat usable
state already. It is currently located
&lt;a href=&quot;https://github.com/grisumbras/boost-conan-index&quot;&gt;here&lt;/a&gt; (I plan to place it
under the boostorg GitHub organisation with the Boost community’s approval, or,
failing that, under the cppalliance organisation). You can clone the project and
install and use some of the Boost libraries, but not all. I have tested that
those libraries build and work on Windows, Linux, and macOS. The b2 generators
are almost feature complete at this point.&lt;/p&gt;

&lt;p&gt;My future work will be mostly dedicated to discovering special requirements of
the remaining libraries and working out ways to handle them. The most
interesting problems are handling projects with special “options” (e.g.
Boost.Context usually has to be told what the target platform ABI and binary
format are), and handling the few external dependencies (e.g. zlib and ICU).
Another interesting task is handling library projects with several binaries
(e.g. Boost.Log) and dealing with the fact that libraries can change from being
compiled to being header-only (yes, this does happen).&lt;/p&gt;

&lt;p&gt;There were also several interesting findings. At first I tried determining
dependencies from the build scripts. But that turned out to be too brittle, so
in the end I decided to use
&lt;a href=&quot;https://github.com/boostorg/boostdep/blob/master/depinst/depinst.py&quot;&gt;&lt;code&gt;depinst&lt;/code&gt;&lt;/a&gt;,
the tool Boost projects use in CI to install dependencies. This is still a bit
too simplistic, as libraries can have optional and platform dependencies. But
I will have to address this later.&lt;/p&gt;

&lt;p&gt;Switching to &lt;code&gt;depinst&lt;/code&gt; uncovered that in Boost 1.89.0 a circular dependency
appeared between Boost.Geometry and Boost.Graph. This is actually a big problem
for package managers, as they have to build all dependencies for a project
before building it, and before that do the same thing for each of the
dependencies, and this creates a paradoxical situation where you need to build
the project before you build that same project. To make such circular
dependencies more apparent in the future, I’ve added a flag to &lt;code&gt;depinst&lt;/code&gt; that
makes it exit with an error if a cycle is discovered.&lt;/p&gt;
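
&lt;p&gt;The check is essentially a depth-first search for a back edge. A minimal sketch (in Python; this is illustrative, not &lt;code&gt;depinst&lt;/code&gt;’s actual code — &lt;code&gt;find_cycle&lt;/code&gt; and the toy dependency map are invented) looks like this:&lt;/p&gt;

```python
# Illustrative cycle detection over a library -> dependencies mapping,
# not depinst's actual implementation.
def find_cycle(deps):
    """Return one dependency cycle as a list of names, or None."""
    visiting, done, stack = set(), set(), []

    def visit(m):
        visiting.add(m)
        stack.append(m)
        for d in deps.get(m, ()):
            if d in visiting:                      # back edge: cycle found
                return stack[stack.index(d):] + [d]
            if d not in done:
                cycle = visit(d)
                if cycle:
                    return cycle
        visiting.discard(m)
        done.add(m)
        stack.pop()
        return None

    for m in list(deps):
        if m not in done:
            cycle = visit(m)
            if cycle:
                return cycle
    return None

# The Boost 1.89.0 situation described above, in miniature:
assert find_cycle({'geometry': ['graph'], 'graph': ['geometry']}) == \
    ['geometry', 'graph', 'geometry']
assert find_cycle({'json': ['container'], 'container': []}) is None
```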

&lt;p&gt;Overall, I think Boost modularisation is going fairly well. Every library I’ve
tried so far builds correctly without the superproject present. I hope to finish
the project soon, preferably before the 1.90.0 release.&lt;/p&gt;

&lt;p&gt;After that there’s still an interesting possible addition. Christian’s vcpkg
registry, mentioned at the very beginning, also had a package for a candidate
library, so that people could easily install it and try it out during the
review period. My package index could in the future also do that. Hopefully
that will motivate more people to participate in Boost reviews.&lt;/p&gt;</content><author><name></name></author><category term="dmitry" /><summary type="html">Back in April my former colleague Christian Mazakas has announced his work on registry of nightly Boost packages for vcpkg. That same month Conan developers have introduced a new feature that significantly simplified providing of an alternative Conan package source. These two events gave me an idea to create an index of nightly Boost packages for Conan. Conan Remotes Conan installs packages from a remote, which is usually a web server. When you request a package in a particular version range, the remote determines if it has a version that satisfies that range, and then sends you the package recipe and, if possible, compatible binaries for the package. Local-recipes-index is a new kind of Conan remote that is not actually a remote server and is just a local directory hierarchy of this kind: recipes ├── pkg1 │ ├── all │ │ ├── conandata.yml │ │ ├── conanfile.py │ │ └── test_package │ │ └── ... │ └── config.yml └── pkg2 ├── all │ ├── conandata.yml │ ├── conanfile.py │ └── test_package │ └── ... └── config.yml The directory structure is based on the Conan Center’s underlying GitHub project. In actuality only the config.yml and conanfile.py files are necessary. The former tells Conan where to find the package recipes for each version (and hence determines the set of available versions), the latter is the package recipe. In theory there could be many subdirectories for different versions, but in reality most if not all packages simply push all version differences into data files like conandata.yml and select the corresponding data in the recipe script. My idea in a nutshell was to set up a scheduled CI job that each day would run a script that takes Boost superproject’s latest commits from develop and master branches and generates a local-recipes-index directory hierarchy. 
Then to have recipes directories coming from different branches merged together, and the result be merged with the results of the previous run. Thus, after a while an index of Boost snapshots from each day would accumulate. Modular Boost The project would have been fairly simple if my goal was to just provide nightly packages for Boost. Simply take the recipe from the Conan Center project and replace getting sources from a release archive with getting sources from GitHub. But I also wanted to package every Boost library separately. This is generally known as modular Boost packages (not to be confused with Boost C++ modules). There is an apparent demand for such packages, and in fact this is exactly how vcpkg users consume Boost libraries. In addition to the direct results—the Conan packages for Boost libraries—such project is a great test of the modularity of Boost. Whether each library properly spells out all of its dependencies, whether there’s enough associated metadata that describes the library, whether the project’s build files are usable without the superproject, and so on. Conan Center (the default Conan remote) does not currently provide modular Boost packages, only packages for monolithic Boost (although it provides options to disable building of specific libraries). Due to that I decided to generate package recipes not only for nightly builds, but for tagged releases too. Given that, the core element of the project is the script that creates the index from a Boost superproject Git ref (branch name or tag). Each library is a git submodule of the superproject. Every superproject commit contains references to specific commits in submodules’ projects. The script checks out each such commit, determines the library’s dependencies and other properties important for Conan, and outputs config.yml, conanfile.py, conandata.yml, and test_package contents. Versions As previously mentioned, config.yml contains a list of supported versions. 
After one runs the generator script that file will contain exactly one version. You might ask, what exactly is that version? After some research I ended up with the scheme MAJOR.MINOR.0-a.B+YY.MM.DD.HH.mm, where: MAJOR.MINOR.0 is the next Boost release version; a implies an alpha-version pre-release; B is m for the master branch and d for the develop branch; YY.MM.DD.HH.mm is the authorship date and time of the source commit. For example, a commit authored at 12:15 on 15th of August 2025 taken from the master branch before Boost 1.90.0 was released would be represented by the version 1.90.0-a.m+25.08.15.12.15. The scheme is an example of semantic versioning. The part between the hyphen and the plus specifies a pre-release, and the part following the plus identifies a specific build. All parts of the version contribute to the versions order after sorting. Importantly, pre-releases are ordered before the release they predate, which makes sense, but isn’t obvious from the first glance. I originally did not plan to put commit time into the version scheme, as the scheduled CI job only runs once a day. But while working on the project, I also had the package index updated on pushes into the master branch, which overwrote previously indexed versions, and that was never the intention. Also, originally the pre-release part was just the name of the branch, which was good enough to sort master and develop. But with the scope of the project including actual Boost releases and betas, I needed beta versions to sort after master and develop versions, but before releases, hence I made them alpha versions explicitly. One may ask, why do I even care about betas? By having specific beta versions I want to encourage more people to check out Boost libraries in beta state and find the bugs early on. 
I hope that if obtaining a beta version is as easy as simply changing one string in a configuration file, more people will check them and that would reduce the amount of bugs shipped in Boost libraries. Conan Generators One of the most important Conan features in my opinion is its support for any build system rather than for a limited selection of them. This is done via generators—utilities that Convert platform description and dependency data into configuration files for build systems. In Conan 2.x the regular approach is to have a set of 2 generators for a given build system. The main one is a dependencies generator, which creates files that tell the build system how to find dependencies. For example, if you are familiar with CMake, the CMakeDependencies generator creates config modules for every dependency. The other one is a toolchain generator. Those convert platform information into build system configuration files which determine the compiler, computer architecture, OS, and so on. Using CMake as an example again, the CMakeToolchain generator creates a toolchain file. The reason for the split into 2 generators is that there are cases when you use only one of them. For example, if you don’t have any dependencies, you don’t need a dependencies generator. And when you are working on a project, you might already have the necessary build system configuration files, so you don’t need a toolchain generator. For my project I needed both for Boost’s main build system, b2. Boost can also be built with CMake, but that’s still not officially supported, and is tested less rigorously. Unfortunately, Conan 2.x doesn’t currently have in-built support for b2. It had it in Conan 1.x, but with the major version increase they’ve removed most of the old generators, and the PR to add it back did not go anywhere. So, I had to implement those 2 generators for b2. Luckily, Conan supports putting such Conan extensions into packages. 
So, now the package index generation script also creates a package with b2 generators. The Current State and Lessons Learned The work is still in its early stage, but the project is in a somewhat usable state already. It is currently located here (I plan to place it under boostorg GitHub organisation with the Boost community’s approval, or, failing that, under cppalliance organisation). You can clone the project and install and use some of the Boost libraries, but not all. I have tested that those libraries build and work on Windows, Linux, and macOS. The b2 generators are almost feature complete at this point. My future work will be mostly dedicated to discovering special requirements of the remaining libraries and working out ways to handle them. The most interesting problems are handling projects with special “options” (e.g. Boost.Context usually has to be told what the target platform ABI and binary format are), and handling the few external dependencies (e.g. zlib and ICU). Another interesting task is handling library projects with several binaries (e.g. Boost.Log) and dealing with the fact that libraries can change from being compiled to being header-only (yes, this does happen). There were also several interesting findings. At first I tried determining dependencies from the build scripts. But that turned out to be too brittle, so in the end I decided to use depinst, the tool Boost projects use in CI to install dependencies. This is still a bit too simplistic, as libraries can have optional and platform dependencies. But I will have to address this later. Switching to depinst uncovered that in Boost 1.89.0 a circular dependency appeared between Boost.Geometry and Boost.Graph. 
This is actually a big problem for package managers, as they have to build all dependencies for a project before building it, and before that do the same thing for each of the dependencies, and this creates a paradoxical situation where you need to build the project before you build that same project. To make such circular dependencies more apparent in the future, I’ve added a flag to depinst that makes it exit with an error if a cycle is discovered. Overall, I think Boost modularisation is going fairly well. Every library I’ve tried yet builds correctly without the superproject present. I hope to finish the project soon, preferably before the 1.90.0 release. After that there’s still an interesting possible addition. Christian’s vcpkg registry mentioned in the very beginning also had a package for a candidate library, so that people could easily install it and try it out during the review period. My package index could in the future also do that. Hopefully that will motivate more people to participate in Boost reviews.</summary></entry><entry><title type="html">Writing Docs with Visuals and Verve</title><link href="http://cppalliance.org/peter/2025/10/15/Peter-Turcan-Q3-2025.html" rel="alternate" type="text/html" title="Writing Docs with Visuals and Verve" /><published>2025-10-15T00:00:00+00:00</published><updated>2025-10-15T00:00:00+00:00</updated><id>http://cppalliance.org/peter/2025/10/15/Peter-Turcan-Q3-2025</id><content type="html" xml:base="http://cppalliance.org/peter/2025/10/15/Peter-Turcan-Q3-2025.html">&lt;p&gt;In a past life I worked in the computer journalism business, and learnt over time what attracts people to read a page. Lot’s of things are important, the font used, the spacing between letters and lines and paragraphs, even the width of a column of text is super-important for readability (so the eye does not lose track of the line it is on). Other stuff is important to, readers, especially technical readers, love tables. 
A table of all the networking libraries available in Boost, for example, becomes a reassuring point of reference, as opposed to a bunch of text listing the libraries. Two of the most important factors in drawing readers in are headlines and images. It takes some grey matter, experience, and skill to come up with a catchy headline (“Phew - what a scorcher” is a famous example of a tabloid headline after a super-hot day!). The more I worked in journalism the more I appreciated all the skills involved - both those I had and those I did not!
When it comes to images, I decided to add at least one image to all the Scenarios in the Boost User Guide (Finance, Networking, and Simulation, among others). One of the skills I do not have is that of an artist, so these images mostly had to come from elsewhere. Not all of them, though: for the deformation example in the Simulation scenario, I came up with the following image - which works, I guess!&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/posts/peterturcan/deformation.png&quot; alt=&quot;Cube deformation&quot; /&gt;&lt;/p&gt;

&lt;p&gt;For other images I used AI to come up with some text-based diagrams, which do work well as “images” for a technical readership. For example, the following simple flow for a Message Queue shows what is going on: Receiver 3 picks up all the inappropriately addressed messages, as well as its own.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/posts/peterturcan/message-queue.png&quot; alt=&quot;Message queue&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Other images required some research. I must admit I did not know the difference between a &lt;em&gt;petal&lt;/em&gt; and a &lt;em&gt;sepal&lt;/em&gt; until I did some research on the iris data used in the Machine Learning scenario. The following image is a composite of a picture taken by my wife of an iris in a New Mexico volcanic caldera and a diagram generated by AI. Now I know: the sepals are the dangly bits I would have called petals before engaging in this research.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/posts/peterturcan/iris-photo.png&quot; alt=&quot;Iris components&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Hopefully these images will draw in readers and entice them to try out the scenarios, and then continue their programming journey with the support of Boost libraries.&lt;/p&gt;

&lt;p&gt;Another topic I dug into this quarter was finding examples of good and bad practices, and then tabularizing them (remember, tables are trusted references…). I started with error messages. Here is an example of a tedious error message:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;error C2679: binary '=': no operator found which takes a right-hand operand of type 'boost::gregorian::date' (or there is no acceptable conversion)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Why is it tedious? Because it is verbose and yet still doesn’t say what the user did wrong; not even the library name appears in the message. A shorter, sharper, and more helpful message would be:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;boost::date_time::invalid_date: &quot;2025-02-30&quot; is not a valid Gregorian date&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This message contains the library name (date_time) and the invalid input. Error message content should be a high-priority issue for API developers.&lt;/p&gt;
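In code, the difference is mostly a matter of what the exception carries. A hypothetical exception type following this guidance (the class name and message format are my own illustration, not an actual Boost API) might look like:

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Hypothetical example of the guidance above: the error text names the
// library and echoes back the exact input the user supplied.
class invalid_date_error : public std::runtime_error {
public:
    explicit invalid_date_error(const std::string& input)
        : std::runtime_error("boost::date_time::invalid_date: \"" + input +
                             "\" is not a valid Gregorian date") {}
};
```

The caller immediately sees which library complained and which value it rejected.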

&lt;p&gt;Another topic added to Best Practices is API design itself. The Boost.Filesystem library contains some good examples of clear design - for example, here is a section from the Contributor Guide:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/posts/peterturcan/clear-overloads.png&quot; alt=&quot;Clear Overloads&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Of course, there is an element of style, taste, and personal preference in all these issues. It is totally OK for APIs to reflect those traits of their developers; this guide is there as a checkpoint - a resource to read over and reflect on when evaluating your own work.&lt;/p&gt;

&lt;p&gt;Talking of AI - and who isn’t? - I asked the C++ Alliance to give me an image-creating AI API account, so I could add a section to the AI Client scenario on creating images. It is a lot of fun asking an AI to create an image, though you would be correct in thinking you probably could have come up with something better yourself! For example, check out this exchange:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Enter your request (ASCII diagram or text) or 'exit': Can you draw an ASCII diagram of a speedboat?&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Assistant Response:
Sure! Here's a simple ASCII representation of a speedboat:&lt;/code&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;        __/__
  _____/_____|_____
  \              /
~~~~~~~~~~~~~~~~~~~~~
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;On a tad more serious note, I added topics on Reflection and Diagnostics to the User Guide. And I scan the Slack conversations for words or phrases that are new to me, and add them to the User Guide Glossary - which is a fun document in its own right. Even a document as potentially dull as a glossary can be fun to create and fun to read. Everybody likes to be entertained when they are reading; it takes the grind out of the experience.&lt;/p&gt;</content><author><name></name></author><category term="peter" /><summary type="html">In a past life I worked in the computer journalism business, and learnt over time what attracts people to read a page.</summary></entry><entry><title type="html">DynamicBitset Reimagined: A Quarter of Flexibility, Cleanup, and Modern C++</title><link href="http://cppalliance.org/gennaro/2025/10/14/Gennaros2025Q3Update.html" rel="alternate" type="text/html" title="DynamicBitset Reimagined: A Quarter of Flexibility, Cleanup, and Modern C++" /><published>2025-10-14T00:00:00+00:00</published><updated>2025-10-14T00:00:00+00:00</updated><id>http://cppalliance.org/gennaro/2025/10/14/Gennaros2025Q3Update</id><content type="html" xml:base="http://cppalliance.org/gennaro/2025/10/14/Gennaros2025Q3Update.html">&lt;p&gt;Over the past three months, I’ve been immersed in a deep and wide-ranging
overhaul of the Boost.DynamicBitset library. What started as a few targeted
improvements quickly evolved into a full-scale modernization effort—touching
everything from the underlying container to iterator concepts, from test
coverage to documentation style. More than 170 commits later, the library is
leaner, more flexible, and better aligned with modern C++ practices.&lt;/p&gt;

&lt;h2 id=&quot;making-the-core-more-flexible&quot;&gt;Making the core more flexible&lt;/h2&gt;

&lt;p&gt;The most transformative change this quarter was allowing users to choose the
underlying container type for &lt;code&gt;dynamic_bitset&lt;/code&gt;. Until now, the implementation
assumed &lt;code&gt;std::vector&lt;/code&gt;, which limited optimization opportunities and imposed
certain behaviors. By lifting that restriction, developers can now use
alternatives like &lt;code&gt;boost::container::small_vector&lt;/code&gt;, enabling small buffer
optimization and more control over memory layout.&lt;/p&gt;

&lt;p&gt;This change had ripple effects throughout the codebase. I had to revisit
assumptions about contiguous storage, update operators like &lt;code&gt;&amp;lt;&amp;lt;=&lt;/code&gt; and &lt;code&gt;&amp;gt;&amp;gt;=&lt;/code&gt;, and
ensure that reference stability and iterator behavior were correctly handled.&lt;/p&gt;
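To make the idea concrete, here is a much-simplified sketch of bit storage parameterized on the container type — an illustration of the design direction, not &lt;code&gt;dynamic_bitset&lt;/code&gt;’s real interface:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hedged sketch (not dynamic_bitset's actual implementation): the block
// storage is a template parameter, so callers can substitute e.g. a
// small-buffer container without changing any of the bit logic.
template <class BlockContainer = std::vector<std::uint64_t>>
class basic_bits {
    static constexpr std::size_t bits_per_block = 64;
    BlockContainer blocks_;
    std::size_t size_ = 0;
public:
    void push_back(bool bit) {
        if (size_ % bits_per_block == 0) blocks_.push_back(0);  // grow storage
        if (bit)
            blocks_.back() |= (std::uint64_t{1} << (size_ % bits_per_block));
        ++size_;
    }
    bool test(std::size_t i) const {
        return (blocks_[i / bits_per_block] >> (i % bits_per_block)) & 1u;
    }
    std::size_t size() const { return size_; }
};
```

With the container as a template parameter, substituting something like `boost::container::small_vector` (or any vector-like type) changes the memory layout without touching the bit logic.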

&lt;h2 id=&quot;introducing-c20-iterators&quot;&gt;Introducing C++20 iterators&lt;/h2&gt;

&lt;p&gt;One of the more exciting additions this quarter was support for C++20-style
iterators. These new iterators conform to the standard iterator concepts, making
&lt;code&gt;dynamic_bitset&lt;/code&gt; more interoperable with modern algorithms and range-based
utilities.&lt;/p&gt;

&lt;p&gt;I added assertions to ensure that both the underlying container and
&lt;code&gt;dynamic_bitset&lt;/code&gt; itself meet the requirements for bidirectional iteration. These
checks are enabled only when compiling with C++20 or later, and they help catch
subtle mismatches early—especially when users plug in custom containers.&lt;/p&gt;
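The checks themselves can be as simple as evaluating a standard iterator concept at compile time. A sketch (illustrative; DynamicBitset’s actual assertions may differ):

```cpp
#include <cassert>
#include <iterator>
#include <list>
#include <vector>

#if __cplusplus >= 202002L
// Evaluate the C++20 iterator concept for a container's iterator type.
template <class Container>
constexpr bool is_bidirectional =
    std::bidirectional_iterator<typename Container::iterator>;

// vector iterators are random-access, hence also bidirectional;
// list iterators are exactly bidirectional.
static_assert(is_bidirectional<std::vector<int>>);
static_assert(is_bidirectional<std::list<int>>);
#endif

bool bidirectional_checks_pass() {
#if __cplusplus >= 202002L
    return is_bidirectional<std::vector<int>> &&
           is_bidirectional<std::list<int>>;
#else
    return true;  // concept checks are only available from C++20 on
#endif
}
```

Such assertions cost nothing at run time and fail the build early if a plugged-in container falls short of the requirements.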

&lt;h2 id=&quot;saying-goodbye-to-legacy-workarounds&quot;&gt;Saying goodbye to legacy workarounds&lt;/h2&gt;

&lt;p&gt;With modern compilers and standard libraries, many old workarounds are no longer
needed. I removed the &lt;code&gt;max_size_workaround()&lt;/code&gt; after confirming that major
implementations now correctly account for allocators in &lt;code&gt;max_size()&lt;/code&gt;. I also
dropped support for obsolete compilers like MSVC 6 and CodeWarrior 8.3, and for
pre-standard iostreams, cleaned up outdated macros, and removed compatibility
layers for pre-C++11 environments.&lt;/p&gt;

&lt;p&gt;These removals weren’t just cosmetic—they simplified the code and made it easier
to reason about. In many places, I replaced legacy constructs with standard
features like &lt;code&gt;noexcept&lt;/code&gt; and &lt;code&gt;std::move()&lt;/code&gt;.&lt;/p&gt;

&lt;h2 id=&quot;constexpr-support&quot;&gt;constexpr support&lt;/h2&gt;

&lt;p&gt;When compiled as C++20 or later, almost all functions in DynamicBitset are
now &lt;code&gt;constexpr&lt;/code&gt;.&lt;/p&gt;

&lt;h2 id=&quot;dropping-obsolete-dependencies&quot;&gt;Dropping obsolete dependencies&lt;/h2&gt;

&lt;p&gt;As part of the cleanup effort, I also removed several outdated dependencies that
were no longer justified. These included Boost.Integer (previously used by
&lt;code&gt;lowest_bit()&lt;/code&gt;), &lt;code&gt;core/allocator_access.hpp&lt;/code&gt;, and various compatibility headers
tied to pre-C++11 environments. This not only reduces compile-time overhead and
cognitive load, but also makes the library easier to audit and maintain.&lt;/p&gt;

&lt;h2 id=&quot;strengthening-the-test-suite&quot;&gt;Strengthening the test suite&lt;/h2&gt;

&lt;p&gt;Part of this quarter’s work was expanding and refining the test coverage. I
added new tests for &lt;code&gt;flip()&lt;/code&gt;, &lt;code&gt;resize()&lt;/code&gt;, &lt;code&gt;swap()&lt;/code&gt;, and &lt;code&gt;operator!=()&lt;/code&gt;. I also
ensured that input iterators are properly supported in &lt;code&gt;append()&lt;/code&gt;, and verified
that &lt;code&gt;std::hash&lt;/code&gt; behaves correctly even when two bitsets share the same
underlying container but differ in size.&lt;/p&gt;

&lt;p&gt;Along the way, I cleaned up misleading comments, shortened overly complex
conditions, and removed legacy test code that no longer reflected the current
behavior of the library. The result is a test suite that’s more robust, more
meaningful, and easier to maintain.&lt;/p&gt;

&lt;h2 id=&quot;documentation-that-speaks-clearly&quot;&gt;Documentation that speaks clearly&lt;/h2&gt;

&lt;p&gt;I’ve always believed that documentation should be treated as part of the design,
not an afterthought. This quarter, I ported the existing documentation to MrDocs
and Antora, while fixing and improving a few bits in the process. This uncovered
a few MrDocs bugs, some of which remain—but I’m hopeful.&lt;/p&gt;

&lt;p&gt;I also spent time harmonizing the style and structure of the library’s comments
and docstrings.&lt;/p&gt;

&lt;p&gt;I chose to document iterator categories rather than exposing concrete types,
which keeps the interface clean and focused on behavior rather than
implementation details.&lt;/p&gt;

&lt;h2 id=&quot;new-member-functions-and-smarter-implementations&quot;&gt;New member functions and smarter implementations&lt;/h2&gt;

&lt;p&gt;This quarter also introduced several new member functions that expand the
expressiveness and utility of &lt;code&gt;dynamic_bitset&lt;/code&gt;:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code&gt;push_front()&lt;/code&gt; and &lt;code&gt;pop_front()&lt;/code&gt; allow bit-level manipulation at the front of
the bitset, complementing the existing back-oriented operations.&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;find_first_off()&lt;/code&gt; and &lt;code&gt;find_next_off()&lt;/code&gt; provide symmetric functionality to
their &lt;code&gt;find_first()&lt;/code&gt; counterparts, making it easier to locate unset bits.&lt;/li&gt;
  &lt;li&gt;A constructor from &lt;code&gt;basic_string_view&lt;/code&gt; was added for C++17 and later,
improving interoperability with modern string APIs.&lt;/li&gt;
&lt;/ul&gt;
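As an illustration of the `find_first_off()`/`find_next_off()` semantics described above — written here against `std::vector<bool>` rather than `dynamic_bitset` itself, so it is only a sketch of the intended behaviour:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sentinel for "no unset bit found" in this sketch.
constexpr std::size_t bit_npos = static_cast<std::size_t>(-1);

// Index of the first unset bit, or bit_npos if every bit is set.
std::size_t find_first_off(const std::vector<bool>& bits) {
    for (std::size_t i = 0; i != bits.size(); ++i)
        if (!bits[i]) return i;
    return bit_npos;
}

// Index of the first unset bit strictly after pos, or bit_npos.
std::size_t find_next_off(const std::vector<bool>& bits, std::size_t pos) {
    for (std::size_t i = pos + 1; i < bits.size(); ++i)
        if (!bits[i]) return i;
    return bit_npos;
}
```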

&lt;p&gt;Alongside these additions, I revisited the implementation of several existing
members to improve performance and clarity:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code&gt;push_back()&lt;/code&gt; and &lt;code&gt;pop_back()&lt;/code&gt; were streamlined for better efficiency.&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;all()&lt;/code&gt; and &lt;code&gt;lowest_bit()&lt;/code&gt; were simplified and optimized, with the latter also
shedding its dependency on Boost.Integer.&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;append()&lt;/code&gt; was fixed to properly support input iterators and avoid redundant
checks.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;minor-but-impactful-cleanups&quot;&gt;Minor but impactful cleanups&lt;/h2&gt;

&lt;p&gt;A large number of small edits improved correctness, readability, and
maintainability:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Fixed the stream inserter to set &lt;code&gt;badbit&lt;/code&gt; if an exception is thrown during
output.&lt;/li&gt;
  &lt;li&gt;Changed the stream extractor to rethrow any exceptions coming from the
underlying container.&lt;/li&gt;
  &lt;li&gt;Reordered and cleaned up all &lt;code&gt;#include&lt;/code&gt; sections to use the &lt;code&gt;&quot;&quot;&lt;/code&gt; form for
Boost includes where appropriate and to keep include groups sorted.&lt;/li&gt;
  &lt;li&gt;Removed an example timing benchmark that was misleading and a number of
unneeded comments and minor typos across code and docs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These edits reduce noise and make code reviews and maintenance more pleasant.&lt;/p&gt;

&lt;h2 id=&quot;reflections&quot;&gt;Reflections&lt;/h2&gt;

&lt;p&gt;Looking back, this quarter reminded me of the value of revisiting assumptions.
Many of the workarounds and constraints that once made sense are now obsolete.
By embracing modern C++ features and simplifying where possible, we can make
libraries like &lt;code&gt;dynamic_bitset&lt;/code&gt; more powerful and more approachable.&lt;/p&gt;

&lt;p&gt;It also reinforced the importance of clarity—both in code and in documentation.
Whether it’s a test case, a comment, or a public API, precision and consistency
go a long way.&lt;/p&gt;

&lt;p&gt;The work continues, but the foundation is stronger than ever. If you’re using
&lt;code&gt;dynamic_bitset&lt;/code&gt; or thinking about integrating it into your project, I’d love to
hear your feedback.&lt;/p&gt;</content><author><name></name></author><category term="gennaro" /><summary type="html">Over the past three months, I’ve been immersed in a deep and wide-ranging overhaul of the Boost.DynamicBitset library.</summary></entry><entry><title type="html">Working on Boost.Bloom roadmap</title><link href="http://cppalliance.org/joaquin/2025/10/09/Joaquins2025Q3Update.html" rel="alternate" type="text/html" title="Working on Boost.Bloom roadmap" /><published>2025-10-09T00:00:00+00:00</published><updated>2025-10-09T00:00:00+00:00</updated><id>http://cppalliance.org/joaquin/2025/10/09/Joaquins2025Q3Update</id><content type="html" xml:base="http://cppalliance.org/joaquin/2025/10/09/Joaquins2025Q3Update.html">&lt;p&gt;During Q3 2025, I’ve been working in the following areas:&lt;/p&gt;

&lt;h3 id=&quot;boostbloom&quot;&gt;Boost.Bloom&lt;/h3&gt;

&lt;p&gt;&lt;a href=&quot;https://www.boost.org/doc/libs/latest/libs/bloom/doc/html/bloom.html&quot;&gt;Boost.Bloom&lt;/a&gt; has been officially
released in Boost 1.89. I’ve continued working on a number of roadmap features:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Originally, some subfilters (&lt;code&gt;block&lt;/code&gt;, &lt;code&gt;fast_multiblock32&lt;/code&gt; and &lt;code&gt;fast_multiblock64&lt;/code&gt;)
implemented lookup in a branchful, early-exit way: as soon as a checked bit is zero, lookup
terminates (with result &lt;code&gt;false&lt;/code&gt;). After extensive benchmarks, I’ve changed these subfilters
to branchless execution for somewhat better performance (&lt;a href=&quot;https://github.com/boostorg/bloom/pull/42&quot;&gt;PR#42&lt;/a&gt;).
Note that &lt;code&gt;boost::bloom::filter&amp;lt;T, K, ...&amp;gt;&lt;/code&gt; is still
branchful for &lt;code&gt;K&lt;/code&gt; (the number of subfilter operations per element): in this case, branchless
execution involves too much extra work and does not compensate for the removed branch speculation.
Ivan Matek helped with this investigation.&lt;/li&gt;
  &lt;li&gt;Added &lt;a href=&quot;https://www.boost.org/doc/libs/develop/libs/bloom/doc/html/bloom.html#tutorial_bulk_operations&quot;&gt;bulk-mode operations&lt;/a&gt;
following a similar approach to what we did with Boost.Unordered concurrent containers
(&lt;a href=&quot;https://github.com/boostorg/bloom/pull/43&quot;&gt;PR#43&lt;/a&gt;).&lt;/li&gt;
  &lt;li&gt;I’ve also been working on a proof of concept for a dynamic filter where the &lt;em&gt;k&lt;/em&gt; and/or &lt;em&gt;k’&lt;/em&gt; values
can be specified at run time. As expected, the dynamic filter is slower than its static
counterpart: benchmarks show that execution times can increase by up to 2x for lookup and
even more for insertion, which leaves me undecided as to whether to launch this feature.
An alternative approach is to have a &lt;code&gt;dynamic_filter&amp;lt;T&amp;gt;&lt;/code&gt; be a wrapper over a virtual interface
whose implementation is selected at run time from a static table of implementations
based on static &lt;code&gt;filter&amp;lt;T, K&amp;gt;&lt;/code&gt; with
&lt;code&gt;K&lt;/code&gt; between 1 and some maximum value (this type erasure technique is described, among
other places, in slides 157-205 of Sean Parent’s
&lt;a href=&quot;https://raw.githubusercontent.com/wiki/sean-parent/sean-parent.github.io/presentations/2013-09-11-cpp-seasoning/cpp-seasoning.pdf&quot;&gt;C++ Seasoning&lt;/a&gt;
talk): performance is much better, but this approach also has drawbacks of its own.&lt;/li&gt;
  &lt;li&gt;Reviewed a contribution from Braden Ganetsky to make the project’s &lt;code&gt;CMakeLists.txt&lt;/code&gt;
more Visual Studio-friendly (&lt;a href=&quot;https://github.com/boostorg/bloom/pull/33&quot;&gt;PR#33&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;
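The branchful-versus-branchless distinction can be shown with a toy lookup over a single word (this is an illustration of the two styles, not Boost.Bloom’s implementation):

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Branchful: return false the moment any probed bit is zero.
bool lookup_branchful(std::uint64_t word, const std::array<unsigned, 3>& bits) {
    for (unsigned b : bits)
        if (((word >> b) & 1u) == 0) return false;  // early exit
    return true;
}

// Branchless: fold the probed bits together with AND, test once at the end.
bool lookup_branchless(std::uint64_t word, const std::array<unsigned, 3>& bits) {
    std::uint64_t all = 1;
    for (unsigned b : bits) all &= (word >> b);  // no data-dependent branch
    return (all & 1u) != 0;
}
```

Both return the same answer; the branchless form trades a little extra arithmetic for the absence of a data-dependent branch, which pays off when the branch predictor would otherwise guess poorly.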

&lt;h3 id=&quot;boostunordered&quot;&gt;Boost.Unordered&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Reviewed &lt;a href=&quot;https://github.com/boostorg/unordered/pull/316&quot;&gt;PR#316&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;boostmultiindex&quot;&gt;Boost.MultiIndex&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Reviewed &lt;a href=&quot;https://github.com/boostorg/multi_index/pull/83&quot;&gt;PR#83&lt;/a&gt;, &lt;a href=&quot;https://github.com/boostorg/multi_index/pull/84&quot;&gt;PR#84&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;boostflyweight&quot;&gt;Boost.Flyweight&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Fixed an internal compile error that manifested with newer compilers implementing
&lt;a href=&quot;https://wg21.link/p0522r0&quot;&gt;P0522R0&lt;/a&gt;
(&lt;a href=&quot;https://github.com/boostorg/flyweight/pull/23&quot;&gt;PR#23&lt;/a&gt;).&lt;/li&gt;
  &lt;li&gt;Reviewed &lt;a href=&quot;https://github.com/boostorg/flyweight/pull/22&quot;&gt;PR#22&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;boostpolycollection&quot;&gt;Boost.PolyCollection&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Reviewed &lt;a href=&quot;https://github.com/boostorg/poly_collection/pull/32&quot;&gt;PR#32&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;boost-website&quot;&gt;Boost website&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Filed issues
&lt;a href=&quot;https://github.com/boostorg/website-v2/issues/1845&quot;&gt;#1845&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/website-v2/issues/1846&quot;&gt;#1846&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/website-v2/issues/1851&quot;&gt;#1851&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/website-v2/issues/1858&quot;&gt;#1858&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/website-v2/issues/1900&quot;&gt;#1900&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/website-v2/issues/1927&quot;&gt;#1927&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/website-v2/issues/1936&quot;&gt;#1936&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/website-v2/issues/1937&quot;&gt;#1937&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;Helped with the transition of the global release notes procedure to one
based on the new website repo exclusively
(&lt;a href=&quot;https://github.com/boostorg/website-v2-docs/pull/508&quot;&gt;PR#508&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/website-v2-docs/pull/510&quot;&gt;PR#510&lt;/a&gt;). This procedure is
expected to launch in time for the upcoming Boost 1.90 release.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;boost-promotion&quot;&gt;Boost promotion&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Prepared and posted around 10 messages on Boost’s X account and Reddit.
The activity on social media has grown considerably thanks to the dedication of
Rob Beeston and others.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;support-to-the-community&quot;&gt;Support to the community&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Helped Jean-Louis Leroy get Drone support for the upcoming
Boost.OpenMethod library (&lt;a href=&quot;https://github.com/boostorg/openmethod/pull/39&quot;&gt;PR#39&lt;/a&gt;).&lt;/li&gt;
  &lt;li&gt;Supported the community as a member of the Fiscal Sponsorship Committee (FSC).&lt;/li&gt;
&lt;/ul&gt;</content><author><name></name></author><category term="joaquin" /><summary type="html">During Q3 2025, I’ve been working in the following areas: Boost.Bloom Boost.Bloom has been officially released in Boost 1.89. I’ve continued working on a number of roadmap features: Originally, some subfilters (block, fast_multiblock32 and fast_multiblock64) implemented lookup in a branchful or early-exit way: as soon as a bit checks to zero, lookup terminates (with result false). After extensive benchmarks, I’ve changed these subfilters to branchless execution for somewhat better performance (PR#42). Note that boost::bloom::filter&amp;lt;T, K, ...&amp;gt; is still branchful for K (the number of subfilter operations per element): in this case, branchless execution involves too much extra work and does not compensate for the removed branch speculation. Ivan Matek helped with this investigation. Added bulk-mode operations following a similar approach to what we did with Boost.Unordered concurrent containers (PR#42). I’ve been also working on a proof of concept for a dynamic filter where the k and/or k’ values can be specified at run time. As expected, the dynamic filter is slower than its static counterpart, but benchmarks show that execution times can increase by up to 2x for lookup and even more for insertion, which makes me undecided as to whether to launch this feature. An alternative approach is to have a dynamic_filter&amp;lt;T&amp;gt; be a wrapper over a virtual interface whose implementation is selected at run time from a static table of implementations based on static filter&amp;lt;T, K&amp;gt; with K between 1 and some maximum value (this type erasure technique is described, among other places, in slides 157-205 of Sean Parent’s C++ Seasoning talk): performance is much better, but this approach also has drawbacks of its own. Reviewed a contribution fom Braden Ganetsky to make the project’s CMakeLists.txt more Visual Studio-friendly (PR#33). 
Boost.Unordered Reviewed PR#316. Boost.MultiIndex Reviewed PR#83, PR#84. Boost.Flyweight Fixed an internal compile error that manifested with newer compilers implementing P0522R0 (PR#23). Reviewed PR#22. Boost.PolyCollection Reviewed PR#32. Boost website Filed issues #1845, #1846, #1851, #1858, #1900, #1927, #1936, #1937. Helped with the transition of the global release notes procedure to one based on the new website repo exclusively (PR#508, PR#510). This procedure is expected to launch in time for the upcoming Boost 1.90 release. Boost promotion Prepared and posted around 10 messages on Boost’s X account and Reddit. The activity on social media has grown considerably thanks to the dedication of Rob Beeston and others. Support to the community Helped Jean-Louis Leroy get Drone support for the upcoming Boost.OpenMethod library (PR#39). Supporting the community as a member of the Fiscal Sponsorhip Committee (FSC).</summary></entry><entry><title type="html">Levelling up Boost.Redis</title><link href="http://cppalliance.org/ruben/2025/10/07/Ruben2025Q3Update.html" rel="alternate" type="text/html" title="Levelling up Boost.Redis" /><published>2025-10-07T00:00:00+00:00</published><updated>2025-10-07T00:00:00+00:00</updated><id>http://cppalliance.org/ruben/2025/10/07/Ruben2025Q3Update</id><content type="html" xml:base="http://cppalliance.org/ruben/2025/10/07/Ruben2025Q3Update.html">&lt;p&gt;I’ve really come to appreciate Boost.Redis design. With only
three asynchronous primitives it exposes all the power of Redis,
and features like automatic pipelining make it pretty unique.
Boost.Redis 1.90 will ship with some exciting new features that I’ll
cover in this post.&lt;/p&gt;

&lt;h2 id=&quot;cancelling-requests-with-asiocancel_after&quot;&gt;Cancelling requests with asio::cancel_after&lt;/h2&gt;

&lt;p&gt;Boost.Redis implements a number of reliability measures, including reconnection.
Suppose that you attempt to execute a request using &lt;code&gt;async_exec&lt;/code&gt;,
but the Redis server can’t be contacted (for example, because of a temporary network error).
Boost.Redis will try to re-establish the connection to the failed server,
and &lt;code&gt;async_exec&lt;/code&gt; will suspend until the server is healthy again.&lt;/p&gt;

&lt;p&gt;This is a great feature if the outage is transitory. But what happens if
the Redis server is permanently down - for example, because of a deployment issue that
must be resolved manually? The user will see that &lt;code&gt;async_exec&lt;/code&gt; never completes.
If new requests continue to be issued, the program will end up consuming an
unbounded amount of resources.&lt;/p&gt;

&lt;p&gt;Starting with Boost 1.90, you can use &lt;code&gt;asio::cancel_after&lt;/code&gt; to set
a timeout to your requests, preventing this from happening:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;// Compose your request
redis::request req;
req.push(&quot;SET&quot;, &quot;my_key&quot;, 42);

// If the request doesn't complete within 30s, consider it failed
co_await conn.async_exec(req, redis::ignore, asio::cancel_after(30s));
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;For this to work, &lt;code&gt;async_exec&lt;/code&gt; must properly support
&lt;a href=&quot;https://www.boost.org/doc/libs/latest/doc/html/boost_asio/overview/core/cancellation.html&quot;&gt;per-operation cancellation&lt;/a&gt;.
This is tricky because Boost.Redis allows executing several requests concurrently,
which are merged into a single pipeline before being sent.
For the above to be useful, cancelling one request shouldn’t affect other requests.
In Asio parlance, &lt;code&gt;async_exec&lt;/code&gt; should support partial cancellation, at least.&lt;/p&gt;

&lt;p&gt;Cancelling a request that hasn’t been sent yet is trivial - you just remove it from
the queue and call it a day. Cancelling requests that are in progress is more involved.
We’ve solved this by using “tombstones”: a cancelled in-flight request leaves a tombstone
behind, and when the matching response arrives, it is ignored. This way, cancelling
&lt;code&gt;async_exec&lt;/code&gt; always has an immediate effect, while the connection is kept
in a well-defined state.&lt;/p&gt;
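
&lt;p&gt;The tombstone idea can be sketched roughly as follows. This is an illustration only,
not the library’s actual implementation - all names here are made up:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &amp;lt;cstddef&amp;gt;
#include &amp;lt;deque&amp;gt;
#include &amp;lt;string&amp;gt;
#include &amp;lt;vector&amp;gt;

// Hypothetical sketch of the tombstone technique
struct slot
{
    bool tombstone = false; // set when the waiting async_exec gets cancelled
};

struct pipeline
{
    std::deque&amp;lt;slot&amp;gt; in_flight;         // requests already sent to the server
    std::vector&amp;lt;std::string&amp;gt; delivered; // responses handed to user code

    // Cancelling an in-flight request: mark it rather than erase it,
    // so later responses still match their requests by position
    void cancel(std::size_t i) { in_flight[i].tombstone = true; }

    // A response matching a tombstone is consumed and dropped,
    // keeping the connection in a well-defined state
    void on_response(std::string r)
    {
        bool dead = in_flight.front().tombstone;
        in_flight.pop_front();
        if (!dead)
            delivered.push_back(std::move(r));
    }
};
&lt;/code&gt;&lt;/pre&gt;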

&lt;h2 id=&quot;custom-setup-requests&quot;&gt;Custom setup requests&lt;/h2&gt;

&lt;p&gt;Redis talks the RESP3 protocol. But it’s not the only database system that speaks it.
We’ve recently learnt that other systems, like &lt;a href=&quot;https://www.tarantool.io/en/tarantooldb/&quot;&gt;Tarantool DB&lt;/a&gt;,
are also capable of speaking RESP3. This means that Boost.Redis can be used to
interact with these systems.&lt;/p&gt;

&lt;p&gt;At least in theory. In Boost 1.89, the library uses the &lt;a href=&quot;https://redis.io/docs/latest/commands/hello/&quot;&gt;&lt;code&gt;HELLO&lt;/code&gt;&lt;/a&gt;
command to upgrade to RESP3 (Redis defaults to the less powerful RESP2).
The command is issued as part of the reconnection loop, without user intervention.
However, systems like Tarantool DB don’t support &lt;code&gt;HELLO&lt;/code&gt; because they
don’t speak RESP2 at all, so there is nothing to upgrade from.&lt;/p&gt;

&lt;p&gt;This is part of a larger problem: users might want to run arbitrary commands
when the connection is established, to perform setup tasks.
This might include &lt;a href=&quot;https://redis.io/docs/latest/commands/auth/&quot;&gt;&lt;code&gt;AUTH&lt;/code&gt;&lt;/a&gt; to provide
credentials or &lt;a href=&quot;https://redis.io/docs/latest/commands/select/&quot;&gt;&lt;code&gt;SELECT&lt;/code&gt;&lt;/a&gt; to choose
a database index.&lt;/p&gt;

&lt;p&gt;Until now, all you could do was configure the parameters used by the &lt;code&gt;HELLO&lt;/code&gt; command.
Starting with Boost 1.90, you can run arbitrary commands at connection startup:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;// At startup, don't send any HELLO, but set up authentication and select a database
redis::request setup_request;
setup_request.push(&quot;AUTH&quot;, &quot;my_user&quot;, &quot;my_password&quot;);
setup_request.push(&quot;SELECT&quot;, 2);

redis::config cfg {
    .use_setup = true, // use the custom setup request, rather than the default HELLO command
    .setup = std::move(setup_request), // will be run every time a connection is established
};

conn.async_run(cfg, asio::detached);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This opens the door to simplifying code that uses PubSub. At the moment, such code needs
to issue a &lt;code&gt;SUBSCRIBE&lt;/code&gt; command every time a reconnection happens, which implies
some tricks around &lt;code&gt;async_receive&lt;/code&gt;. With this feature, you can just add a &lt;code&gt;SUBSCRIBE&lt;/code&gt;
command to your setup request and forget about it.&lt;/p&gt;
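
&lt;p&gt;For instance, building on the setup request shown earlier (the channel name here is made up),
a PubSub client could re-subscribe automatically on every reconnection:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;// Sketch: subscribe as part of the setup request
redis::request setup_request;
setup_request.push(&quot;HELLO&quot;, 3);                // keep RESP3, since the default HELLO is skipped
setup_request.push(&quot;SUBSCRIBE&quot;, &quot;my_channel&quot;); // runs again after every reconnection

redis::config cfg {
    .use_setup = true,
    .setup = std::move(setup_request),
};

conn.async_run(cfg, asio::detached);
// Messages can now be consumed with async_receive
&lt;/code&gt;&lt;/pre&gt;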

&lt;p&gt;This will be explored further in the coming months: &lt;code&gt;async_receive&lt;/code&gt; is currently
aware of reconnections, so it might need some extra changes to see real benefits.&lt;/p&gt;

&lt;h2 id=&quot;valkey-support&quot;&gt;Valkey support&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://valkey.io/&quot;&gt;Valkey&lt;/a&gt; is a fork of Redis v7.3. At the time of writing,
both databases are mostly interoperable in terms of protocol features, but
they are being developed separately (as happened with MySQL and MariaDB).&lt;/p&gt;

&lt;p&gt;In Boost.Redis we’ve committed to supporting both long-term
(at the moment, by deploying CI builds to test both).&lt;/p&gt;

&lt;h2 id=&quot;race-free-cancellation&quot;&gt;Race-free cancellation&lt;/h2&gt;

&lt;p&gt;It is very easy to introduce race conditions in cancellation with Asio.
Consider the following code, which is typical in libraries that
predate per-operation cancellation:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;struct connection
{
    asio::ip::tcp::socket sock;
    std::string buffer;

    struct echo_op
    {
        connection* obj;
        asio::coroutine coro{};

        template &amp;lt;class Self&amp;gt;
        void operator()(Self&amp;amp; self, error_code ec = {}, std::size_t = {})
        {
            BOOST_ASIO_CORO_REENTER(coro)
            {
                while (true)
                {
                    // Read from the socket
                    BOOST_ASIO_CORO_YIELD
                    asio::async_read_until(obj-&amp;gt;sock, asio::dynamic_buffer(obj-&amp;gt;buffer), &quot;\n&quot;, std::move(self));

                    // Check for errors; without the return, execution would
                    // fall through to the write below after completing
                    if (ec)
                    {
                        self.complete(ec);
                        return;
                    }

                    // Write back
                    BOOST_ASIO_CORO_YIELD
                    asio::async_write(obj-&amp;gt;sock, asio::buffer(obj-&amp;gt;buffer), std::move(self));

                    // Done: complete the operation and stop looping
                    self.complete(ec);
                    return;
                }
            }
        }
    };

    template &amp;lt;class CompletionToken&amp;gt;
    auto async_echo(CompletionToken&amp;amp;&amp;amp; token)
    {
        return asio::async_compose&amp;lt;CompletionToken, void(error_code)&amp;gt;(echo_op{this}, token, sock);
    }

    void cancel() { sock.cancel(); }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;There is a race condition here. &lt;code&gt;cancel()&lt;/code&gt; may actually not cancel a running &lt;code&gt;async_echo&lt;/code&gt;.
After a read or write completes, the respective handler may not be called immediately,
but queued for execution. If &lt;code&gt;cancel()&lt;/code&gt; is called within that time frame, the cancellation
will be ignored.&lt;/p&gt;

&lt;p&gt;The proper way to handle this is using per-operation cancellation, rather than a &lt;code&gt;cancel()&lt;/code&gt; method.
&lt;code&gt;async_compose&lt;/code&gt; knows about this problem and keeps state about received cancellations, so you can write:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;// Read from the socket
BOOST_ASIO_CORO_YIELD
asio::async_read_until(obj-&amp;gt;sock, asio::dynamic_buffer(obj-&amp;gt;buffer), &quot;\n&quot;, std::move(self));

// Check for errors
if (ec)
{
    self.complete(ec);
    return;
}

// Check for cancellations
if (!!(self.get_cancellation_state().cancelled() &amp;amp; asio::cancellation_type_t::terminal))
{
    self.complete(asio::error::operation_aborted);
    return;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In 1.90, the library uses this approach everywhere, so cancellation is reliable.
Keeping the &lt;code&gt;cancel()&lt;/code&gt; method is a challenge, as it involves re-wiring cancellation
slots, so I won’t show it here - but we’ve managed to do it.&lt;/p&gt;

&lt;h2 id=&quot;next-steps&quot;&gt;Next steps&lt;/h2&gt;

&lt;p&gt;I’ve got plans to keep working on Boost.Redis for a time. You can expect
more features in 1.91, like &lt;a href=&quot;https://redis.io/docs/latest/operate/oss_and_stack/management/sentinel/&quot;&gt;Sentinel&lt;/a&gt;
support and &lt;a href=&quot;https://github.com/boostorg/redis/issues/104&quot;&gt;more reliable health checks&lt;/a&gt;.&lt;/p&gt;</content><author><name></name></author><category term="ruben" /><summary type="html">I’ve really come to appreciate Boost.Redis design. With only three asynchronous primitives it exposes all the power of Redis, with features like automatic pipelining that make it pretty unique. Boost.Redis 1.90 will ship with some new exciting features that I’ll cover in this post. Cancelling requests with asio::cancel_after Boost.Redis implements a number of reliability measures, including reconnection. Suppose that you attempt to execute a request using async_exec, but the Redis server can’t be contacted (for example, because of a temporary network error). Boost.Redis will try to re-establish the connection to the failed server, and async_exec will suspend until the server is healthy again. This is a great feature if the outage is transitory. But what would happen if the Redis server is permanently down - for example, because of deployment issue that must be manually solved? The user will see that async_exec never completes. If new requests continue to be issued, the program will end up consuming an unbound amount of resources. Starting with Boost 1.90, you can use asio::cancel_after to set a timeout to your requests, preventing this from happening: // Compose your request redis::request req; req.push(&quot;SET&quot;, &quot;my_key&quot;, 42); // If the request doesn't complete within 30s, consider it as failed co_await conn.async_exec(req, redis::ignore, asio::cancel_after(30s)); For this to work, async_exec must properly support per-operation cancellation. This is tricky because Boost.Redis allows executing several requests concurrently, which are merged into a single pipeline before being sent. For the above to useful, cancelling one request shouldn’t affect other requests. In Asio parlance, async_exec should support partial cancellation, at least. 
Cancelling a request that hasn’t been sent yet is trivial - you just remove it from the queue and call it a day. Cancelling requests that are in progress is more involved. We’ve solved this by using “tombstones”. If a response encounters a tombstone, it will get ignored. This way, cancelling async_exec has always an immediate effect, but the connection is kept in a well-defined state. Custom setup requests Redis talks the RESP3 protocol. But it’s not the only database system that speaks it. We’ve recently learnt that other systems, like Tarantool DB, are also capable of speaking RESP3. This means that Boost.Redis can be used to interact with these systems. At least in theory. In Boost 1.89, the library uses the HELLO command to upgrade to RESP3 (Redis’ default is using the less powerful RESP2). The command is issued as part of the reconnection loop, without user intervention. It happens that systems like Tarantool DB don’t support HELLO because they don’t speak RESP2 at all, so there is nothing to upgrade. This is part of a larger problem: users might want to run arbitrary commands when the connection is established, to perform setup tasks. This might include AUTH to provide credentials or SELECT to choose a database index. Until now, all you could do is configure the parameters used by the HELLO command. Starting with Boost 1.90, you can run arbitrary commands at connection startup: // At startup, don't send any HELLO, but set up authentication and select a database redis::request setup_request; setup_request.push(&quot;AUTH&quot;, &quot;my_user&quot;, &quot;my_password&quot;); setup_request.push(&quot;SELECT&quot;, 2); redis::config cfg { .use_setup = true, // use the custom setup request, rather than the default HELLO command .setup = std::move(setup_request), // will be run every time a connection is established }; conn.async_run(cfg, asio::detached); This opens the door simplifying code using PubSub. 
At the moment, such code needs to issue a SUBSCRIBE command every time a reconnection happens, which implies some tricks around async_receive. With this feature, you can just add a SUBSCRIBE command to your setup request and forget. This will be further explored in the next months, since async_receive is currently aware of reconnections, so it might need some extra changes to see real benefits. Valkey support Valkey is a fork from Redis v7.3. At the time of writing, both databases are mostly interoperable in terms of protocol features, but they are being developed separately (as happened with MySQL and MariaDB). In Boost.Redis we’ve committed to supporting both long-term (at the moment, by deploying CI builds to test both). Race-free cancellation It is very easy to introduce race conditions in cancellation with Asio. Consider the following code, which is typical in libraries that predate per-operation cancellation: struct connection { asio::ip::tcp::socket sock; std::string buffer; struct echo_op { connection* obj; asio::coroutine coro{}; template &amp;lt;class Self&amp;gt; void operator()(Self&amp;amp; self, error_code ec = {}, std::size_t = {}) { BOOST_ASIO_CORO_REENTER(coro) { while (true) { // Read from the socket BOOST_ASIO_CORO_YIELD asio::async_read_until(obj-&amp;gt;sock, asio::dynamic_buffer(obj-&amp;gt;buffer), &quot;\n&quot;, std::move(self)); // Check for errors if (ec) self.complete(ec); // Write back BOOST_ASIO_CORO_YIELD asio::async_write(obj-&amp;gt;sock, asio::buffer(obj-&amp;gt;buffer), std::move(self)); // Done self.complete(ec); } } } }; template &amp;lt;class CompletionToken&amp;gt; auto async_echo(CompletionToken&amp;amp;&amp;amp; token) { return asio::async_compose&amp;lt;CompletionToken, void(error_code)&amp;gt;(echo_op{this}, token, sock); } void cancel() { sock.cancel(); } }; There is a race condition here. cancel() may actually not cancel a running async_echo. 
After a read or write completes, the respective handler may not be called immediately, but queued for execution. If cancel() is called within that time frame, the cancellation will be ignored. The proper way to handle this is using per-operation cancellation, rather than a cancel() method. async_compose knows about this problem and keeps state about received cancellations, so you can write: // Read from the socket BOOST_ASIO_CORO_YIELD asio::async_read_until(obj-&amp;gt;sock, asio::dynamic_buffer(obj-&amp;gt;buffer), &quot;\n&quot;, std::move(self)); // Check for errors if (ec) self.complete(ec); // Check for cancellations if (!!(self.get_cancellation_state().cancelled() &amp;amp; asio::cancellation_type_t::terminal)) self.complete(asio::error::operation_aborted); In 1.90, the library uses this approach everywhere, so cancellation is reliable. Keeping the cancel() method is a challenge, as it involves re-wiring cancellation slots, so I won’t show it here - but we’ve managed to do it. Next steps I’ve got plans to keep working on Boost.Redis for a time. You can expect more features in 1.91, like Sentinel support and more reliable health checks.</summary></entry><entry><title type="html">Decimal Goes Back to Review</title><link href="http://cppalliance.org/matt/2025/10/06/Matts2025Q3Update.html" rel="alternate" type="text/html" title="Decimal Goes Back to Review" /><published>2025-10-06T00:00:00+00:00</published><updated>2025-10-06T00:00:00+00:00</updated><id>http://cppalliance.org/matt/2025/10/06/Matts2025Q3Update</id><content type="html" xml:base="http://cppalliance.org/matt/2025/10/06/Matts2025Q3Update.html">&lt;p&gt;We are excited to announce that the Decimal (&lt;a href=&quot;https://github.com/cppalliance/decimal&quot;&gt;https://github.com/cppalliance/decimal&lt;/a&gt;) library is going back to review for inclusion in Boost from 06 to 15 October.
In preparation, we have made quite a few changes since the first review ended without a verdict about 9 months ago:&lt;/p&gt;

&lt;p&gt;Breaking Changes:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Based on bitwise comparisons with other similar libraries and database software, we have changed the internal encoding of our IEEE 754-compliant types&lt;/li&gt;
  &lt;li&gt;We spent about 3 months optimizing just the back-end integer types, which are now used throughout the library and as the internals of &lt;code&gt;decimal128_t&lt;/code&gt;&lt;/li&gt;
  &lt;li&gt;We have changed the type names to better match conventions:
    &lt;ul&gt;
      &lt;li&gt;&lt;code&gt;decimalXX&lt;/code&gt; is now &lt;code&gt;decimalXX_t&lt;/code&gt;&lt;/li&gt;
      &lt;li&gt;&lt;code&gt;decimalXX_fast&lt;/code&gt; is now &lt;code&gt;decimal_fastXX_t&lt;/code&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;The headers have been similarly renamed (e.g. &lt;code&gt;decimal32.hpp&lt;/code&gt; -&amp;gt; &lt;code&gt;decimal32_t.hpp&lt;/code&gt;), and, based on feedback from the first review, they can now be included independently instead of requiring the monolithic header&lt;/li&gt;
  &lt;li&gt;Constructors have been simplified to reduce confusion (no more double negative logic)&lt;/li&gt;
  &lt;li&gt;The default rounding mode has changed to align with IEEE 754, with rounding bugs being squashed across the modes as well&lt;/li&gt;
&lt;/ul&gt;
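
&lt;p&gt;In code, the renames above amount to something like this (a hypothetical snippet; the
variable names are assumed for illustration):&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &amp;lt;boost/decimal/decimal32_t.hpp&amp;gt; // was decimal32.hpp; usable standalone now

// was: boost::decimal::decimal32 d;
boost::decimal::decimal32_t d {};

// was: boost::decimal::decimal32_fast f;
boost::decimal::decimal_fast32_t f {};
&lt;/code&gt;&lt;/pre&gt;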

&lt;p&gt;Other Changes:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;The documentation content has been overhauled thanks to feedback from Peter Turcan and others during the first review&lt;/li&gt;
  &lt;li&gt;The docs are no longer a single long Asciidoc page; we have moved to Antora. Thanks to Joaquín and Christian for making it trivial to copy the setup from Unordered.
    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;https://develop.decimal.cpp.al/&quot;&gt;https://develop.decimal.cpp.al/&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;We now support formatting with {fmt}&lt;/li&gt;
  &lt;li&gt;Benchmarks have been expanded to include GCC’s &lt;code&gt;_DecimalXX&lt;/code&gt; types and Intel’s libbid. I think people should be pleased with the results now, since performance was a huge point of contention at the end of the review&lt;/li&gt;
  &lt;li&gt;We have added support for CMake pkg config for ease of use&lt;/li&gt;
  &lt;li&gt;Every post-review issue that John was kind enough to consolidate and open has been addressed: &lt;a href=&quot;https://github.com/cppalliance/decimal/issues?q=is%3Aissue%20state%3Aclosed%20label%3A%22Boost%20Review%22&quot;&gt;closed “Boost Review” issues&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Continued Developments:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;I think the only unaddressed comment from the first review is support for hardware decimal floating-point types.
There are a few rarer architectures, such as POWER10, that have native decimal floating-point units.
Is it possible to fully integrate these native types for use in the library?
Armed with a compiler farm account I have begun developing a wrapper around the native types that seems to work.
Stay tuned.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;One item that we have considered, but have not yet put any effort into, is getting the library running on CUDA platforms.
If this is a feature that you are interested in, please let us know!&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;As is always the case with Boost reviews, regardless of the outcome I am sure that we will receive lots of feedback on how to improve the library.&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are interested, much of the contents of the first review can be found in the original thread on the boost mailing list archive: https://lists.boost.org/archives/list/boost@lists.boost.org/thread/AGFOQZMJ4HKKQ5C5XDDKNJ3VJL72YTWL/&lt;/p&gt;</content><author><name></name></author><category term="matt" /><summary type="html">We are excited to announce that the Decimal (https://github.com/cppalliance/decimal) library is going back to review for inclusion in Boost from 06 to 15 October. In preparation for this we have made quite a few changes since the indeterminate end of the first review about 9 months ago: Breaking Changes: Based on bitwise comparisons with other similar libraries and database software, we have changed the internal encoding of our IEEE 754-compliant types We spent about 3 months optimizing just back end integer types that are now used throughout the library, and as the internals of decimal128_t We have changed the type names to better match conventions: decimalXX is now decimalXX_t decimalXX_fast is now decimal_fastXX_t The headers have been similarly renamed (e.g. decimal32.hpp -&amp;gt; decimal32_t.hpp), and can now be used independently instead of requiring the monolith based on feedback in Review Constructors have been simplified to reduce confusion (no more double negative logic) The default rounding mode has changed to align with IEEE 754, with rounding bugs being squashed across the modes as well Other Changes: The documenation content has been overhauled thanks to feedback from Peter Turcan and others during the first review The docs are no longer a single long page of Asciidoc; we have moved to Antora. Thanks to Joaquín and Christian for making it trivial to copy from Unordered to make that happen. https://develop.decimal.cpp.al/ We now support formatting with {fmt} Benchmarks have been expanded to include GCC _DecimalXX types, and Intel’s libbid. 
I think people should be pleased with the results now, since that was a huge point of contention at the end of the review We have added support for CMake pkg config for ease of use Every post-review issue John was kind enough to consolidate and open have been addressed: https://github.com/cppalliance/decimal/issues?q=is%3Aissue%20state%3Aclosed%20label%3A%22Boost%20Review%22 Continued Developments: I think the only unaddressed comment from the first review is support for hardware decimal floating point types. There are a few rarer architectures that have native decimal floating point units like POWER10. Is it possible to fully integrate these native types for use in the library? Armed with a compiler farm account I have begun developing a wrapper around the native types that seems to work. Stay tuned. One item that we have considered, but have not put any effort into yet would be getting the library running on CUDA platforms. If this is a feature that you are interested in, please let us know! As is always the case with Boost reviews, regardless of the outcome I am sure that we will receive lots of feedback on how to improve the library. If you are interested, much of the contents of the first review can be found in the original thread on the boost mailing list archive: https://lists.boost.org/archives/list/boost@lists.boost.org/thread/AGFOQZMJ4HKKQ5C5XDDKNJ3VJL72YTWL/</summary></entry></feed>