Preprints
Preprints play an increasingly important role in how the formal methods community communicates research findings. This page discusses what preprints are, why they matter for the verification research community, and how competition results — including those from VSComp — connect to the broader academic publication landscape. For published reports associated with the competition, see the publications page.

What Are Preprints?
A preprint is a version of a scholarly paper that is shared publicly before it has undergone formal peer review. The practice has a long history in mathematics and physics, where researchers have circulated preliminary versions of their work for decades. In computer science — and in formal methods in particular — the preprint culture has grown substantially over the past fifteen years, driven by the recognition that the traditional publication cycle is often too slow to keep pace with the rate of progress in the field.
Preprints are not unfinished work, though they may evolve before final publication. They are typically complete research contributions that have been written up to a standard the authors consider suitable for community scrutiny. What distinguishes them from published papers is the absence of formal peer review — the feedback loop that conference and journal publication provides. In practice, many preprints are later published in peer-reviewed venues, sometimes with revisions prompted by community feedback received during the preprint phase.
Preprints in the Formal Methods Community
The formal methods community, like much of computer science, has traditionally communicated research through conference proceedings and journal articles. Conferences such as VSTTE, CAV (Computer Aided Verification), TACAS (Tools and Algorithms for the Construction and Analysis of Systems), and FM (Formal Methods) serve as primary venues for presenting new work. The peer-review process at these venues ensures a baseline of quality and provides authors with expert feedback.
However, the conference cycle imposes delays. A paper submitted in March may not appear in proceedings until October or later. Preprint servers address this gap by allowing researchers to make their work available as soon as it is ready, while the formal review process runs in parallel. This is particularly valuable in areas where tool development moves quickly — a new verification technique demonstrated at a competition in June can be described in a preprint shared the same month, rather than waiting for the next conference deadline.
The most widely used preprint server in the formal methods community is arXiv, maintained by Cornell University. The Logic in Computer Science category (cs.LO) on arXiv hosts a substantial and growing collection of papers on formal verification, theorem proving, model checking, and related topics. Researchers working on verification tools, proof techniques, and specification languages regularly post their work there, making it a valuable resource for anyone following developments in the field.
How Competition Results Relate to Published Research
Verification competitions generate data — about tool capabilities, about specification challenges, about the practical limits of current techniques. This data has research value beyond the competition itself. Participants who develop novel approaches to competition problems often write up their methods as research contributions, and those contributions frequently appear first as preprints before being submitted to conferences or journals.
In the context of VSComp, this connection takes several forms. A participant might use a competition problem as a case study for a new verification technique, demonstrating its strengths and limitations on a well-defined challenge. A tool developer might describe improvements to their system that were motivated by difficulties encountered during the competition. An organiser might analyse the results across multiple editions to identify trends in tool effectiveness or specification quality.
The solutions archive documents the approaches taken by participants in each edition. Where those approaches have been written up as research papers — whether published or as preprints — the corresponding entries note the connection. This cross-referencing helps readers trace the path from a competition entry to the underlying research contribution.
The Value of Open Access in Verification Research
Formal verification is a field where reproducibility is paramount. A proof is either valid or it is not, and the community expects to be able to check published claims by running the tools and examining the proof artefacts. Open access to research papers — including early preprint versions — supports this expectation by ensuring that researchers, students, and practitioners can read the work regardless of whether their institution subscribes to a particular journal or conference proceedings.
This is not a trivial concern. Many formal-methods papers appear in conference proceedings published by Springer (in the Lecture Notes in Computer Science series), ACM, or IEEE. Access to these proceedings typically requires an institutional subscription or a per-article payment. Preprints provide an alternative path: authors can post their work on a preprint server, making it freely available to anyone, while the version of record remains behind a paywall. Most publishers now permit this practice, and many funding agencies require it.
For the VSComp community, open access has practical significance. Participants preparing for the competition benefit from being able to read the latest research on verification techniques without access barriers. Tool developers benefit from being able to study how competing tools approach the same problems. And students — who may lack institutional access to major publishers — benefit from being able to engage with the literature of the field at no cost.
Where Researchers Share Preliminary Findings
Beyond arXiv, formal methods researchers share preliminary work through several channels. Institutional repositories maintained by universities and research labs host technical reports and working papers. Personal and research-group websites often include preprint versions of papers alongside the published versions. Some researchers also disseminate early drafts through academic social networks such as ResearchGate.
Conference workshops also function as a preprint-like venue. Workshop papers are typically shorter than full conference papers, undergo lighter review, and are published more quickly. Several VSComp reports have appeared as workshop papers at VSTTE events, providing a rapid publication channel for competition results while the more detailed analysis appears later in a full paper or journal article.
For readers interested in the intersection of verification competitions and current research, the most productive approach is to follow the relevant arXiv categories, monitor the proceedings of the major formal-methods conferences, and consult the publications page for reports directly associated with VSComp editions. Together, these sources provide a comprehensive picture of how competition results feed into and draw from the broader research landscape.
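Following an arXiv category can also be automated. The sketch below uses arXiv's public Atom API to build a query for the newest cs.LO preprints and parse the resulting feed; the endpoint and query syntax follow arXiv's documented API, but the feed snippet used for the offline demonstration is a made-up illustration, not a real paper.

```python
# Sketch: querying arXiv's public Atom API for recent cs.LO preprints.
# The query URL format follows arXiv's API documentation; the sample feed
# below is hypothetical so the parsing step can run without network access.
import urllib.parse
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def arxiv_query_url(category="cs.LO", max_results=10):
    """Build the arXiv API URL for the newest preprints in a category."""
    params = {
        "search_query": f"cat:{category}",
        "sortBy": "submittedDate",
        "sortOrder": "descending",
        "max_results": str(max_results),
    }
    return "http://export.arxiv.org/api/query?" + urllib.parse.urlencode(params)

def parse_feed(atom_xml):
    """Extract (title, abstract URL) pairs from an Atom feed string."""
    root = ET.fromstring(atom_xml)
    entries = []
    for entry in root.iter(ATOM_NS + "entry"):
        title = entry.find(ATOM_NS + "title").text.strip()
        link = entry.find(ATOM_NS + "id").text.strip()
        entries.append((title, link))
    return entries

# Offline demonstration with a single hypothetical feed entry:
sample = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry>
    <id>http://arxiv.org/abs/0000.00000v1</id>
    <title>A Hypothetical Paper on Deductive Verification</title>
  </entry>
</feed>"""

print(arxiv_query_url())
print(parse_feed(sample))
```

In practice one would fetch `arxiv_query_url()` with an HTTP client and pass the response body to `parse_feed`; keeping the two steps separate makes the parsing testable without a network connection.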
Reading Preprints Effectively
Because preprints have not undergone formal peer review, readers should approach them with appropriate care. This does not mean treating them as unreliable — in practice, the quality of preprints in formal methods is generally high, and many are indistinguishable from the eventually published version. But it does mean being aware that the work may be revised, that errors may not yet have been caught, and that the conclusions may be refined in the light of subsequent feedback.
A useful practice is to check whether a preprint has since been published in a peer-reviewed venue. If so, the published version is the authoritative reference. If not, the preprint stands on its own merits, and the reader must evaluate the claims based on the evidence presented — which, in formal methods, often includes machine-checkable proof artefacts that provide their own form of validation independent of the review process.
The publications page lists formally published reports and proceedings associated with each documented edition of VSComp.
The solutions archive shows how participants approached competition challenges, with references to associated research where available.
The about page describes the competition's mission, community, and institutional context within the formal methods landscape.