Solutions Archive
This page collects the formally verified solutions submitted to VSComp across all four competition editions. You will find year-by-year groupings, notes on verification approaches used, and guidance on how to cite this work.

What Counts as a Solution
In the context of VSComp, a "solution" is more than working code. A complete solution typically consists of three interrelated artifacts: an implementation (source code annotated with specifications), a formal specification (preconditions, postconditions, invariants, or protocol-level properties), and a machine-checkable proof that the implementation satisfies the specification.
The precise format varies depending on the verification tool used. A Dafny solution, for instance, combines implementation and proof in a single annotated file. A Coq solution might separate the specification into a module signature and provide a proof script in a separate file. An Event-B solution uses a refinement chain. The competition does not prescribe a single format; what matters is that the proof is mechanically checkable.
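To make the three artifacts concrete, the following minimal Dafny sketch (an illustration written for this page, not an actual submission) carries all of them in one annotated file: the ensures clauses are the specification, the method body is the implementation, and the verifier's acceptance of the file constitutes the proof.

    // illustrative sketch, not a competition submission
    method Max(a: int, b: int) returns (m: int)
      // specification: the result bounds both inputs and is one of them
      ensures m >= a && m >= b
      ensures m == a || m == b
    {
      // implementation; the verifier checks it against the ensures clauses,
      // and that check is the machine-checkable proof
      if a >= b {
        m := a;
      } else {
        m := b;
      }
    }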
Solutions are grouped below by competition year. Within each year, problems are listed in order, with a brief note on the verification approaches that participants employed. For the full problem descriptions, see the problems catalogue.
Solution Formats and Reproducibility
Submitted solutions have arrived in a variety of formats over the years. Common submission formats include annotated source files (for tools like Dafny, Why3, and SPARK), proof scripts (for Coq, Isabelle/HOL, and Lean), and model files (for TLA+, Event-B, and KeYmaera X). Some submissions also include build scripts, Makefiles, or README files documenting dependencies.
Reproducibility is an important consideration but also a practical challenge. Verification tools evolve between competition years, and a proof that checked successfully under, say, Dafny 2.3 may require minor adjustments before it verifies under a later version. Where known, we note the tool version used by the original submitter. Readers attempting to reproduce results should expect that some adaptation may be necessary, particularly for older submissions.
The competition has not imposed a single archival format. This reflects the reality of the field: the formal-methods community uses dozens of tools and proof assistants, each with its own file conventions. Standardisation efforts exist but are outside the scope of what a competition can mandate.
VSComp 2010
The inaugural edition attracted solutions in a range of verification tools and languages. Most submissions targeted the sorting and array-partitioning problems, which were the most accessible. The BST and linked-list challenges received fewer but more technically ambitious entries.
P1-2010 Sorting Verification
Solutions ranged from Hoare-logic–based annotations in tools like Dafny and Why3 to fully mechanised proofs in Coq and Isabelle/HOL. Key differences lay in how permutation properties were encoded.
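One common encoding, sketched here in Dafny purely for illustration (the predicate names are ours, not those of any submission), avoids an explicit bijection by stating the permutation property as multiset equality:

    // illustrative sketch, not a competition submission
    predicate Sorted(s: seq<int>)
    {
      forall i, j :: 0 <= i < j < |s| ==> s[i] <= s[j]
    }

    // "b is a sorted rearrangement of a": multiset equality expresses the
    // permutation property without constructing an explicit bijection
    predicate SortedPermutationOf(b: seq<int>, a: seq<int>)
    {
      Sorted(b) && multiset(b) == multiset(a)
    }

An alternative is to carry a ghost index mapping and prove it bijective; the multiset encoding trades that explicitness for simpler proof obligations.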
P2-2010 Binary Search Tree Invariants
Most solutions used inductive proofs over the tree structure. Some participants chose to formalise a balanced variant, while others focused on unbalanced BSTs and addressed only the ordering invariant.
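For readers unfamiliar with how such invariants are phrased, here is one way to state the ordering invariant inductively in Dafny (an illustrative sketch, not drawn from any submission):

    // illustrative sketch, not a competition submission
    datatype Tree = Leaf | Node(left: Tree, value: int, right: Tree)

    // the values stored in a tree
    function Elements(t: Tree): multiset<int>
    {
      match t
      case Leaf => multiset{}
      case Node(l, v, r) => Elements(l) + multiset{v} + Elements(r)
    }

    // the ordering invariant, stated inductively over the structure:
    // everything in the left subtree is smaller than the root value,
    // everything in the right subtree is larger, and both subtrees are BSTs
    predicate IsBST(t: Tree)
    {
      match t
      case Leaf => true
      case Node(l, v, r) =>
        (forall x :: x in Elements(l) ==> x < v) &&
        (forall x :: x in Elements(r) ==> v < x) &&
        IsBST(l) && IsBST(r)
    }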
P3-2010 Array Partitioning
A straightforward Hoare-logic exercise for most tools. Submissions differed mainly in how they specified the partition predicate and handled equal-to-pivot elements.
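A typical shape for the partition predicate, sketched in Dafny for illustration (names are ours), makes the treatment of equal-to-pivot elements explicit by allowing them on either side:

    // illustrative sketch, not a competition submission:
    // everything before the split index is at most the pivot, everything
    // from the split index on is at least the pivot; elements equal to the
    // pivot may therefore end up on either side
    predicate Partitioned(s: seq<int>, split: int, pivot: int)
      requires 0 <= split <= |s|
    {
      (forall i :: 0 <= i < split ==> s[i] <= pivot) &&
      (forall i :: split <= i < |s| ==> pivot <= s[i])
    }

A three-way variant, with strict inequalities on both sides and a middle region equal to the pivot, is an alternative formulation.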
P4-2010 Linked List Reversal
Separation logic–based approaches proved popular here, since reasoning about pointer aliasing is central to the problem. Some participants used frame-rule–based tools; others employed ghost state.
P5-2010 Maximum Subarray Sum
Kadane's algorithm was the typical target. Solutions varied in how they expressed the invariant relating the current-maximum and global-maximum variables.
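As an illustration of the specification such solutions target (a Dafny sketch written for this page, not taken from a submission), the result must be the sum of some contiguous segment and at least the sum of every segment; the loop invariant then relates the current-maximum and global-maximum variables to prefix versions of this property.

    // illustrative sketch, not a competition submission
    // sum of the segment s[lo..hi)
    function Sum(s: seq<int>, lo: int, hi: int): int
      requires 0 <= lo <= hi <= |s|
      decreases hi - lo
    {
      if lo == hi then 0 else s[lo] + Sum(s, lo + 1, hi)
    }

    // what a Kadane-style implementation must establish about its result:
    // it is the sum of some contiguous segment, and no segment does better
    // (the empty segment is allowed, so the result is never negative)
    predicate IsMaxSegmentSum(s: seq<int>, result: int)
    {
      (exists lo, hi :: 0 <= lo <= hi <= |s| && result == Sum(s, lo, hi)) &&
      (forall lo, hi :: 0 <= lo <= hi <= |s| ==> Sum(s, lo, hi) <= result)
    }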
VSComp 2014
The 2014 edition introduced concurrency and refinement challenges that required fundamentally different proof techniques. Participation was somewhat smaller for the hardest problems, but the solutions submitted were notably more sophisticated than those from 2010.
P1-2014 Concurrent Queue
Linearisability proofs formed the core of most submissions. Participants used tools ranging from CIVL to custom Owicki-Gries encodings. Submissions diverged chiefly over where to place the linearisation point.
P2-2014 Refinement Mapping
Solutions typically constructed forward simulations. Several submissions used TLA+ or Event-B to express the refinement relation and discharged proof obligations mechanically.
P3-2014 Monotonic Counter
A more accessible concurrency challenge. Most solutions relied on atomic increment semantics and a global invariant stating monotonicity. Some participants went further and proved lock-freedom.
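Dafny has no native concurrency, but the sequential core of the obligation can be sketched as follows (illustrative only); the concurrent proofs generalise this to arbitrary interleavings of increments.

    // illustrative sketch, not a competition submission
    class Counter {
      var value: nat

      // the monotonicity obligation: an operation may only move the counter
      // forward; under atomic increment semantics this becomes a global
      // invariant over all interleavings
      method Increment()
        modifies this
        ensures value == old(value) + 1
        ensures value >= old(value)
      {
        value := value + 1;
      }
    }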
P4-2014 Priority Queue Correctness
Heap-based implementations dominated, with proofs establishing the heap invariant inductively over insert and extract-min operations. A few submissions verified a binomial-heap variant.
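The heap invariant itself is compact; in Dafny it can be phrased over the standard array embedding like this (an illustrative sketch, not from a submission):

    // illustrative sketch, not a competition submission:
    // min-heap ordering for the usual array embedding, in which the children
    // of index i sit at 2*i + 1 and 2*i + 2, so the parent of i > 0 is (i-1)/2
    predicate MinHeap(s: seq<int>)
    {
      forall i :: 1 <= i < |s| ==> s[(i - 1) / 2] <= s[i]
    }

The inductive work in such proofs lies in showing that the insert and extract-min operations re-establish this predicate.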
P5-2014 State Machine Equivalence
The 2014 problem that drew the fewest participants, a measure of its difficulty. Successful solutions constructed explicit bisimulation relations and proved the transfer conditions step by step.
VSComp 2015
The 2015 edition tested verification at a systems level. Solutions were generally longer and more complex, reflecting the realistic nature of the challenge problems. Reproducibility became a significant consideration, as several solutions depended on specific tool versions.
P1-2015 Text Editor Buffer
Gap-buffer implementations required careful invariant management for cursor position and buffer contents. Separation-logic tools proved particularly well-suited, though several Dafny submissions also succeeded.
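A minimal model of the data structure and its abstraction (a Dafny sketch with names of our choosing, not taken from any submission) shows the kind of invariant involved: the logical text and cursor position are recovered from the two halves around the gap, and every editing operation must be shown to update them coherently.

    // illustrative sketch, not a competition submission
    datatype GapBuffer = GapBuffer(left: seq<char>, right: seq<char>)

    // the logical text abstracts the gap away entirely
    function Text(b: GapBuffer): seq<char>
    {
      b.left + b.right
    }

    // the cursor sits at the gap, i.e. just after the left half
    function Cursor(b: GapBuffer): nat
    {
      |b.left|
    }

    // inserting at the cursor grows the left half; the postconditions say the
    // character lands exactly at the cursor and the cursor advances past it
    function InsertAtCursor(b: GapBuffer, c: char): GapBuffer
      ensures Text(InsertAtCursor(b, c)) == b.left + [c] + b.right
      ensures Cursor(InsertAtCursor(b, c)) == Cursor(b) + 1
    {
      GapBuffer(b.left + [c], b.right)
    }

Cursor moves and deletions need analogous postconditions, which is where the invariant management described above comes in.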
P2-2015 DNS Server Components
This systems-level challenge tested the boundaries of most verification tools. Successful approaches decomposed the problem into modular specifications for parsing, lookup, and response construction.
P3-2015 Queue with Amortized Complexity
Dual-stack queue proofs required both functional correctness and a potential-function argument. Several participants formalised the banker's method; others adapted the physicist's method.
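To fix ideas, here is a Dafny sketch (ours, not a submission) of the two-list representation, its abstraction function, and a banker's-method potential; the amortised argument charges each enqueue enough credit to pay for its element's later move during a reversal.

    // illustrative sketch, not a competition submission
    datatype Queue = Queue(front: seq<int>, back: seq<int>)

    function Reverse(s: seq<int>): seq<int>
      decreases |s|
    {
      if |s| == 0 then [] else Reverse(s[1..]) + [s[0]]
    }

    // abstraction function: the logical contents are the front list followed
    // by the reversed back list
    function ToSeq(q: Queue): seq<int>
    {
      q.front + Reverse(q.back)
    }

    // banker's-method potential: the length of the back list, which pays for
    // the eventual reversal once the front list runs empty
    function Potential(q: Queue): nat
    {
      |q.back|
    }

The functional-correctness half of such a proof relates every operation to ToSeq; the complexity half shows that actual cost plus the change in Potential is bounded by a constant per operation.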
P4-2015 Log-Structured Storage
Crash consistency was the distinguishing element. Submissions typically modelled an abstract log and showed that append and compaction operations preserved a representation invariant even under interruption.
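A toy model makes the proof obligation concrete (a Dafny sketch with invented names, far simpler than any real submission): the durable state is the prefix of the log that survived, the abstract state is the map recovered by replaying it, and append must be shown to change that map at exactly one key.

    // illustrative sketch, not a competition submission
    datatype Entry = Entry(key: int, value: int)

    // the abstract view: the last-writer-wins map recovered by replaying the log
    function View(log: seq<Entry>): map<int, int>
      decreases |log|
    {
      if |log| == 0 then map[]
      else View(log[..|log| - 1])[log[|log| - 1].key := log[|log| - 1].value]
    }

    // appending an entry updates exactly one key in the recovered view
    lemma AppendUpdatesView(log: seq<Entry>, e: Entry)
      ensures View(log + [e]) == View(log)[e.key := e.value]
    {
      assert (log + [e])[..|log|] == log;
    }

A crash-consistency argument additionally shows that replaying any surviving prefix of the log yields a view reachable by some sequence of completed operations, which is where the interruption cases enter.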
P5-2015 String Matching
Knuth-Morris-Pratt and naive algorithms were both popular targets. The key difficulty lay in the completeness argument — showing that no occurrence is missed.
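The specification itself is easy to state; the Dafny sketch below (illustrative, with our own names) shows the occurrence predicate and the soundness-plus-completeness condition on a first-match result, and the difficulty lay in discharging the completeness half.

    // illustrative sketch, not a competition submission
    // an occurrence of the pattern starting at position i of the text
    predicate OccursAt(pattern: string, text: string, i: int)
    {
      0 <= i && i + |pattern| <= |text| && text[i .. i + |pattern|] == pattern
    }

    // soundness: the reported index is a real occurrence;
    // completeness: no earlier occurrence was missed
    // (the no-match case would be specified separately)
    predicate FirstMatch(pattern: string, text: string, result: int)
    {
      OccursAt(pattern, text, result) &&
      (forall j :: 0 <= j < result ==> !OccursAt(pattern, text, j))
    }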
VSComp 2017
The most recent edition tackled domains that had previously been considered out of scope for online competitions: hybrid systems, security protocols, and specification meta-reasoning. Solution quality was high, though some problems received very few submissions due to their difficulty.
P1-2017 Hybrid Controller Safety
KeYmaera X was the dominant tool for this problem, though some participants used custom differential invariant proofs. The main challenge was identifying a sufficiently strong continuous invariant.
P2-2017 Secure Message Passing
ProVerif and Tamarin were popular choices. Solutions formalised the adversary model explicitly and proved authentication and secrecy properties against active attackers.
P3-2017 Specification Completeness Check
A meta-level challenge that required participants to analyse a partial specification, identify gaps, and provide a completed version with justification. Approaches varied widely.
P4-2017 Distributed Consensus Round
TLA+ and Ivy were used by several participants. Agreement and validity properties were the primary targets, with the fault model (crash failures) carefully formalised.
P5-2017 Verified Compiler Pass
CompCert-style forward simulations were the most common technique. Participants defined a small-step operational semantics and showed that the optimisation pass preserved observable behaviour.
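CompCert's actual proofs work with small-step semantics and simulation diagrams; as a much-reduced Dafny illustration (using a big-step evaluator as a stand-in for the operational semantics, and names of our choosing), the following sketch proves that a constant-folding pass over a toy expression language preserves evaluation.

    // illustrative sketch, not a competition submission
    datatype Expr = Const(n: int) | Add(e1: Expr, e2: Expr)

    function Eval(e: Expr): int
    {
      match e
      case Const(n) => n
      case Add(a, b) => Eval(a) + Eval(b)
    }

    // a toy optimisation pass: fold additions of two constants
    function FoldConstants(e: Expr): Expr
    {
      match e
      case Const(n) => Const(n)
      case Add(a, b) =>
        var fa, fb := FoldConstants(a), FoldConstants(b);
        if fa.Const? && fb.Const? then Const(fa.n + fb.n) else Add(fa, fb)
    }

    // semantic preservation, by structural induction on the expression
    lemma FoldPreservesEval(e: Expr)
      ensures Eval(FoldConstants(e)) == Eval(e)
    {
      if e.Add? {
        FoldPreservesEval(e.e1);
        FoldPreservesEval(e.e2);
      }
    }

A real pass over a language with state and control flow replaces this single lemma with a simulation relation between source and target executions, preserved step by step.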
How Solutions Are Reviewed
Submitted solutions are evaluated by the organising committee against published criteria. The three main dimensions are correctness (does the proof actually establish the specified properties?), completeness (does the solution address all aspects of the problem, or only a subset?), and clarity (is the proof structured in a way that a knowledgeable reader can follow?).
The relative weight of these criteria has shifted over time. In 2010, correctness was the primary concern. By 2017, completeness and clarity received more emphasis, reflecting the community's growing recognition that a verified artifact is only as useful as its readability allows. The rules page documents the evaluation framework in detail.
If you reference VSComp solutions in academic work, we recommend citing both the competition edition and the specific problem. A suggested format:
Verified Software Competition (VSComp), [Year] Edition.
Problem [ID]: [Title].
Available at: https://vscomp.org/solutions/
For published papers and reports related to VSComp, consult the publications page, which includes structured citation entries for each documented competition report. If you are citing a specific participant's solution, include the participant name and tool used where known.
Trends in Submitted Solutions
Looking across all four editions, several patterns emerge. The diversity of verification tools used has increased over time: the 2010 edition was dominated by a handful of tools, while 2017 saw submissions using more than a dozen distinct verification frameworks. This reflects both the maturation of newer tools and the broadening of the competition's problem domains.
Solution length has also grown. Early problems like sorting verification could be solved in a few dozen annotated lines. The systems-level challenges of 2015 and the hybrid-systems problems of 2017 regularly produced solutions spanning hundreds of lines of specification and proof. Whether this growth indicates increasing ambition, increasing tool overhead, or both is a matter of ongoing discussion in the community.
For a detailed look at how each edition's problems were designed and what domains they targeted, see the problems catalogue and the competition years index.
View the full problem catalogue with domain tags, difficulty ratings, and artifact-type filters.
The years index provides a timeline of every edition, with links to year-specific hubs.
Academic reports and papers related to VSComp are listed on the publications page.