Competition Years

The Verified Software Competition has been held in four main editions between 2010 and 2017, each exploring different facets of formal software verification. This page traces that history and links to the detailed hub page for every year.


Four Editions, One Mission

Since its founding in 2010, VSComp has served as a proving ground for formal verification techniques applied to real software challenges. The competition is organised periodically rather than annually — each edition appears when the community has refined its tools, identified new problem domains, and gathered enough momentum to make a meaningful comparison of approaches.

Every edition follows a broadly similar structure. Organisers publish a set of challenge problems drawn from specific verification domains. Participants then submit formal solutions — typically consisting of annotated source code, specifications, and machine-checkable proofs — within a defined submission window. Solutions are evaluated against published criteria that emphasise correctness, completeness, and clarity.
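
As a flavour of what such a submission contains, here is a minimal sketch in Lean 4 (our own toy, not an actual competition problem; VSComp prescribes no particular tool, and the names myMax, myMax_ge_left, and so on are hypothetical). The theorems are the specification, and the fact that Lean's kernel accepts the file is what makes the proof machine-checkable. This assumes a recent Lean 4 toolchain, where the omega tactic ships with the core library.

    -- A tiny "verified program": an implementation plus a checked specification.
    def myMax (a b : Nat) : Nat :=
      if a ≤ b then b else a

    -- Spec, part 1: the result is an upper bound on both arguments.
    theorem myMax_ge_left (a b : Nat) : a ≤ myMax a b := by
      unfold myMax
      split <;> omega

    theorem myMax_ge_right (a b : Nat) : b ≤ myMax a b := by
      unfold myMax
      split <;> omega

    -- Spec, part 2: the result is always one of the arguments.
    theorem myMax_eq_or (a b : Nat) : myMax a b = a ∨ myMax a b = b := by
      unfold myMax
      split <;> simp

Competition submissions differ from this in scale rather than kind: real problems involve loops, heaps, or concurrent state, but the shape (code, specification, proof) is the same.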

The timeline below summarises what each edition covered. Click through to any year hub for full problem listings, solution archives, and format details.

VSComp 2010: The inaugural edition

VSComp began in 2010 as an experiment: could a distributed, online competition meaningfully evaluate formal verification skills? The organisers selected five problems that ranged from classic algorithm verification — proving the correctness of a sorting routine, for instance — to more nuanced challenges involving tree data structures and loop invariants. Participants could use any tool or language that produced machine-checkable proofs.

5 problems: Sorting, Data structures, Assertional verification

VSComp 2014: Broader scope, sharper criteria

After a multi-year pause, the 2014 edition expanded the scope significantly. New problem domains included concurrent data structures and refinement proofs, demanding that participants show not just functional correctness but also behavioural equivalence between specification and implementation. Evaluation criteria became more explicit, with reviewers assessing completeness and clarity alongside raw correctness.

5 problems: Concurrency, Refinement, Functional correctness

VSComp 2015: Practical verification at scale

The 2015 competition turned toward more realistic, systems-level challenges. Problems drew on practical scenarios — a text editor buffer, a DNS server component, and a queue whose amortized complexity had to be formally bounded. These problems tested whether verification tools and methodologies could handle the kind of messy, stateful code that working engineers write every day.

5 problems: Systems verification, Text processing, Amortized complexity
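
To make "formally bounded amortized complexity" concrete, the standard potential-method bookkeeping (a textbook presentation, not necessarily the formalisation the 2015 problem required) defines a potential function \Phi over data-structure states and sets the amortized cost of the i-th operation, with actual cost c_i, to

    \hat{c}_i = c_i + \Phi(D_i) - \Phi(D_{i-1}),
    \qquad
    \sum_{i=1}^{n} c_i
      = \sum_{i=1}^{n} \hat{c}_i - \bigl(\Phi(D_n) - \Phi(D_0)\bigr)
      \le \sum_{i=1}^{n} \hat{c}_i
    \quad \text{whenever } \Phi(D_n) \ge \Phi(D_0).

For the classic two-list queue, taking \Phi to be the length of the rear list makes every operation O(1) amortized, even though an occasional dequeue reverses the entire rear list in linear time. The competition twist is that this ledger must be carried out as a machine-checked proof rather than a paper argument.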

VSComp 2017: The most recent edition

The 2017 edition represented the latest iteration of VSComp, incorporating lessons from every previous round. Problem domains expanded to include hybrid systems and security-sensitive protocol components. Particular emphasis was placed on specification completeness — participants were expected to formalise not only the implementation but also the assumptions under which their proofs held. This edition attracted the most geographically diverse set of submissions.

5 problems: Hybrid systems, Security protocols, Specification completeness
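
"Specification completeness" has a precise reading here: the assumptions belong in the statement of the theorem, not in surrounding prose. A hypothetical Lean 4 fragment (our illustration, unrelated to any actual 2017 problem) shows the idea. The hypothesis h is the formalised assumption, and the claim is simply false without it, because Nat subtraction truncates at zero (take a = 0, b = 1):

    -- The assumption (h : b ≤ a) is part of the machine-checked statement
    -- itself, so a reviewer can see exactly when the result applies.
    theorem sub_add_cancel' (a b : Nat) (h : b ≤ a) : a - b + b = a := by
      omega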

What Happens Between Competition Years

The gaps between editions are not idle periods. Between 2010 and 2014, for example, the verification community made significant advances in SMT solver performance, new specification languages matured, and several teams published reflections on the first edition that shaped the second. Similarly, the interval between 2015 and 2017 saw the emergence of hybrid-systems verification as a tractable domain, which directly influenced the 2017 problem set.

In practical terms, this means that each edition of VSComp is not simply a repetition with new problems. The rules, evaluation criteria, and even the definition of what constitutes a "complete" solution evolve from year to year. Readers interested in these shifts can compare the rules documentation across editions.

How Verification Domains Evolved

The earliest problems focused on classic program correctness: sorting algorithms, search trees, and array manipulation with loop invariants. By 2014, concurrency and behavioural refinement entered the picture. The 2015 edition pushed toward systems-level realism — text buffers, network components, amortized-complexity proofs. And in 2017, the organisers reached into hybrid systems and security protocols, reflecting the broader research community's expanding ambitions for formal methods.

This progression is deliberate. Each edition builds on the previous one, testing whether the tools and techniques that succeeded in simpler domains can scale to harder, more heterogeneous challenges. For a complete list of every challenge across all years, see the problems catalogue.

Navigating Year Hubs

Each year hub page includes the full problem list, a summary of the submission format, links to available solutions, and any errata or clarifications that were published during or after the competition window.

Looking for Specific Problems?

If you already know which challenge you want, the problems index provides a filterable catalogue. You can also browse submitted solutions grouped by year and problem.

How the Competition Format Has Changed

The 2010 edition used a relatively informal submission process. Participants submitted their solutions within a specified window, and evaluation was carried out by the organising committee. By 2014, the process had become more structured: solutions were expected to include not only executable code and proofs, but also a written explanation of the verification strategy employed.

The 2015 and 2017 editions continued this trend toward greater rigour. Evaluation criteria were published in advance, and participants were encouraged to address all three pillars — correctness, completeness, and clarity — in their submissions. For the current evaluation framework, consult the rules page.

Regardless of the edition, the core principle remains the same: solutions must be formally verified, meaning they include machine-checkable proofs of the stated specification. Informal testing alone does not meet the competition's standard. This requirement distinguishes VSComp from conventional programming contests and places it squarely within the academic formal methods tradition.
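
The difference is visible even in miniature. In the hypothetical Lean 4 toy below (our own example, not competition material), the #eval line is a test that checks one input pair, while the theorem is a proof the kernel checks for every pair of naturals at once:

    -- Nat subtraction truncates at zero, so this computes |a - b|.
    def absDiff (a b : Nat) : Nat := (a - b) + (b - a)

    -- Testing: one concrete input, no guarantee beyond it.
    #eval absDiff 3 7 == absDiff 7 3   -- evaluates to true

    -- Proving: a universally quantified, machine-checked statement.
    theorem absDiff_comm (a b : Nat) : absDiff a b = absDiff b a := by
      unfold absDiff
      omega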

For academic papers and reports related to VSComp editions, visit the publications page. The solutions archive contains detailed year-by-year breakdowns, and the problems catalogue provides a cross-year view of every challenge issued.