VSComp 2017

The most recent edition of the Verified Software Competition placed particular emphasis on tool interoperability and proof portability. Five problems spanned concurrent verification, protocol correctness, data-structure invariants, numeric computation, and compiler verification. This page covers the domains addressed, the submission process, the full problem listing with links to the problems catalogue and solutions archive, available artifacts, and errata.


The Most Recent Edition

VSComp 2017 incorporated lessons from every previous round. The problem set was the most diverse to date, reaching into hybrid systems, security-sensitive protocols, distributed consensus, and compiler correctness — domains that would have been out of scope for a time-limited competition just a few years earlier. The organisers also placed new emphasis on two themes that cut across all five problems: tool interoperability (could a proof started in one framework be meaningfully compared with one produced in another?) and proof portability (how much of a solution was tied to a specific tool version or configuration?).

These themes reflected a broader conversation in the formal methods community. As verification tools proliferated, the question of whether results were reproducible and transferable became increasingly pressing. VSComp 2017 did not resolve this question — it is still an active area of research — but the problem selection was designed to surface the issue. A participant verifying the hybrid controller in KeY would face fundamentally different challenges than one using Isabelle/HOL, and the comparison of their approaches was itself a valuable outcome.

The competition maintained its tool-agnostic policy and three-pillar evaluation model (correctness, completeness, clarity). The rules page describes these criteria in detail. This edition attracted the most geographically diverse set of submissions of any VSComp round.

What the 2017 Edition Covered

The five problems can be grouped into four verification domains, each presenting distinct methodological challenges.

Hybrid systems. Problem P1-2017 asked participants to verify the safety of a controller that interleaves continuous physical dynamics with discrete logical decisions. This is the domain of cyber-physical systems — think aircraft autopilots, robotic controllers, or medical devices. Verification requires reasoning about differential equations alongside program logic, a combination that relatively few tools support natively.
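The core obligation here can be illustrated with a small executable sketch. The system below is hypothetical (it is not the P1-2017 controller): a bang-bang thermostat with two modes, whose continuous dynamics are discretised with an Euler step, and a safety invariant checked across every mode transition. The mode names, thresholds, and rates are all illustrative assumptions.

```python
# Hypothetical sketch (not the P1-2017 system): a bang-bang thermostat
# with modes HEAT and COOL and a safety invariant 18 <= temp <= 25.
# The continuous dynamics are discretised with a small Euler step, and
# the invariant is checked after each flow step and mode switch.

DT = 0.01  # Euler step size (hours); an illustrative discretisation

def dynamics(mode, temp):
    # Continuous dynamics per mode: heating raises temp, cooling lowers it.
    return 2.0 if mode == "HEAT" else -3.0

def switch(mode, temp):
    # Discrete controller: switch modes at guard thresholds.
    if mode == "HEAT" and temp >= 24.0:
        return "COOL"
    if mode == "COOL" and temp <= 19.0:
        return "HEAT"
    return mode

def invariant(temp):
    return 18.0 <= temp <= 25.0

def run(steps=100000, temp=20.0, mode="HEAT"):
    violations = 0
    for _ in range(steps):
        temp += DT * dynamics(mode, temp)   # continuous flow
        mode = switch(mode, temp)           # discrete transition
        if not invariant(temp):
            violations += 1
    return violations

print(run())  # 0: the invariant holds along this trajectory
```

Note the gap this sketch exposes: simulating one trajectory is not a proof. A verification tool must instead establish a continuous invariant symbolically, for all trajectories and all switching times, which is exactly what P1-2017 asked for.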

Security protocols. Problem P2-2017 concerned a simplified secure messaging protocol. Participants had to formalise confidentiality and authentication properties under an active adversary model. Protocol verification has its own mature tradition (symbolic models, computational models, automated protocol verifiers), and the inclusion of this problem tested whether general-purpose verification tools could compete with domain-specific ones.
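The symbolic style of reasoning can be sketched in a few lines. The fragment below is a hypothetical Dolev-Yao-flavoured analysis, not the P2-2017 protocol: messages are nested tuples, and the adversary closes its knowledge under pair decomposition and decryption with known keys. The keys and the intercepted messages are invented for illustration.

```python
# Hypothetical symbolic (Dolev-Yao style) sketch, not the P2-2017
# protocol. Messages are nested tuples; ("enc", k, m) is symmetric
# encryption of m under k, and ("pair", a, b) is concatenation. The
# adversary decomposes pairs and decrypts ciphertexts whose keys it
# already knows; we ask whether a secret becomes derivable.

def analyse(knowledge):
    """Close a set of observed messages under pair decomposition and
    decryption with known keys (analysis rules only, so the closure
    is finite and the loop terminates)."""
    know = set(knowledge)
    changed = True
    while changed:
        changed = False
        for msg in list(know):
            new = set()
            if isinstance(msg, tuple):
                if msg[0] == "pair":
                    new |= {msg[1], msg[2]}
                elif msg[0] == "enc" and msg[1] in know:
                    new.add(msg[2])
            if not new <= know:
                know |= new
                changed = True
    return know

# Intercepted traffic: the secret encrypted under k1, paired with k2,
# plus k1 itself encrypted under k2.
observed = {("pair", ("enc", "k1", "secret"), "k2"),
            ("enc", "k2", "k1")}
derived = analyse(observed)
print("secret" in derived)  # True: k2 yields k1, which yields the secret
```

Omitting the synthesis rules (pairing and encrypting) keeps the closure finite; a full protocol verifier must handle both directions, which is one reason domain-specific tools remain competitive here.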

Specification completeness and consensus. Problem P3-2017 was unusual: rather than verifying an implementation against a specification, participants were asked to complete a partial specification. Problem P4-2017 addressed distributed consensus — verifying a single round of a fault-tolerant protocol. Both problems required reasoning at a higher level of abstraction than typical implementation verification.
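For the consensus side, the flavour of the obligation can be shown by exhaustive checking on a tiny instance. The protocol below is hypothetical, not the P4-2017 one: three processes broadcast their proposals in one synchronous round and decide the minimum value received; a crashed process may deliver to an arbitrary subset. We check validity over all runs, and agreement in the crash-free runs (a single round of this naive scheme does not guarantee agreement under crashes, which is precisely what a fault-model-aware proof must confront).

```python
from itertools import product

# Hypothetical sketch, not the P4-2017 protocol: one synchronous round
# of min-value broadcast among N processes with at most one crash fault.
# A crashed process may deliver its message to any subset of receivers.

N = 3
VALUES = (0, 1)

def round_decisions(proposals, crashed, delivered_to):
    decisions = []
    for p in range(N):
        if p == crashed:
            decisions.append(None)  # a crashed process decides nothing
            continue
        received = {proposals[p]}   # a process always has its own value
        for q in range(N):
            if q != p and (q != crashed or p in delivered_to):
                received.add(proposals[q])
        decisions.append(min(received))
    return decisions

validity_ok = True
agreement_crash_free = True
for proposals in product(VALUES, repeat=N):
    # Crash-free run: everyone receives everything, so all decide alike.
    decisions = round_decisions(proposals, None, frozenset())
    agreement_crash_free &= len(set(decisions)) == 1
    # All single-crash runs, over every possible delivery subset.
    for crashed in range(N):
        for mask in product([False, True], repeat=N):
            delivered = {i for i, d in enumerate(mask) if d}
            live = [d for d in round_decisions(proposals, crashed, delivered)
                    if d is not None]
            validity_ok &= all(d in proposals for d in live)

print(validity_ok, agreement_crash_free)  # True True
```

Exhaustive enumeration works at N = 3 but not in general; the competition problem asked for the parametric proof that such model checking only approximates.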

Compiler correctness. Problem P5-2017 asked participants to prove that a compiler optimisation pass preserves source-language semantics. This is a well-studied area thanks to projects like CompCert, but formulating a competition-sized problem that is both tractable and meaningful is non-trivial. The problems catalogue provides detailed specifications for each challenge.
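The shape of the obligation can be miniaturised as follows. The language and pass below are invented for illustration, not the P5-2017 artifacts: a constant-folding pass over a three-constructor expression language, with semantic preservation phrased as agreement of source and target under every environment. Testing over a range of environments is a stand-in for the simulation proof the problem required.

```python
# Hypothetical sketch, not the P5-2017 language: constant folding over
# a tiny expression language. Terms are ("lit", n), ("var", name), or
# ("add", e1, e2). The preservation property is that the folded term
# evaluates identically to the original in every environment.

def eval_expr(e, env):
    tag = e[0]
    if tag == "lit":
        return e[1]
    if tag == "var":
        return env[e[1]]
    return eval_expr(e[1], env) + eval_expr(e[2], env)  # "add"

def fold(e):
    if e[0] != "add":
        return e
    a, b = fold(e[1]), fold(e[2])
    if a[0] == "lit" and b[0] == "lit":
        return ("lit", a[1] + b[1])  # fold a constant addition
    return ("add", a, b)

src = ("add", ("add", ("lit", 1), ("lit", 2)), ("var", "x"))
tgt = fold(src)
print(tgt)  # ('add', ('lit', 3), ('var', 'x'))
print(all(eval_expr(src, {"x": v}) == eval_expr(tgt, {"x": v})
          for v in range(-10, 11)))  # True
```

A full solution replaces the finite test loop with a simulation relation proved for all programs and environments; the gap between the two is exactly the "proof depth versus development effort" trade-off discussed under the submitted solutions.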

How Submissions Worked

The 2017 edition followed the established submission format. Participants submitted their solutions within a defined competition window, using any verification tool or proof assistant of their choice. Each submission was expected to include annotated source code or formal models, specifications, machine-checkable proof artifacts, and a written explanation of the verification strategy.

The strategy document was particularly important in 2017 because of the emphasis on interoperability. Reviewers were interested not only in whether a proof was valid, but also in the degree to which the approach was general. A solution that relied on a highly tool-specific automation tactic might score well on correctness but less well on the clarity axis, since it would be difficult for a reviewer using a different tool to assess the proof's robustness.

Partial solutions continued to be accepted and credited proportionally. The evaluation criteria are described on the rules page.

2017 Problem Set

Five problems were published for the 2017 edition. Each card below provides a summary; detailed descriptions and full specifications are available in the problems catalogue.

P1-2017: Hybrid Controller Safety

Verify safety properties of a hybrid controller that alternates between continuous dynamics and discrete mode switches. Participants must formalise the continuous invariant and show it is preserved across mode transitions.

Domain: Hybrid systems · Difficulty: hard

P2-2017: Secure Message Passing

Verify confidentiality and authentication properties of a simplified secure messaging protocol. The specification must account for an active adversary model within stated assumptions.

Domain: Security protocols · Difficulty: hard

P3-2017: Specification Completeness Check

Given a partial specification of a component, identify missing cases and complete the specification so that no unspecified behaviours remain. A meta-level challenge that asks participants to reason about what a specification omits.

Domain: Specification completeness · Difficulty: medium

P4-2017: Distributed Consensus Round

Verify a single round of a consensus protocol in a distributed setting with crash faults. Participants must prove agreement and validity properties under the stated fault model.

Domain: Concurrency · Difficulty: hard

P5-2017: Verified Compiler Pass

Prove that a simple compiler optimisation pass preserves the semantics of the source language. Requires defining an operational semantics and showing a simulation relation between source and target programs.

Domain: Functional correctness · Difficulty: hard

Submitted Solutions

Solutions for the 2017 problems are collected in the solutions archive. The breadth of domains covered in this edition produced the most heterogeneous set of solution approaches in VSComp history. The hybrid controller problem attracted entries from teams using differential dynamic logic and hybrid automata verifiers, while the distributed consensus problem saw solutions in TLA+, Ivy, and general-purpose proof assistants.

The compiler correctness problem (P5-2017) drew submissions that ranged from full simulation proofs in Coq or Isabelle to lighter approaches using SMT-backed verification. Comparing these approaches offers insight into the current trade-offs between proof depth and development effort. The solutions page provides further detail on submission formats and citation guidance.

Competition Artifacts

Artifacts from the 2017 edition include the problem statements, any supplementary materials distributed during the competition window, and collected solution bundles. Where original documents remain available, they are listed below.

Problem Statements & Supplementary Materials

The 2017 problem statements were distributed through the competition's channels. Each statement specified the system or algorithm to be verified, the properties required, and any constraints on the verification approach. For several problems, starter models or reference specifications were provided to reduce initial setup time.

Some historical materials are provided as-is; where unavailable, the problem summaries on this page and in the problems catalogue summarise the format and requirements.

Solution Bundles

Selected solution submissions are available through the solutions archive. Bundles typically include formal models or annotated source code, proof artifacts, and the participant's strategy document.

Errata and Clarifications

The 2017 edition generated a number of clarification requests, reflecting the novelty and breadth of the problem set. The hybrid controller problem (P1-2017) prompted questions about the continuous dynamics model: whether participants should assume linear or nonlinear dynamics, and what numerical precision was required. The organisers clarified that participants should state their assumptions explicitly and that the focus was on the verification methodology rather than numerical simulation fidelity.

The secure message passing problem (P2-2017) raised questions about the adversary model. Participants asked whether the adversary was purely symbolic (Dolev-Yao style) or computational. The organisers confirmed that a symbolic adversary model was sufficient, though computational approaches were also welcome if the participant preferred that level of detail.

For the specification completeness problem (P3-2017), several participants sought clarification on what it meant for a specification to be "complete." The organisers provided a working definition: every possible input must be covered by at least one case in the specification, and the specification must be consistent (no case should lead to contradictory requirements). This clarification proved useful and was incorporated into subsequent discussions about competition format. See the 2010, 2014, and 2015 hubs for how earlier editions handled similar issues.
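The organisers' working definition lends itself to a direct executable reading over a finite input domain. The checker below is illustrative (the cases and domain are invented, not from P3-2017): it reports inputs covered by no case and inputs where overlapping cases demand contradictory outputs.

```python
# Executable reading of the working definition of completeness, over a
# finite domain: every input is covered by at least one case, and cases
# that overlap must agree on the required output. The example spec is
# illustrative, not taken from P3-2017.

def check_spec(cases, domain):
    """cases: list of (guard, required_output) pairs.
    Returns (uncovered inputs, inputs with contradictory requirements)."""
    uncovered, contradictory = [], []
    for x in domain:
        outputs = {out for guard, out in cases if guard(x)}
        if not outputs:
            uncovered.append(x)
        elif len(outputs) > 1:
            contradictory.append(x)
    return uncovered, contradictory

# A partial sign specification: the x == 0 case is missing.
cases = [(lambda x: x > 0, "positive"),
         (lambda x: x < 0, "negative")]
print(check_spec(cases, range(-2, 3)))  # ([0], []) -- 0 is unspecified

# Completing the specification restores coverage without contradiction.
cases.append((lambda x: x == 0, "zero"))
print(check_spec(cases, range(-2, 3)))  # ([], [])
```

For infinite or structured input domains such enumeration is unavailable, which is why the competition problem treated completeness as a specification-level proof obligation rather than a testing exercise.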

Tool Interoperability and Proof Portability

One of the distinguishing themes of the 2017 edition was its attention to interoperability. The organisers observed that previous editions had produced excellent solutions that were nevertheless tightly coupled to a specific tool and version. A proof checked by Dafny 2.x might not check under Dafny 3.x; a Coq development that compiled under one set of library versions might fail under another. These are not hypothetical concerns — they arise in practice whenever verified software is maintained over time.

The 2017 problem set did not impose any interoperability requirements on participants, but the evaluation committee noted the degree to which solutions were portable as a qualitative observation in their assessment. This information is reflected in some of the solution entries in the solutions archive, where reviewers commented on the generality and reproducibility of approaches.

The question of proof portability remains open. It is one of several active research directions that the competition has helped to surface, alongside specification completeness, hybrid systems verification, and the formalisation of security properties. For broader context, see the history page and the publications page.

At a Glance

Year: 2017
Problems: 5
Domains: Hybrid systems, security protocols, specification completeness, concurrency, compiler correctness
Focus: Tool interoperability and proof portability