The DIFTS (Design and Implementation of Formal Tools and Systems) workshop emphasizes insightful experiences in the design and implementation of formal tools and systems. The first DIFTS workshop, DIFTS11, was held in 2011. The workshop provides a forum for sharing challenges and original solutions with groundbreaking results.
The design and implementation of tools for formal analysis often require non-trivial engineering decisions. Many challenges arise that can only be met with ingenious implementation techniques, and these techniques play a crucial role in making an idea work in practice. The workshop provides an opportunity to discuss the engineering aspects and design decisions required to put formal tools and systems to practical use. DIFTS takes a broad view of the formal tools/systems area and solicits contributions from a wide range of domains.
This workshop encourages and appreciates system development activities and facilitates transparency in experimentation. It also serves as a platform to discuss open problems and future challenges in practicing formal methods. DIFTS is co-located with FMCAD13. Call for Papers: CFP (txt). Call for Participation: CFPart (txt).
Papers in the following two categories are solicited: (a) system category (10 pages, double column, 11pt), and (b) tool category (8 pages, double column, 11pt).
In the system category, we invite papers that present original ideas accompanied by novel integration techniques, adequate design/implementation details, the important design choices made and explored, and good experimental results.
In the tool category, we invite papers that focus primarily on the engineering aspects of a known/popular algorithm, with significant emphasis on design/implementation details and the various design choices made to advance the current state of the art.
Papers should be submitted electronically in PDF format via EasyChair (see the Submission Page).
All accepted contributions will be included in informal proceedings. High-quality submissions will be considered for a special issue of journals such as FMSD (Formal Methods in System Design) or IEEE TC (Transactions on Computers).
Niklas Een and Alan Mishchenko, A Fast Reparameterization Procedure
Jiang Long, Robert Brayton and Michael Case, LEC: Learning driven data-path equivalence checking
Marco Palena, Gianpiero Cabodi and Alan Mishchenko, Trading-off Incrementality and Dynamic Restart of Multiple Solvers in IC3
Mathias Preiner, Aina Niemetz and Armin Biere, Lemmas on Demand for Lambdas
Sonali Dutta, Moshe Y. Vardi and Deian Tabakov, CHIMP: a Tool for Assertion-Based Dynamic Verification of SystemC Models
In-Ho Moon and Kevin Harer, Abstraction-Based Livelock/Deadlock Checking for Hardware Verification
Breakfast and Registration
1st Invited Talk by Masahiro Fujita, The University of Tokyo, Japan.
There has been intensive research on debugging hardware as well as software. Some approaches are ad hoc and based on simple heuristics, while others utilize formal methods and are mathematically modeled. In this talk, we first review various proposals on debugging from a historical viewpoint, and then summarize the state of the art in terms of both diagnosis and automatic correction of designs. In particular, we show various approaches with SAT-based formulations of the diagnosis and correction problems. We also discuss them in relation to manufacturing test techniques: if the design errors are within pre-determined types and/or areas, there can be very efficient formal verification, diagnosis, and correction methods that use small numbers of test vectors. In the last part of the talk, future perspectives including post-silicon issues are discussed.
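For flavor, the following toy sketch (my own illustration, not a method presented in the talk) shows the enumeration view behind such SAT-based diagnosis/correction formulations: a suspect gate is replaced by each candidate function in turn, and a candidate that matches the specification on all test vectors is reported. A real flow would encode the choice with selector variables and hand the problem to a SAT solver; the circuit and gate set here are invented.

```python
# Illustrative sketch of gate-level diagnosis/correction by candidate
# enumeration (an assumed toy formulation, not the talk's actual method).
from itertools import product

CANDIDATES = {
    'AND':  lambda a, b: a & b,
    'OR':   lambda a, b: a | b,
    'XOR':  lambda a, b: a ^ b,
    'NAND': lambda a, b: 1 - (a & b),
}

def spec(a, b, c):            # golden model: (a AND b) OR c
    return (a & b) | c

def impl(gate, a, b, c):      # buggy netlist with the first gate unknown
    return gate(a, b) | c

tests = list(product((0, 1), repeat=3))   # exhaustive test vectors
for name, g in CANDIDATES.items():
    if all(impl(g, *t) == spec(*t) for t in tests):
        print('diagnosis/correction: suspect gate should be', name)
```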
Mathias Preiner, Aina Niemetz and Armin Biere
+ Lemmas on Demand for Lambdas
We generalize the lemmas on demand decision procedure for array logic as implemented in Boolector to handle non-recursive and non-extensional lambda terms. We focus on the implementation aspects of our new approach and discuss the involved algorithms and optimizations in more detail. Further, we show how arrays, array operations and SMT-LIB v2 macros are represented as lambda terms and lazily handled with lemmas on demand. We provide experimental results that demonstrate the effect of native lambda term support within an SMT solver and give an outlook on future work.
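To make the representation concrete, here is a minimal Python sketch (an assumed simplification for illustration, not Boelector's code) of arrays as non-recursive lambda terms, with a write operation expressed as an if-then-else lambda, as the abstract describes.

```python
# Arrays as non-recursive lambda terms (assumed illustrative semantics).

def const_array(v):
    # λj. v  -- array mapping every index to v
    return lambda j: v

def write(a, i, e):
    # write(a, i, e) = λj. ite(j = i, e, a(j))
    return lambda j: e if j == i else a(j)

def read(a, i):
    return a(i)

a = write(write(const_array(0), 3, 7), 5, 9)
assert read(a, 3) == 7 and read(a, 5) == 9 and read(a, 0) == 0
```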
Marco Palena, Gianpiero Cabodi and Alan Mishchenko
+ Trading-off Incrementality and Dynamic Restart of Multiple Solvers in IC3
Paper (PDF) Slides (PDF)
This paper addresses the problem of SAT solver performance in IC3, one of the major recent breakthroughs in Model Checking algorithms. Unlike other Bounded and Unbounded Model Checking algorithms, IC3 is characterized by numerous SAT solver queries on small sets of problem clauses. Besides algorithmic issues, this scenario poses serious performance challenges for SAT solver configuration and tuning. We discuss and compare multiple vs. single solver implementations of IC3 with an incremental solver approach. As is well known in other application fields, finding a good compromise between learning and overhead is key to performance. We address solver cleanup and restart heuristics, as well as clause database minimality, based on on-demand clause loading: transition relation clauses are loaded into the solver based on structural dependency and phase analysis. We also compare different solutions for multiple specialized solvers, and we provide an experimental evaluation on benchmarks from the HWMCC suite. Though not finding a clear winner, the work outlines several potential improvements for a portfolio-based verification tool with multiple engines and tunings.
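The on-demand clause loading idea might be sketched as follows; this is a hedged illustration with an invented data layout (clauses indexed by the variable they define), not the paper's implementation, and it omits the phase analysis and restart heuristics.

```python
# Hedged sketch of on-demand clause loading: load only the cone of influence
# of the variables mentioned in a query, so each specialized solver sees a
# minimal clause database. Data layout and names are assumptions.

def clauses_for_query(defining_clauses, query_vars):
    """defining_clauses: dict var -> list of clauses (lists of signed ints)
       query_vars: variables appearing in the SAT query."""
    loaded, selected, stack = set(), [], list(query_vars)
    while stack:
        v = stack.pop()
        if v in loaded:
            continue
        loaded.add(v)
        for clause in defining_clauses.get(v, []):
            selected.append(clause)
            stack.extend(abs(lit) for lit in clause)  # follow structural deps
    return selected

# Example: 3 <-> (1 AND 2) in CNF; querying {3} pulls in its whole cone.
tr = {3: [[-3, 1], [-3, 2], [3, -1, -2]]}
print(clauses_for_query(tr, {3}))
```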
Jiang Long, Robert Brayton and Michael Case
+ LEC: Learning driven data-path equivalence checking
In the LEC system, we employ a learning-driven approach for solving combinational data-path equivalence checking problems. The data-path logic is specified using Boolean and word-level operators in VHDL/Verilog. The benchmarks come from C-to-RTL equivalence checking problems found in an industrial setting. The difficulty of such problems originates from the algebraic transformations conducted on the data-path logic for highly optimized implementations. Without this high-level knowledge, existing techniques in bit-level equivalence checking and QF_BV SMT solvers are not able to solve these problems effectively. It is crucial to reverse engineer such algebraic transformations to bring more similarity to the two sides of the logic. Extracting algebraic logic embedded in a cloud of Boolean and word-level arithmetic operators is difficult in general. To solve such problems, LEC incorporates a compositional proof methodology and conducts analysis beyond the bit and word level by incorporating algebraic reasoning through polynomial reconstruction. LEC uses an open architecture that allows new solver techniques to be integrated progressively. It builds sub-model trees, using recursive transformations to simplify and expose the actual bottleneck arithmetic logic in an equivalence proof. In addition to a set of rewriting rules that normalize the arithmetic operators, LEC supports conditional rewriting, where the correctness of a rewrite depends on the existence of invariants in the design itself. LEC utilizes functional and structural information of the data-path logic to recognize and reconstruct algebraic transformations. We use a case study to illustrate the steps that were used to extract the arithmetic embedded in a data-path design as a linear sum of signed integers, and show all the procedures that compositionally led to a successful equivalence proof.
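As a rough illustration of what polynomial reconstruction buys (an invented toy, not LEC's code), the sketch below normalizes two differently implemented data-path expressions to the same polynomial, which is how algebraically transformed implementations can be recognized as equivalent.

```python
# Toy polynomial normalization of word-level expression trees (assumed,
# simplified). Real LEC works on VHDL/Verilog netlists.

def poly(expr):
    """Normalize ('add', a, b) / ('mul', const, a) / ('var', name) /
       ('const', c) trees into a dict: variable -> coefficient ('' = const)."""
    op = expr[0]
    if op == 'var':
        return {expr[1]: 1}
    if op == 'const':
        return {'': expr[1]}
    if op == 'add':
        r = dict(poly(expr[1]))
        for k, v in poly(expr[2]).items():
            r[k] = r.get(k, 0) + v
        return {k: v for k, v in r.items() if v}
    if op == 'mul':                       # constant * subexpression
        return {k: expr[1] * v for k, v in poly(expr[2]).items()}
    raise ValueError(op)

# (4*x) + x is how synthesis might implement 5*x via a shift-and-add:
spec = ('mul', 5, ('var', 'x'))
impl = ('add', ('mul', 4, ('var', 'x')), ('var', 'x'))
assert poly(spec) == poly(impl)           # equal as polynomials
```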
2nd Invited Talk by Rance Cleaveland, Reactive Systems Inc., USA
+ Approximate Formal Verification using Model-based Testing
In model-based testing, (semi-)formal models of systems are used to drive the derivation of test cases to be applied to the system under test (SUT). The technology has long been part of traditional hardware-design workflows, and it is also beginning to find application in embedded-software development processes. In automotive and land-vehicle control-system design in particular, models in languages such as MATLAB(r) / Simulink(r) / Stateflow(r) are used to drive the testing of the software that controls vehicle behavior, with tools like Reactis(r), developed by a team including the speaker, providing automated test-case generation support for this endeavor. This talk will discuss how test-case generation capabilities may also be used to help verify that models meet formal specifications of their behavior. The method we advocate, Instrumentation-Based Verification (IBV), involves the formalization of behavior specifications as models that are used to instrument the model to be verified, and the use of coverage testing of the instrumented model to search for specification violations. The presentation will discuss the foundations of IBV, the test-generation approach and other features in Reactis that are used to support IBV, and the results of several case studies involving the use of these methods.
Niklas Een and Alan Mishchenko
+ A Fast Reparameterization Procedure
Reparameterization, also known as range-preserving logic synthesis, replaces a logic cone by another logic cone that has fewer inputs while producing the same output combinations as the original cone. A smaller circuit is expected to lead to a shorter verification time. This paper describes an approach to reparameterization that is faster, but not as general, as previous work. The new procedure is particularly well suited for circuits derived by localization abstraction.
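As a toy illustration of the definition above (an assumed example, not the paper's procedure), the sketch below checks by enumeration that a replacement cone with a single fresh parameter input produces the same set of output combinations as a three-input cone.

```python
# Range-preserving replacement of a logic cone, checked by enumeration
# (illustrative only; cones and names are invented).
from itertools import product

def cone_range(f, n_inputs):
    return {f(*bits) for bits in product((0, 1), repeat=n_inputs)}

# Original cone: 3 inputs, 2 outputs, but it only ranges over {(0,0),(1,1)}.
def original(a, b, c):
    x = a & b & c
    return (x, x)

# Replacement cone: 1 fresh parameter input, same output range.
def reparam(p):
    return (p, p)

assert cone_range(original, 3) == cone_range(reparam, 1)
```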
In-Ho Moon and Kevin Harer
+ Abstraction-Based Livelock/Deadlock Checking for Hardware Verification
Livelock/deadlock is a well-known and important problem in both hardware and software systems. In hardware verification, a livelock is a situation where the state of a design changes within only a proper subset of the states reachable from the initial states of the design. Deadlock is the special case in which there is only one state in a livelock. However, livelock/deadlock checking has never been actively used in hardware verification in practice, mainly due to the complexity of the computation, which involves finding strongly connected components. This paper presents a practical abstraction-based livelock/deadlock checking algorithm for hardware verification. The proposed livelock/deadlock checking works on FSMs rather than the whole design. For each FSM, we build an abstract machine of manageable size from the cone of influence of the FSM. Once a livelock is found on an abstract machine, the livelock is justified on the concrete machine with trace concretization. Experimental results show that the proposed abstraction-based livelock checking finds real livelock errors in industrial designs.
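A hedged sketch of the closed-SCC view of livelock/deadlock detection follows (an illustrative reading of the abstract, not the tool's algorithm); it uses the networkx package for the SCC computation, and the abstract machine and state names are invented.

```python
# Livelock = a closed SCC (no transition leaving it) that is a proper subset
# of the reachable states; deadlock = a closed SCC with a single state.
import networkx as nx

def find_traps(edges, states):
    g = nx.DiGraph()
    g.add_nodes_from(states)
    g.add_edges_from(edges)
    traps = []
    for scc in nx.strongly_connected_components(g):
        # A closed SCC: once entered, the design can only cycle within it.
        closed = all(t in scc for s in scc for t in g.successors(s))
        if closed and len(scc) < len(states):
            traps.append(('deadlock' if len(scc) == 1 else 'livelock', scc))
    return traps

# Abstract machine with 4 states; once in {2, 3} the design never leaves.
print(find_traps([(0, 1), (1, 0), (1, 2), (2, 3), (3, 2)], {0, 1, 2, 3}))
# -> [('livelock', {2, 3})]
```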
Sonali Dutta, Moshe Y. Vardi and Deian Tabakov
+ CHIMP: a Tool for Assertion-Based Dynamic Verification of SystemC Models
CHIMP is a tool for assertion-based dynamic verification of SystemC models. Its features include automatic generation of monitors from temporal assertions, automatic instrumentation of the model under verification (MUV), and three-way communication among the MUV, the generated monitors, and the SystemC simulation kernel during the monitored execution of the instrumented MUV. Empirical results show that CHIMP imposes minimal runtime overhead on the monitored execution of the MUV. A newly added path in CHIMP results in a significant (over 75%) reduction of average monitor generation and compilation time. The average size of the monitors is reduced by over 60%, without increasing runtime overhead.
3rd Invited Talk by Dhiraj Goswami, Synopsys, USA
+ Stimulus generation, enhancement and debug in constraint random verification
Verification occupies 60-80% of an IC design team's resources and effort. Functional verification, positioned at the foremost stage of the IC design flow, determines how quickly customers can find bugs and thereby their time-to-results (TTR) and cost-of-results (COR). Consequently, functional verification has been a focus of the EDA industry for the last several decades.
Constrained random simulation methodologies have become increasingly popular for functional verification of complex designs, as an alternative to directed-test based simulation. In a constrained random simulation methodology, random vectors are generated to satisfy certain operating constraints of the design. These constraints are usually specified as part of a testbench program (using industry-standard testbench languages, like SystemVerilog from Synopsys, e from Cadence, OpenVera, etc.). The testbench automation tool (TBA) is then expected to generate random solutions for the specified random variables, such that all the specified constraints over these random variables are satisfied. These random solutions are then used to generate valid random stimulus for the Design Under Verification (DUV). This stimulus is simulated using industry-standard simulation tools, like VCS from Synopsys, NC-Verilog from Cadence, etc. The results of simulation are then typically examined within the testbench program to monitor functional coverage, which gives a measure of confidence in the verification quality and completeness.
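As a toy illustration of this flow (invented field and constraint names, and a rejection sampler standing in for a real constraint solver), the sketch below generates random stimulus satisfying a simple operating constraint.

```python
# Hedged sketch of constrained-random stimulus generation. Real TBA tools use
# constraint solvers, not rejection sampling; this only shows the flow.
import random

def randomize(constraint, n_bits=8, max_tries=10_000):
    """Draw random field values until the constraint holds."""
    for _ in range(max_tries):
        v = {'addr': random.randrange(2**n_bits),
             'burst_len': random.randrange(1, 17)}
        if constraint(v):
            return v
    raise RuntimeError('no solution found')

# Operating constraint: aligned address, and short bursts near the page end.
legal = lambda v: v['addr'] % 4 == 0 and (v['addr'] < 240 or v['burst_len'] <= 4)
stimulus = [randomize(legal) for _ in range(5)]   # vectors driven into the DUV
print(stimulus)
```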
In this talk we will review the challenges of stimulus and configuration generation using the constraint random verification methodology. We will also explore why state-of-the-art debug solutions are important for handling complexity and improving the quality of stimulus.