July 19, 2020, Los Angeles, California, USA
3rd Workshop on Formal Methods for ML-Enabled Autonomous Systems
Affiliated with CAV 2020
Abstracts (optional): April 19 (deadline extended)
Full papers: April 26 (deadline extended)
Author notification: June 4
Workshop: July 19
🎥 YouTube stream: https://youtu.be/65O8mN439Jc
*Note that the invited talk by Chih-Hong Cheng will not be streamed
We would like to thank everyone who participated in the workshop. You can watch the recorded sessions on YouTube, through the link listed above. The slides from Chih-Hong Cheng's invited talk are available here.
The workshop's informal proceedings are now available here.
We hope to see you at next year's workshop!
Scope and Topics of Interest
In recent years, deep learning has emerged as a highly effective way of creating real-world software, and is revolutionizing how complex systems are designed across the board. In particular, this new approach is being applied to autonomous systems (e.g., autonomous cars and aircraft), achieving exciting results that are beyond the reach of manually created software. However, these significant changes have created new challenges regarding explainability, predictability, and correctness: Can I explain why my drone turned right at that angle? Can I predict what it will do next? Can I know for sure that my autonomous car will never accelerate towards a pedestrian? These questions have far-reaching consequences for the safety, accountability, and public adoption of ML-enabled autonomous systems. One promising avenue for tackling these difficulties is the development of formal methods capable of analyzing and verifying these new kinds of systems.
The goal of this workshop is to facilitate discussion of how formal methods can be used to increase the predictability, explainability, and accountability of ML-enabled autonomous systems. The workshop welcomes results ranging from concept formulation (connecting these concepts with existing research topics in verification, logic, and game theory), through algorithms, methods, and tools for analyzing ML-enabled systems, to concrete case studies and examples.
The workshop will also include a special session and discussion on the VNNLIB initiative, aimed at creating a standard format and a benchmark library for neural network verification.
The topics covered by the workshop include, but are not limited to, the following:
Formal specifications for systems with ML components
SAT-based and SMT-based methods for analyzing systems with deep neural network components
Mixed-integer Linear Programming and optimization-based methods for the verification of systems with deep neural network components
Testing approaches for ML components
Statistical approaches to the verification of systems with ML components
Approaches for enhancing the explainability of ML-based systems
Techniques for analyzing hybrid systems with ML components
Verification of quantized and low-precision neural networks
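The verification topics above share a common flavor: reasoning formally about a network's input-output behavior. As a toy illustration (our own sketch, not material from the workshop), the following interval bound propagation analysis over-approximates the output range of a tiny ReLU network over an input interval, the kind of soundness guarantee that SMT- and optimization-based tools make precise:

```python
# Illustrative sketch: interval bound propagation, a simple formal analysis
# that over-approximates the output range of a small ReLU network.

def affine_bounds(lo, hi, weight, bias):
    """Propagate an interval through y = weight * x + bias."""
    a, b = weight * lo + bias, weight * hi + bias
    return (min(a, b), max(a, b))

def relu_bounds(lo, hi):
    """Propagate an interval through relu(x) = max(x, 0)."""
    return (max(lo, 0.0), max(hi, 0.0))

# Tiny (hypothetical) network: y = relu(2x + 1) - 3, over inputs x in [0, 1].
lo, hi = affine_bounds(0.0, 1.0, 2.0, 1.0)  # hidden pre-activation in [1, 3]
lo, hi = relu_bounds(lo, hi)                # ReLU leaves [1, 3] unchanged
lo, hi = affine_bounds(lo, hi, 1.0, -3.0)   # output in [-2, 0]

# Since the computed upper bound is 0, the analysis proves y <= 0
# for every input in [0, 1].
print(lo, hi)  # -2.0 0.0
```

Interval propagation is coarse but cheap; the SAT/SMT- and MILP-based approaches in the list trade this speed for exact (complete) answers.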
Paper Submission and Proceedings
Three categories of submissions are invited:
Original papers: contain original research and sufficient detail to assess the merits and relevance of the submission. For papers reporting experimental results, authors are strongly encouraged to make their data available.
Presentation-only papers: describe work recently published or submitted. We see this as a way to provide additional exposure for important developments that workshop attendees may be unaware of.
Extended abstracts: given the informal style of the workshop, we strongly encourage the submission of preliminary reports of work in progress. These reports may range in length from very short to full papers, and will be judged based on the expected level of interest for the community.