
FoMLAS 2020
July 19, 2020, Los Angeles, California, USA

3rd Workshop on Formal Methods for ML-Enabled Autonomous Systems

Affiliated with CAV 2020

Important Dates

  • Abstracts (optional): April 19 (deadline extended)

  • Full papers: April 26 (deadline extended)

  • Author notification: June 4

  • Workshop: July 19

🎥 YouTube stream:

*Note that the invited talk by Chih-Hong Cheng will not be streamed.

We would like to thank everyone who participated in the workshop. You can watch the recorded sessions on YouTube through the link listed above. The slides from Chih-Hong Cheng's invited talk are available here.

The workshop's informal proceedings are now available here.

We hope to see you at next year's workshop!

Scope and Topics of Interest

In recent years, deep learning has emerged as a highly effective approach for creating real-world software, and it is revolutionizing how complex systems are designed across the board. In particular, this new approach is being applied to autonomous systems (e.g., autonomous cars and aircraft), achieving exciting results that are beyond the reach of manually created software. However, these significant changes have created new challenges for explainability, predictability, and correctness: Can I explain why my drone turned right at that angle? Can I predict what it will do next? Can I know for sure that my autonomous car will never accelerate towards a pedestrian? These questions have far-reaching consequences for the safety, accountability, and public adoption of ML-enabled autonomous systems. One promising avenue for tackling these difficulties is to develop formal methods capable of analyzing and verifying these new kinds of systems.

The goal of this workshop is to facilitate discussion of how formal methods can be used to increase the predictability, explainability, and accountability of ML-enabled autonomous systems. The workshop welcomes results ranging from concept formulation (connecting these concepts with existing research topics in verification, logic, and game theory), through algorithms, methods, and tools for analyzing ML-enabled systems, to concrete case studies and examples.


The workshop will also include a special session and discussion on the VNN-LIB initiative, which aims to create a standard format and a benchmark library for neural network verification.

The topics covered by the workshop include, but are not limited to, the following:

  • Formal specifications for systems with ML components

  • SAT-based and SMT-based methods for analyzing systems with deep neural network components

  • Mixed-integer Linear Programming and optimization-based methods for the verification of systems with deep neural network components

  • Testing approaches for ML components

  • Statistical approaches to the verification of systems with ML components

  • Approaches for enhancing the explainability of ML-based systems

  • Techniques for analyzing hybrid systems with ML components

  • Verification of quantized and low-precision neural networks

Paper Submission and Proceedings

Three categories of submissions are invited:

  • Original papers: contain original research and sufficient detail to assess the merits and relevance of the submission. For papers reporting experimental results, authors are strongly encouraged to make their data available.

  • Presentation-only papers: describe work recently published or submitted. We see this as a way to provide additional access to important developments that the workshop attendees may be unaware of.

  • Extended abstracts: given the informal style of the workshop, we strongly encourage the submission of preliminary reports of work in progress. These reports may range in length from very short to full papers, and will be judged based on the expected level of interest for the community.

All accepted papers will be posted online as part of informal proceedings on the day of the workshop.

Papers in all categories will be peer-reviewed. Papers should be submitted as a single-column, standard-conforming PDF using the LNCS style. The suggested page limit is 10 pages, not counting references. Technical details may be included in an appendix, to be read at the reviewers' discretion (the appendix also does not count towards the page limit). The review process will be single-blind.

To submit a paper, use EasyChair.

Invited Talks

The workshop will feature an invited talk by Chih-Hong Cheng (Denso Automotive Deutschland GmbH).

Title: Research Challenges and Opportunities towards Safe Autonomous Driving

Abstract: The race towards autonomous driving has lasted for years. I argue that the chance for the contestants to close the safety gap lies in scientifically rooted engineering methods that can efficiently improve the quality of AI. In particular, I consider a structural approach (via GSN) to arguing the quality of neural networks (NNs) with NN-specific dependability metrics. A systematic analysis considering the quality of data collection, training, testing, and operation allows us to identify many unsolved research questions: (1) How can we create a weaker form of data completeness while admitting the combinatorial explosion of scenarios? (2) How can we overcome the specification problem for formal verification, where inputs such as images can be hard to characterize? (3) How can we justify that a decision made by the network is supported by prior similarities in the training data? Some of my prior work will be mentioned concisely, together with a statement of its limitations.

Bio: Chih-Hong Cheng is currently a technical manager at DENSO, researching safety technologies for autonomous driving. Before joining DENSO, he held research positions at fortiss and ABB Corporate Research. He received his doctoral degree in computer science from the Technical University of Munich.

The workshop will also feature a special VNN-LIB session, given by Armando Tacchella (University of Genoa). VNN-LIB is a new standard being developed for neural network verification benchmarks, inspired by similar standards in related communities (e.g., SMT-LIB).
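To give a flavor of what such a standard might define, the following is a hypothetical sketch of a neural network verification query written in an SMT-LIB-inspired syntax. This is not the official VNN-LIB format (which is still under development at the time of the workshop); the variable names X_i and Y_j for network inputs and outputs, and the box-constraint robustness query, are illustrative assumptions only.

```smtlib
; Hypothetical robustness query in an SMT-LIB-inspired syntax
; (illustrative only -- not the official VNN-LIB format).
; X_0, X_1 denote network inputs; Y_0, Y_1 denote network outputs.
(declare-const X_0 Real)
(declare-const X_1 Real)
(declare-const Y_0 Real)
(declare-const Y_1 Real)

; Constrain the inputs to a small box around a reference point.
(assert (>= X_0 0.4))
(assert (<= X_0 0.6))
(assert (>= X_1 0.1))
(assert (<= X_1 0.3))

; Ask whether the undesired output ordering is reachable:
; if the query is unsatisfiable, Y_0 > Y_1 holds on the whole box,
; i.e., the network's top prediction is robust within it.
(assert (<= Y_0 Y_1))
```

A shared textual format of this kind, paired with a common network exchange format, would let different verification tools run on the same benchmark library and be compared directly, much as SMT-LIB did for SMT solvers.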
