[TYPES/announce] RV 2021 Call for Papers - RV 2021, the 21st International Conference on Runtime Verification

Dana Fisman fisman at seas.upenn.edu
Mon Mar 8 03:34:44 EST 2021


CFP for RV'21 (21ST INTERNATIONAL CONFERENCE ON RUNTIME VERIFICATION)



*Important Dates*

   - Abstract submission: May 13, 2021
   - Paper submission: May 20, 2021
   - Author notification: July 12, 2021
   - Camera-ready version: Aug 2, 2021
   - Conference dates: Oct 11-14, 2021 (Location: Los Angeles or online,
   depending on the COVID-19 situation)

Website: https://www.cs.bgu.ac.il/~rv21/

*Scope*

Runtime verification is concerned with the monitoring and analysis of the
runtime behaviour of software and hardware systems. Runtime verification
techniques are crucial for system correctness, reliability, and robustness;
they provide an additional level of rigor and effectiveness compared to
conventional testing and are generally more practical than exhaustive
formal verification. Runtime verification can be used prior to deployment,
for testing, verification, and debugging purposes, and after deployment for
ensuring reliability, safety, and security and for providing fault
containment and recovery as well as online system repair.
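
As a minimal illustration (not part of the official call), the sketch below
conveys the flavour of a runtime monitor: a small Python program that checks
the hypothetical safety property "no resource is used after it has been
released" over a stream of events emitted by an instrumented system. All names
and the example trace are illustrative assumptions, not taken from any
particular RV tool.

    # Illustrative sketch only: a tiny runtime monitor for the safety
    # property "no 'use' event on a resource after its 'release' event".
    # Event names and the example trace are hypothetical.
    def monitor(trace):
        released = set()  # resources that have already been released
        for step, (event, res) in enumerate(trace):
            if event == "release":
                released.add(res)
            elif event == "use" and res in released:
                return f"violation at step {step}: use of {res} after release"
        return "no violation observed"

    # Example trace of (event, resource) pairs, as an instrumented program
    # might emit them at runtime.
    trace = [("acquire", "r1"), ("use", "r1"), ("release", "r1"), ("use", "r1")]
    print(monitor(trace))  # -> violation at step 3: use of r1 after release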

Topics of interest to the conference include, but are not limited to:

   - specification languages for monitoring
   - monitor construction techniques
   - program instrumentation
   - logging, recording, and replay
   - combination of static and dynamic analysis
   - specification mining and machine learning over runtime traces
   - monitoring techniques for concurrent and distributed systems
   - runtime checking of privacy and security policies
   - metrics and statistical information gathering
   - program/system execution visualization
   - fault localization, containment, recovery and repair
   - dynamic type checking and assurance cases
   - runtime verification for autonomy and runtime assurance

Application areas of runtime verification include cyber-physical systems,
safety/mission critical systems, enterprise and systems software, cloud
systems, autonomous and reactive control systems, health management and
diagnosis systems, and system security and privacy, among others.

*Submissions*

All papers and tutorials will appear in the conference proceedings in an
LNCS volume. Submitted papers and tutorials must use the LNCS/Springer
style detailed here:

http://www.springer.de/comp/lncs/authors.html

Papers must present original work and must not be under submission for
publication elsewhere. Papers must be written in English and submitted
electronically (in PDF format) using the EasyChair submission page here:
(in PDF format) using the EasyChair submission page here:

https://easychair.org/conferences/?conf=rv21

The page limits stated below include all text and figures but exclude
references. Additional details omitted due to space limitations may be
included in a clearly marked appendix, which will be reviewed at the
discretion of the reviewers but will not be included in the proceedings.

At least one author of each accepted paper and tutorial must register and
attend RV 2021 to present.

*Papers*

There are four categories of papers which can be submitted: regular, short,
tool demo, and benchmark papers. Papers in each category will be reviewed by
at least 3 members of the Program Committee.

   - *Regular Papers* (up to 16 pages, not including references) should
   present original unpublished results. We welcome theoretical papers, system
   papers, papers describing domain-specific variants of RV, and case studies
   on runtime verification.
   - *Short Papers* (up to 8 pages, not including references) may present
   novel but not necessarily thoroughly worked out ideas, for example emerging
   runtime verification techniques and applications, or techniques and
   applications that establish relationships between runtime verification and
   other domains.
   - *Tool Demonstration Papers* (up to 8 pages, not including references)
   should present a new tool, a new tool component, or novel extensions to
   existing tools supporting runtime verification. The paper must include
   information on tool availability, maturity, and selected experimental
   results, and it should provide a link to a website containing the
   theoretical background and a user guide. Furthermore, we strongly
   encourage authors to
   make their tools and benchmarks available with their submission.
   - *Benchmark papers* (up to 8 pages, not including references) should
   describe a benchmark, a suite of benchmarks, or a benchmark generator
   useful for evaluating RV tools. Papers should describe what the benchmark
   consists of and its purpose (i.e., the domain), how to obtain and use the
   benchmark, and an argument for its usefulness to the broader RV community;
   they may also include any existing results produced using the benchmark.
   We are interested both in benchmarks pertaining to real-world scenarios
   and in those containing synthetic data designed to achieve interesting
   properties. Broader definitions of benchmark, e.g. for generating
   specifications from data or diagnosing faults, are within scope. We
   encourage benchmarks that are tool agnostic, especially if they have been
   used to evaluate multiple tools. We also welcome benchmarks that contain
   verdict labels with rigorous arguments for the correctness of these
   verdicts, and benchmarks that are demonstrably challenging with respect to
   state-of-the-art tools. Benchmark papers must be accompanied by an easily
   accessible and usable benchmark submission. Papers will be evaluated by a
   separate benchmark evaluation panel, which will assess the benchmark's
   relevance, clarity, and utility as communicated by the submitted paper.


The Program Committee of RV 2021 will give a Springer-sponsored *Best Paper
Award* to one eligible regular paper.



*Special Journal Issue!* The Program Committee of RV 2021 will invite a
selection of accepted papers to submit extended versions to a special issue
of the International Journal on Software Tools for Technology Transfer
(STTT).



*Tutorial track*

Tutorials are two-to-three-hour presentations on a selected topic.
Additionally, tutorial presenters will be given the opportunity to publish a
paper of up to 20 pages in the LNCS conference proceedings. A proposal for a
tutorial must contain the subject of the tutorial, a proposed timeline, a note
on previous similar tutorials (if applicable) and how this tutorial differs
from them, and biographies of the presenters. The proposal must not exceed 2
pages.



*Program Committee*

Lu Feng, University of Virginia (co-Chair)
Dana Fisman, Ben Gurion University (co-Chair)
Houssam Abbas, Oregon State University
Wolfgang Ahrendt, Chalmers University of Technology
Domenico Bianculli, SnT Centre - University of Luxembourg
Borzoo Bonakdarpour, Michigan State University
Radu Calinescu, University of York
Chih-Hong Cheng, DENSO AUTOMOTIVE Deutschland GmbH
Jyotirmoy Deshmukh, University of Southern California
Georgios Fainekos, Arizona State University
Yliès Falcone, University of Grenoble Alpes, CNRS
Chuchu Fan, MIT
Thomas Ferrère, Imagination Technologies
Bernd Finkbeiner, CISPA Helmholtz Center for Information Security
Adrian Francalanza, University of Malta
Sylvain Hallé, Université du Québec à Chicoutimi
Klaus Havelund, Jet Propulsion Laboratory
Bettina Könighofer, Technical University of Graz
Morteza Lahijanian, University of Colorado, Boulder
Axel Legay, UCLouvain
Martin Leucker, University of Luebeck
Chung-Wei Lin, National Taiwan University
David Lo, Singapore Management University
Leonardo Mariani, University of Milano Bicocca
Nicolas Markey, IRISA, CNRS & INRIA & University of Rennes
Laura Nenzi, University of Trieste
Dejan Nickovic, Austrian Institute of Technology
Gordon Pace, University of Malta
Nicola Paoletti, University of London
Dave Parker, University of Birmingham
Doron Peled, Bar Ilan University
Violet Ka Pun, Western Norway University of Applied Sciences
Giles Reger, The University of Manchester
Cesar Sanchez, IMDEA Software Institute
Gerardo Schneider, University of Gothenburg
Julien Signoles, CEA LIST
Oleg Sokolsky, University of Pennsylvania
Stefano Tonetta, FBK-irst
Hazem Torfah, University of California, Berkeley
Dmitriy Traytel, University of Copenhagen