[TYPES] ASE05 Workshop Software Assurance Tools Techniques and Metrics CfP

Yunwen Ye yunwen at l3d.cs.colorado.edu
Thu Jul 28 15:39:52 EDT 2005

  National Institute of Standards and Technology (NIST) workshop on

	  Software Assurance Tools, Techniques, and Metrics

			  7-8 November 2005

		       Co-located with ASE 2005
		     Long Beach, California, USA
Funded in part by the Department of Homeland Security (DHS), the
National Institute of Standards and Technology (NIST) has started a
long-term, ambitious project to improve software security assurance
tools.  Security is the ability of a system to maintain the
confidentiality, integrity, and availability of information processed
and stored by a computer.  Software security assurance tools are those
that help make software more secure, either by building security into
software or by determining how secure software is.  Among the
project's goals are to
  (1) develop a taxonomy of software security flaws and vulnerabilities,
  (2) develop a taxonomy of software security assurance (SSA) tool
        functions and techniques that detect or prevent flaws, and
  (3) develop testable specifications of SSA functions and explicit
        tests to evaluate how closely tools implement the functions.
        The test material includes reference sets of buggy code.
These goals extend into all phases of the software life cycle, from
requirements capture through design and implementation to operation
and auditing.

The goal of the workshop is to convene researchers, developers, and
government and industrial users of SSA tools to
   * discuss and refine the taxonomy of flaws and the taxonomy of
      functions, which are under development,
   * come to a consensus on which SSA functions should first have
      specifications and standard tests developed,
   * gather SSA tools suppliers for "target practice" on reference
      datasets of code, and
   * identify gaps or research needs in SSA functions.


Sets of code with known flaws and vulnerabilities, together with
corresponding correct versions, can serve as references for tool
testing, easing research and providing a standard of evaluation.
Working with others, we will bring reference datasets of many types of
code, such as Java, C, binaries, and bytecode.  We welcome
contributions of code you've used.

To help validate the reference datasets, we solicit proposals not
exceeding 2 pages to participate in SSA tool "target practice" on the
datasets.  Tools can range from university projects to commercial
products.  Participation is intended to demonstrate the state of the
art in finding flaws; consequently, proposals should not be
marketing write-ups but should highlight technical contributions:
techniques used, precision achieved, classes of vulnerabilities
detected, suggestions for extensions to and improvements of the
reference datasets, etc.  Participants are expected to provide their
own equipment.


SATTM (Software Assurance Tools, Techniques, and Metrics) encourages
contributions describing basic research, novel applications, and
experience relevant to SSA tools and their evaluation.  Topics of
particular interest include:

        - Benchmarks or reference datasets for SSA tools
        - Comparisons of tools
        - ROI effectiveness of SSA functions
        - Flaw-catching effectiveness of SSA functions
        - Evaluating SSA tools
        - Gaps or research needs in SSA functions
        - SSA tool metrics
        - Software security assurance metrics
        - Surveys of SSA tools
        - Relation between flaws and the techniques that catch them
        - Taxonomy of software security flaws and vulnerabilities
        - Taxonomy of SSA functions or techniques


Papers should not exceed 8 pages in the conference format
(http://www.acm.org/sigs/pubs/proceed/template.html).  Papers
exceeding the length restriction will not be reviewed.  Papers will be
reviewed by at least two program committee members.  All papers should
clearly identify their novel contributions.  All papers should be
submitted electronically in PDF format by 19 August 2005. Information
regarding electronic submission will be available at the workshop
web site.


Accepted papers will be published in the workshop proceedings.  The
workshop proceedings, along with a summary of discussions and the
output of the reference dataset "target practice", will be published
as a NIST Special Publication.


Program Committee:

Freeland Abbott		Georgia Tech
Jim Alves-Foss		U. Idaho
Paul Ammann		George Mason U.
Paul E. Black           NIST
Elizabeth Fong		NIST
Michael Hicks		U. Maryland
Michael Kass            NIST
Michael Koo		NIST
Richard Lippmann	MIT
Robert A. Martin        MITRE Corp.
W. Bradley Martin	NSA
Samuel Redwine		James Madison U.
Larry D. Wagoner	NSA
Jeffrey M. Voas		SAIC


Important Dates:

19 Aug:  Paper and tool proposal submission deadline
19 Sep:  Paper and proposal notification
15 Oct:  Final camera-ready copy due
7-8 Nov: Workshop
