2nd Workshop on
Validation, Analysis and Evolution of Software Tests

March 20, 2018 | co-located with SANER 2018, Campobasso, Italy

Call for Papers

Aims, scope and topics of interest.

Software projects accumulate large sets of test cases that encode valuable expert knowledge about the software under test, representing many person-years of effort. Over time, the reliability of the tests decreases, and they become difficult to understand and maintain. Extra effort is required to repair broken tests and to adapt test suites and models to evolving software systems.

The International Workshop on Validation, Analysis and Evolution of Software Tests (VST) is a unique event that brings together academics, industrial researchers, and practitioners to exchange experiences, solutions, and new ideas in applying methods, techniques, and tools from software analysis, evolution, and reengineering to advance the state of the art in test development and maintenance.

The workshop invites high-quality submissions related, but not limited, to:

 ●  Test minimization and simplification

 ●  Fault localization and automated repair

 ●  Change analysis for software tests

 ●  Test visualization and validation

 ●  Documentation analysis

 ●  Bug report analysis

 ●  Test evolution

 ●  Test case generation

 ●  Model-based testing

 ●  Combinations of the topics above


  Download Call for Papers (txt) | Flyer (pdf) | Poster (pdf)


Important Dates

All deadlines are Anywhere on Earth (AoE).

Abstract submission deadline (extended) January 15, 2018 AoE

Paper submission deadline (extended) January 19, 2018 AoE

Notifications February 9, 2018

Camera Ready February 22, 2018

Submission

Instructions and submission site.

We encourage submissions on the topics mentioned above, with a page limit of five pages in IEEE format. Papers will be reviewed by at least three program committee members in a full double-blind review process. Paper selection is based on scientific originality, novelty, and the potential to generate interesting discussions.

We also welcome fast abstracts and tool demo papers of one or two pages; these will be presented at the workshop but will not be published in written form.

Submission Instructions

  • Papers must not exceed the page limit of five pages (including all text, references, appendices, and figures)

  • Papers must conform to the IEEE formatting guidelines for conference proceedings

  • Papers must be prepared for a full double-blind review process (author names and affiliations must be omitted, references to the authors' own work must be in the third person, and any naming or referencing that would give away the authors' identity has to be avoided)

  • Papers must be original work that has neither been published elsewhere nor is under review for another publication

  • Papers must be submitted in PDF format via EasyChair at https://easychair.org/conferences/?conf=vst2018


Program

Location and schedule.

08.00-08.50 - Registration
08.50-09.00 - Welcome
 
09.00-10.00
Keynote: Summarization Techniques for Code, Change, Testing and User Feedback
Sebastiano Panichella


Keynote Slides

Abstract - Most of today's industries, from engineering to agriculture to health, are run on software. In such a context, ensuring software quality plays an important role in most current working environments and has a direct impact on any scientific and technical discipline. Software maintenance and testing have the crucial goal of discovering software bugs (or defects) as early as possible, enabling software quality assurance. However, software maintenance and testing are very expensive and time-consuming activities for developers. For this reason, in recent years several researchers in the field of Software Engineering (SE) have devoted their effort to conceiving tools that boost developer productivity during such development, maintenance, and testing tasks. In this talk, I will first discuss some empirical work we performed to understand the main socio-technical challenges developers face when joining a new software project. I will discuss how to address them with appropriate recommender systems aimed at supporting developers during program comprehension and maintenance tasks. Then, I will show how 'Summarization Techniques' are an ideal technology for supporting developers performing testing and debugging activities. Finally, I will summarize the main research advances, the current open challenges and problems, and possible future directions for boosting developer productivity.

 
10.00-10.30
Detecting Duplicate Examples in Behaviour Driven Development Specifications
Leonard Peter Binamungu, Suzanne M. Embury, and Nikolaos Konstantinou (University of Manchester, UK)

Abstract - In Behaviour-Driven Development (BDD), the behaviour of the software to be built is specified as a set of example interactions with the system, expressed using a "Given-When-Then" structure. The examples are written using customer language, and are readable by end-users. They are also executable, and act as tests that determine whether the implementation matches the desired behaviour or not. This approach can be effective in building a common understanding of the requirements, but it can also face problems. When the suites of examples grow large, they can be difficult and expensive to change. Duplication can creep in, and can be challenging to detect manually. Current tools for detecting duplication in code are also not effective for BDD examples. Moreover, human concerns of readability and clarity can arise. We present an approach for detecting duplication in BDD suites that is based around dynamic tracing, and describe an evaluation based on three open source systems.
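The core idea can be conveyed with a minimal sketch, assuming a trace-based notion of duplication similar in spirit to the paper's (this toy is hypothetical, not the authors' tool): run each scenario under Python's sys.settrace, record which production functions it exercises, and flag scenarios with identical traces as duplicate candidates.

    import sys

    def trace_calls(scenario):
        """Run a scenario and record the production functions it calls."""
        called = set()

        def tracer(frame, event, arg):
            name = frame.f_code.co_name
            # Ignore the scenario functions themselves; keep production calls.
            if event == "call" and not name.startswith("scenario"):
                called.add(name)
            return tracer

        sys.settrace(tracer)
        try:
            scenario()
        finally:
            sys.settrace(None)
        return called

    # Toy production code under test.
    def add_item(cart, item):
        cart.append(item)

    def total(cart):
        return len(cart)

    # Two "Given-When-Then" examples expressed as plain functions.
    def scenario_buy_book():
        cart = []                # Given an empty cart
        add_item(cart, "book")   # When the customer adds a book
        assert total(cart) == 1  # Then the cart holds one item

    def scenario_buy_pen():
        cart = []                # Given an empty cart
        add_item(cart, "pen")    # When the customer adds a pen
        assert total(cart) == 1  # Then the cart holds one item

    if __name__ == "__main__":
        if trace_calls(scenario_buy_book) == trace_calls(scenario_buy_pen):
            print("duplicate candidates: same production functions exercised")

A real detector would also need to weigh argument values and near-identical traces; the sketch only conveys the intuition that duplication is judged by observed behaviour rather than by textual similarity of the "Given-When-Then" wording.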


10.30-11.00 - Coffee Break
 
11.00-11.30
Automated Generation of Requirements-Based Test Cases for an Adaptive Cruise Control System
Adina Aniculaesei, Peer Denecke, Falk Howar, and Andreas Rausch (TU Clausthal, Germany)

Abstract - Checking that a complex software system conforms to an extensive catalogue of requirements is an elaborate and costly task that can no longer be managed through manual testing alone. In this paper, we construct an academic case study in which we apply automated requirements-based test case generation to the prototype of an adaptive cruise control system. We focus on two main research goals with respect to our method: (1) how much code coverage can be obtained and (2) how many faults can be found using the generated test cases. We report on our results as well as on the lessons learned.


 
11.30-12.00
A Retrospective of Production and Test Code Co-evolution in an Industrial Project
Claus Klammer, Georg Buchgeher (Software Competence Center Hagenberg, Austria), and Albin Kern (ENGEL, Austria)


 
12.00-12.30
Evaluating the Efficiency of Continuous Testing during Test-Driven Development
Serge Demeyer, Benoît Verhaeghe, Anne Etien, Nicola Anquetil, and Stéphane Ducasse (University of Antwerp, Belgium; Inria, France)

Abstract - Continuous testing is a novel feature within modern programming environments, where unit tests constantly run in the background, providing early feedback about breaking changes. One of the more challenging aspects of such a continuous testing tool is choosing the heuristic that selects the tests to run based on the changes recently applied. To help tool builders select the most appropriate test selection heuristic, we assess their efficiency in a continuous testing context. We observe on two small but representative cases that a continuous testing tool generates significant reductions in the number of tests that need to be executed. Nevertheless, these heuristics sometimes result in false negatives and thus on rare occasions discard pertinent tests.
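One family of such heuristics can be sketched in a few lines, assuming a coverage map recorded during an earlier full test run (a hypothetical illustration, not one of the heuristics evaluated in the paper): re-run every test whose recorded coverage intersects the changed methods. A stale map is exactly what produces the false negatives mentioned above.

    # Coverage map from an earlier full run: test name -> covered methods.
    COVERAGE = {
        "test_checkout": {"Cart.add", "Cart.total", "Order.place"},
        "test_discounts": {"Cart.total", "Pricing.discount"},
        "test_login": {"Auth.login"},
    }

    def select_tests(changed_methods, coverage=COVERAGE):
        """Return the tests to re-run for a set of changed methods."""
        # A test is selected when its recorded coverage intersects the
        # change; dependencies created after the map was recorded are
        # invisible, which is how pertinent tests get discarded.
        return sorted(
            test for test, covered in coverage.items()
            if covered & changed_methods
        )

    if __name__ == "__main__":
        # Editing Cart.total triggers only the two tests that cover it.
        print(select_tests({"Cart.total"}))  # ['test_checkout', 'test_discounts']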


12.30-14.00 - Lunch break

Organization

Chairs and program committee.

Program Committee

Cyrille Artho, KTH Royal Institute of Technology, Sweden (co-chair)

Árpád Beszédes, University of Szeged, Hungary

Vahid Garousi, Wageningen University, Netherlands

Mohammad Ghafari, University of Bern, Switzerland

Milos Gligoric, University of Texas at Austin, USA

Michaela Greiler, Microsoft, USA

Falk Howar, Technical University Clausthal, Germany

Takashi Kitamura, National Institute of Advanced Industrial Science and Technology (AIST), Japan

Teng Long, Google, USA

Lei Ma, Harbin Institute of Technology, China

Leonardo Mariani, University of Milan Bicocca, Italy

Rudolf Ramler, Software Competence Center Hagenberg, Austria (co-chair)

Martina Seidl, Johannes Kepler University Linz, Austria

Contact

Get in touch.

Email us