Welcome to SCAM'19!

SCAM 2019 will be held in Cleveland, OH, USA, co-located with ICSME 2019.

The aim of the International Working Conference on Source Code Analysis & Manipulation (SCAM) is to bring together researchers and practitioners working on theory, techniques and applications which concern analysis and/or manipulation of the source code of computer systems. While much attention in the wider software engineering community is properly directed towards other aspects of systems development and evolution, such as specification, design and requirements engineering, it is the source code that contains the only precise description of the behaviour of the system. The analysis and manipulation of source code thus remains a pressing concern.

Definition of ‘Source Code’

For the purpose of clarity, ‘source code’ is taken to mean any fully executable description of a software system. It is therefore construed to include machine code, very high-level languages, and executable graphical representations of systems. The term ‘analysis’ is taken to mean any automated or semi-automated procedure that takes source code and yields insight into its meaning. The term ‘manipulation’ is taken to mean any automated or semi-automated procedure that takes and returns source code.
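
As a concrete illustration of these definitions, the following minimal sketch uses Python's standard ast module (the file name example.py and the identifiers old_name and new_name are placeholders): counting function definitions is an ‘analysis’, since it takes source code and yields insight, while renaming an identifier is a ‘manipulation’, since it takes source code and returns source code.

    import ast

    source = open("example.py").read()   # placeholder input file
    tree = ast.parse(source)

    # Analysis: takes source code and yields insight (here, a simple metric).
    num_functions = sum(isinstance(node, ast.FunctionDef)
                        for node in ast.walk(tree))
    print(f"{num_functions} function definitions found")

    # Manipulation: takes source code and returns (transformed) source code.
    class Renamer(ast.NodeTransformer):
        def visit_Name(self, node):
            if node.id == "old_name":
                node.id = "new_name"
            return node

    new_source = ast.unparse(Renamer().visit(tree))   # ast.unparse needs Python 3.9+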

Shared Keynote with VISSOFT

Oege de Moor, CEO and Founder, Semmle

Automating Variant Analysis at Scale

Please check back later for updates, and follow us on Twitter to keep informed.

Accepted Papers

Research Track

  • Abhishek Tiwari, Jyoti Prakash, Sascha Groß and Christian Hammer. LUDroid: A Large Scale Analysis of Android - Web Hybridization
  • Abu Naser Masud and Federico Ciccozzi. Towards constructing the SSA form using reaching definitions over dominance frontiers
  • Anthony Peruma, Mohamed Wiem Mkaouer, Michael J. Decker and Christian Newman. Contextualizing Rename Decisions using Refactorings and Commit Messages
  • Bin Lin, Csaba Nagy, Gabriele Bavota, Andrian Marcus and Michele Lanza. On The Quality of Identifiers in Test Code
  • Diego Marcilio, Carlo A. Furia, Rodrigo Bonifacio and Gustavo Pinto. Automatically Generating Fix Suggestions in Response to Static Code Analysis Warnings
  • Gian Luca Scoccia, Anthony Peruma, Virginia Pujols, Ivano Malavolta and Daniel Krutz. Permission Issues in Open-source Android Apps: An Exploratory Study
  • Hailong Zhang, Sufian Latif, Raef Bassily and Atanas Rountev. Introducing Privacy in Screen Event Frequency Analysis for Android Apps
  • Jeffrey Yackley, Marouane Kessentini, Gabriele Bavota, Vahid Alizadeh and Bruce Maxim. Simultaneous Refactoring and Regression Testing: A Multi-Tasking Approach
  • Kirsten Bradley and Mike Godfrey. A Study on the Effects of Exception Usage in Open-Source C++ Systems
  • Marcel Steinbeck, Rainer Koschke and Marc Rüdel. Movement Patterns and Trajectories in Three-Dimensional Software Visualization
  • Marcus Kessel and Colin Atkinson. On the Efficacy of Dynamic Behavior Comparison for Judging Functional Equivalence
  • Matheus Paixão and Paulo Henrique Maia. Rebasing Considered Harmful: A Large-scale Investigation in Modern Code Review
  • Md Masudur Rahman, Saikat Chakraborty, Gail Kaiser and Baishakhi Ray. Toward Optimal Selection of Information Retrieval Models for Software Engineering Tasks
  • Nicolas Harrand, César Soto-Valero, Martin Monperrus and Benoit Baudry. The Strengths and Behavioral Quirks of Java Bytecode Decompilers
  • Ruxandra Bob and Tim Storer. Behave Nicely! Automatic Generation of Code for Behaviour Driven Development Test Suites
  • Salvatore Geremia, Gabriele Bavota, Rocco Oliveto, Michele Lanza and Massimiliano Di Penta. Characterizing Leveraged Stack Overflow Posts
  • Seongmin Lee, David Binkley, Nicolas Gold, Robert Feldt and Shin Yoo. MOAD: Modeling Observation-based Approximate Dependency
  • Soumaya Rebai, Ousaama Ben Sghaier, Vahid Alizadeh, Marouane Kessentini and Meriem Chater. Interactive Refactoring Documentation Bot
  • Tim Henderson, Yigit Kucuk and Andy Podgurski. Evaluating Automatic Fault Localization Using Markov Processes
  • Vahid Alizadeh, Houcem Fehri and Marouane Kessentini. Less is More: From Multi-Objective to Mono-Objective Refactoring via Developers Knowledge Extraction
  • Vineeth Kashyap, Jason Ruchti, Lucja Kot, Emma Turetsky, Rebecca Swords, Shih An Pan, Julien Henry, David Melski and Eric Schulte. Automated Customized Bug-Benchmark Generation

Engineering Track

  • Wanessa Teotônio, Pablo Gonzalez, Paulo Maia and Pedro Muniz. WAL: a Tool for Diagnosing Accessibility Issues and Evolving Legacy Web Systems at Runtime
  • Marcus Kessel and Colin Atkinson. Automatically Curated Datasets
  • Isaac M. M. Gomes, Daniel Coutinho and Marcelo Schots. No Accounting for Taste: Supporting Developers' Individual Choices of Coding Styles
  • Bernhard J. Berger, Karsten Sohr and Rainer Koschke. The Architectural Security Tool Suite ArchSec

RENE Track

  • Amit Kumar Mondal, Banani Roy and Kevin A. Schneider. An Exploratory Study on Automatic Architectural Change Analysis Using Natural Language Processing Techniques

Program

Click here for further information about the presentations and the conference format.

Conference Format

SCAM 2019 will follow the working conference format, which is meant to stimulate thought-provoking discussions by keeping presentations short and focused while reserving 30 minutes at the end of each session for a plenary discussion of the session's topic.

Presentations in both tracks must therefore respect a time limit of 15 minutes. Three minutes are reserved after each presentation for one or two clarification questions. Longer questions will be postponed until the end of the session, at which point all presenters are invited back to the front of the room.

The session chair will help respect the time limits and will manage discussion and questions from the audience. The list of session chairs is available here.

If you have further questions, please do not hesitate to contact the program chairs.

sli.do

We will use sli.do to organize the discussion and questions. Event codes will be available soon.

You can use the web app to join the conversation. Sli.do is also available from your favorite app store.

Shared Keynote with VISSOFT

Automating Variant Analysis at Scale

Abstract: When a security incident happens, security teams identify the root cause and, if it is in the code, suggest a fix to the product team. In addition, they look for other instances of the same coding mistake, not just in the same code base but throughout a software portfolio; this process is called “variant analysis”. Variant analysis is a search problem, but today it is often performed manually with simple tools like grep. I’ll discuss our experience creating a query engine for code that enables security experts to quickly perform deep, accurate analysis, in the form of concise queries that can be easily modified and shared. The technology is currently run on over 135,000 open source projects on LGTM.com. I’ll present some concrete examples of vulnerabilities that were discovered this way. There are interesting challenges in visualising the results of such deep analyses on huge code bases, and even more so when they’re run at scale across tens of thousands of repositories.
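
To illustrate the contrast with plain textual search, the sketch below (in Python, using only the standard ast and pathlib modules; the repository names and the choice of eval as the risky call are hypothetical) runs one syntax-aware query over several code bases and matches only genuine call sites rather than every textual occurrence a grep would return. It is a rough stand-in for the kind of query-based variant analysis described in the abstract, not Semmle's engine.

    import ast
    import pathlib

    def calls_to(function_name, repo_root):
        """Yield (file, line) for each call to `function_name` under repo_root."""
        for path in pathlib.Path(repo_root).rglob("*.py"):
            try:
                tree = ast.parse(path.read_text(encoding="utf-8"))
            except SyntaxError:
                continue                      # skip files that do not parse
            for node in ast.walk(tree):
                if (isinstance(node, ast.Call)
                        and isinstance(node.func, ast.Name)
                        and node.func.id == function_name):
                    yield path, node.lineno

    # Variant analysis: the same query, run across a portfolio of repositories.
    for repo in ["repo-a", "repo-b"]:         # hypothetical checkouts
        for path, line in calls_to("eval", repo):
            print(f"{path}:{line}: call to eval()")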

Bio: Oege de Moor is the CEO and Founder of Semmle. Semmle's mission is to secure software, together: security researchers, developers and the community. From 1994 to 2014, Oege was a professor of computer science at the University of Oxford, where he did research in programming languages and tools. Semmle's products are used by Microsoft, Google, NASA, Uber, NASDAQ, Credit Suisse, Dell, and many other leading software organisations. It has offices in Oxford, Copenhagen, Valencia, New York, San Francisco and Seattle. The technology at Semmle is a fun combination of deep theory (if you like lattice theory, you'll like our engine), good engineering (making it work on some of the largest code bases on the planet) and cool applications (like the 0-days we report in open source). Semmle is always on the look-out for new team members.

Call for Research Track Papers

The 19th IEEE International Working Conference on Source Code Analysis and Manipulation (SCAM 2019) aims to bring together researchers and practitioners working on theory, techniques, and applications that concern analysis and/or manipulation of the source code of software systems. The term "source code" refers to any fully executable description of a software system, such as machine code, (very) high-level languages, and executable graphical representations of systems. The term "analysis" refers to any (semi-)automated procedure that yields insight into source code, while "manipulation" refers to any automated or semi-automated procedure that takes and returns source code. While much attention in the wider software engineering community is directed towards other aspects of systems development and evolution, such as specification, design, and requirements engineering, it is the source code that contains the only precise description of the behavior of a system. Hence, the analysis and manipulation of source code remains a pressing concern for which SCAM 2019 solicits high-quality paper submissions.

Covered Topics and Paper Formats

We welcome submission of papers that describe original and significant work in the field of source code analysis and manipulation. Topics of interest include, but are not limited to:

  • program transformation and refactoring
  • static and dynamic analysis
  • natural language analysis of source code artifacts
  • repository, revision, and change analysis
  • source level metrics
  • decompilation
  • bug location and prediction
  • security vulnerability analysis
  • source-level testing and verification
  • clone detection
  • concern, concept, and feature localization and mining
  • program comprehension
  • bad smell detection
  • abstract interpretation
  • program slicing
  • source level optimization
  • energy efficient source code

SCAM explicitly solicits results from any theoretical or technological domain that can be applied to these and similar topics. Submitted papers should describe original, unpublished, and significant work and must not have been previously accepted for publication nor be concurrently submitted for review in another journal, book, conference, or workshop. Papers must not exceed 12 pages (the last 2 pages can be used for references only) and must conform to the IEEE proceedings paper format guidelines. Templates in LaTeX and Word are available on IEEE's website. All submissions must be in English.

The papers should be submitted electronically in PDF format via EasyChair. Submissions will be reviewed by at least three members of the program committee, who will judge each paper on its novelty, quality, importance, evaluation, and scientific rigor. If a paper is accepted, at least one author must attend the conference and present it.

This year, we follow a double-blind reviewing process. Submitted papers must adhere to the following rules:

  • Author names and affiliations must be omitted. (The track co-chairs will check compliance before reviewing begins.)
  • References to authors' own related work must be in the third person. (For example, not "We build on our previous work..." but rather "We build on the work of...")

Please see the Double-Blind Reviewing FAQ for more information and guidance.

SCAM 2019 also features a Replication and Negative Results (RENE) paper track, soliciting reproducibility and negative results papers, and an Engineering paper track for papers that report on the design and implementation of tools for source code analysis and manipulation.

Proceedings

All accepted papers will appear in the proceedings, which will be available through the IEEE Digital Library.

Special Issue

A set of the best papers from SCAM 2019 will be invited for revision, extension, and publication in a special issue of the Journal of Systems and Software.

Important Dates for Research Papers

Abstract Deadline: June 13, 2019
Paper Deadline: June 17, 2019
Notification: July 12, 2019
Camera Ready: July 31, 2019


Call for Replication and Negative Results Papers

The 19th IEEE International Working Conference on Source Code Analysis and Manipulation (SCAM) will host a Replication and Negative Results (RENE) track for the first time in 2019. This track provides a venue for researchers to submit papers reporting (1) replications of previous empirical studies (including controlled experiments, case studies, and surveys) and (2) important and relevant negative or null results (i.e., results that failed to show an effect but help to eliminate useless hypotheses, thereby reorienting researchers toward more promising research paths) related to source code analysis and manipulation (see the list of topics in the Research Track).

*Replication studies*: Papers in this category must go beyond simply re-implementing an algorithm and/or re-running the artifacts provided by the original paper. Such submissions should apply the approach to at least a partially new data set (open-source or proprietary). It is therefore possible to use available infrastructure to conduct measurements and experiments, but with different or extended datasets, conditions, scenarios, etc. Replication studies can either strengthen the results of the original study by increasing external validity with additional data or provide new insights into the variables that may impact the results. A replication paper should clearly report on the results that the authors were able to reproduce as well as on the aspects of the work that were irreproducible.

*Negative results papers*: In this category we seek papers that report negative results for all types of software engineering research related to source code analysis and manipulation (qualitative, quantitative, case study, experiment, etc.). Negative results are important contributions to scientific knowledge because they allow us to prune our hypothesis space. As Walter Tichy writes, "Negative results, if trustworthy, are extremely important for narrowing down the search space. They eliminate useless hypotheses and thus reorient and speed up the search for better approaches."

Evaluation Criteria

Both Reproducibility Studies and Negative Results submissions will be evaluated according to the following standards:

  • Depth and breadth of the empirical studies
  • Clarity of writing
  • Appropriateness of conclusions
  • Amount of useful, actionable insights
  • Deep discussion regarding the implications of the negative results or new results obtained with reproducibility studies
  • Availability of artifacts
  • Underlying methodological rigor and detailed description of procedures. For example, a negative result due primarily to misaligned expectations or to a lack of statistical power (small samples) is not a good submission; the negative result should stem from a lack of effect, not from a lack of methodological rigor.
  • Clear descriptions of the differences between the original setup and the one used in the study (for the case of reproducibility studies).
  • Most importantly, we expect replication studies to clearly point out the artifacts the study is built upon and to provide links to all of them in the submission (the only exception is for papers that reproduce results on proprietary datasets that cannot be publicly released). The paper should describe any changes to the original study design made during the replication, along with a justification for each change. It should also contain a discussion section that compares the findings of the original and replication studies and describes the new knowledge gained from the replication, along with any lessons learned from performing it. Partial replications are welcome as long as the paper clearly states which parts of the study were replicated and which parts are new.

Submission Instructions

Submissions must be original, in the sense that the findings and writing have not been previously published and are not under consideration elsewhere. Papers must not exceed 10 pages for the main text, inclusive of figures, tables, and appendices; references only may be included on up to 2 additional pages. Papers must conform to the IEEE proceedings paper format guidelines and must be clearly marked as RENE papers. Templates in LaTeX and Word are available on IEEE's website. All submissions must be in English.

The papers should be submitted electronically in PDF format via EasyChair. Submissions will be reviewed by at least three members of the program committee, who will judge each paper on its novelty, quality, importance, evaluation, and scientific rigor. If a paper is accepted, at least one author must attend the conference and present it.

This year, we follow a double-blind reviewing process. Submitted papers must adhere to the following rules:

  • Author names and affiliations must be omitted. (The track co-chairs will check compliance before reviewing begins.)
  • References to authors' own related work must be in the third person. (For example, not "We build on our previous work..." but rather "We build on the work of...")

Please see the Double-Blind Reviewing FAQ for more information and guidance.

Proceedings

All accepted papers will appear in the proceedings, which will be available through the IEEE Digital Library.

Important Dates

Abstract Deadline: June 20, 2019
Paper Deadline: June 25, 2019
Notification: July 30, 2019
Camera Ready: TBD


Call for Engineering Track Papers

In addition to the research track (see the separate CFP), the 19th IEEE International Working Conference on Source Code Analysis and Manipulation (SCAM 2019) will also feature an Engineering track. This track welcomes six-page papers that report on the design and implementation of tools for source code analysis and manipulation, as well as on libraries, infrastructure, and the real-world studies enabled by these advances. To be clear, this is not the addition of a new track to SCAM but rather a significant expansion of the scope of the tools track of previous SCAMs.

What artefacts qualify as “engineering track” material?

  • tools: software (or hardware!) programs that facilitate SCAMmy activities.
  • libraries: reusable API-enabled frameworks for the above.
  • infrastructure: while libraries are purely software, infrastructure can include projects that provide/facilitate access to data and analysis.
  • data: reusable datasets for other researchers to replicate and innovate with.
  • real-world studies enabled by these advances. Here the focus is on how the tool, infrastructure, etc. enabled the study, and not so much on the study itself. The novelty of the research question is less important than the engineering challenges faced in the study.

A successful SCAM engineering track paper should:

  1. Fall under the topics mentioned for the SCAM 2019 research track.
  2. Discuss engineering work and artefacts that have NOT been published before. However, previous work involving the tool, in which the tool itself was not the main contribution, is acceptable.
  3. Motivate the use cases (and hence the existence) of the engineering work.
  4. Relate the engineering project to earlier work, if any.
  5. Describe the experiences gained in developing this contribution.

Optionally (and encouraged):

  1. Include any empirical results or user feedback.
  2. Contain the URL of a website where the tool/library/data etcetera can be downloaded, together with example data and clear installation guidelines, preferably but not necessarily open source.
  3. Contain the URL to a video demonstrating the usage of the contribution.

Note that the submission limit is six pages, in contrast to the two to four pages of traditional tool demo papers. The expectation is that authors use the space to discuss the artefact's motivation, design, and use cases in much more detail. For example, a use case can be well illustrated by a demo scenario with screenshots.

Each submission will be reviewed by members of the engineering track program committee. Authors of accepted papers will be required to present their artefacts at the conference. All accepted engineering track papers will be published in the conference proceedings. The key criterion for acceptance is that the paper should (a) follow the above-mentioned guidelines and (b) make an original contribution that can benefit practitioners in the field now and/or others designing and building artefacts for source code analysis and manipulation. The artefacts can range from an early research prototype to a polished product ready for deployment. Papers about commercial products are allowed, as long as the guidelines described above are followed.

Videos and other demo material may be taken into account by reviewers as they review the paper. However, such material will not become part of the permanent record of the conference, so the paper should be self-contained. In order to preserve the anonymity of the reviewers, such material should be hosted on an anonymous public source (e.g., YouTube), or made available in such a way that the tools chair can download it once and redistribute it to the reviewers.

Proceedings

All accepted papers will appear in the proceedings, which will be published by the IEEE Computer Society Press.

Special Issue

A set of the best papers from SCAM 2019 will be invited for revision, extension, and publication in a special issue of the Journal of Systems and Software.

Important Dates

Abstract Deadline: June 21, 2019
Paper Deadline: June 24, 2019
Notification: July 12, 2019
Camera Ready: TBD

Submission

Submissions should be at most six pages, in IEEE format, and submitted via EasyChair. Please use the IEEE templates when preparing your manuscript.

General Chair
Research Track Program Co-Chairs
Replication and Negative Results Track Co-Chairs
Engineering Track Program Co-Chairs
Proceedings Co-Chairs
Local Chair
Finance Chair
Awards Committee Co-Chairs
Publicity Chair
Social Media Chair
Web Chair

Research Track

Chairs
Members


RENE Track

Chairs
Members

Engineering Track

Chairs
Members

SCAM Steering Committee

Charter

The International Working Conference on Source Code Analysis & Manipulation (SCAM) is governed by the steering committee, following a community-ratified steering committee charter (v1.2, adopted in 2012).

Cleveland, OH, USA

More info via ICSME

Fun & Merchandise

Sponsorship Opportunities

What is SCAM?

  • Flagship gathering of the source code analysis community
  • Attended by a focused gathering of 50 to 80 members of academia, industry, and government
  • Professional development: keynotes by field leaders, latest research, engineering track

Benefits to Supporters

  • Excellent recruiting venue for highly qualified software engineers
  • Invitation to Social Gatherings
  • Organization's logo on SCAM publicity materials including:
    • Conference website, proceedings, and program
    • Signage and banners at the conference
  • Provide corporate information to attendees

Support Level Opportunities

  • Three levels of support:
    • Silver
    • Gold
    • Platinum
  • Sponsorship can be associated with a specific conference activity.

Banquet Address

  • An opportunity at the conference banquet to give a brief introduction to your company.

Level      Social Functions   Conference Passes   Logo on Publicity   Gift in the Bag   Banquet Address
Platinum   3
Gold       2
Silver     1

For further information, please contact the general chair, Chanchal K. Roy.