Talk 1: What is a typical use case of the tool?
Talk 2: We deal with graphs a lot in software analysis. Any general experience/recommendations on applying AI methods on graphs?
Talk 2: There are probably many more pairs of packages that are not connected (neg) than those that are connected (pos). Do you balance the training data? How?
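On the balancing question above: a common approach for pair-classification tasks where unconnected (negative) pairs vastly outnumber connected (positive) ones is to undersample the negatives. A minimal sketch, assuming hypothetical package-pair data (the function and pair names are illustrative, not the speakers' method):

```python
import random

def balance_pairs(pos_pairs, neg_pairs, ratio=1.0, seed=0):
    """Undersample the larger negative class so that the number of
    negatives is at most ratio * |positives|."""
    rng = random.Random(seed)
    k = min(len(neg_pairs), int(ratio * len(pos_pairs)))
    return pos_pairs + rng.sample(neg_pairs, k)

# Hypothetical package pairs (connected vs. not connected).
pos = [("pkgA", "pkgB"), ("pkgC", "pkgD")]
neg = [("pkgA", "pkgC"), ("pkgA", "pkgD"),
       ("pkgB", "pkgC"), ("pkgB", "pkgD")]
train = balance_pairs(pos, neg, ratio=1.0)
print(len(train))  # 2 positives + 2 sampled negatives
```

Oversampling positives or class-weighting the classifier are alternatives that avoid discarding data.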
Talk 1: How do you plan to engage developers in using your platform?
Talk 2: Did you try with other ML techniques (even though SVM worked well)?
Talk 2: As I understood, you use only one version (the previous one) for training. Do you think that using more versions would help?
Talk 2: What are the "content" features? Are they project-specific?
Talk 2: Considering that the problem seems to be unbalanced, did you consider analyzing the false negatives?
Talk 3: How long is a period in your experiment and what do you think is the impact of period length selection?
Talk 3: How do you aggregate developer metrics into file-level/commit-level metrics?
Talk 4: If a method stereotype changes, is it really the same method?
Why is it so important to have examples of vulnerabilities? Is it possible to identify vulnerabilities just by studying the system itself?
Talk 4: Is a command method related to the command design pattern?
Talk 1: Could it be possible to take into account distribution-specific versions (with backported security patches)?
Talk 2: SVM is very sensitive to parameter tuning; how do you handle this issue in your use case?
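On the tuning question above: the standard remedy for SVM's sensitivity to its hyperparameters is cross-validated grid search over the regularization strength C and the kernel width gamma. A minimal sketch with scikit-learn on synthetic toy data (the dataset and the parameter grid are illustrative, not the speakers' setup):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Toy stand-in for the talk's data; the point is the tuning loop.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# 5-fold cross-validated search over C and gamma for an RBF-kernel SVM.
grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.1, 1]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_)
```

Randomized search or Bayesian optimization scales better when the grid grows; either way, tuning must happen inside cross-validation to avoid leaking test data into the parameter choice.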
Talk 2: Can you handle smells other than dependency cycles?
Talk 1: How much text analysis is required to match IDs? What about false positives?