Tim Menzies, Laurie Williams, Thomas Zimmermann
Perspectives on Data Science for Software Engineering
English · Paperback / Softback
Description
About the author: Tim Menzies is a Full Professor of Computer Science at NC State University and a former software research chair at NASA. He has published over 200 papers, many in the area of software analytics. He serves on the editorial boards of (1) IEEE Transactions on Software Engineering, (2) the Automated Software Engineering journal, and (3) the Empirical Software Engineering journal. His research spans artificial intelligence, data mining, and search-based software engineering. He is best known for his work on the PROMISE open-source repository of data for reusable software engineering experiments.
Blurb: Perspectives on Data Science for Software Engineering presents the best practices of seasoned data miners in software engineering. The idea for this book arose during the 2014 conference at Dagstuhl, an invitation-only gathering of leading computer scientists who meet to identify and discuss cutting-edge informatics topics. At that conference, the question of how to transfer the knowledge of seasoned software engineers and data scientists to newcomers in the field dominated many discussions. While there are many books covering data mining and software engineering basics, they present only the fundamentals and lack the perspective that comes from real-world experience. This book offers unique insights from the community's leaders, gathered to share hard-won lessons from the trenches. Ideas are presented in digestible chapters designed to be applicable across many domains. Topics covered include data collection, data sharing, data mining, and how to apply these techniques in successful software projects. Newcomers to software engineering data science will learn the tips and tricks of the trade, while more experienced data scientists will benefit from war stories that show which traps to avoid.
List of contents
Introduction
Perspectives on data science for software engineering
Software analytics and its application in practice
Seven principles of inductive software engineering: What we do is different
The need for data analysis patterns (in software engineering)
From software data to software theory: The path less traveled
Why theory matters
Success Stories/Applications
Mining apps for anomalies
Embrace dynamic artifacts
Mobile app store analytics
The naturalness of software
Advances in release readiness
How to tame your online services
Measuring individual productivity
Stack traces reveal attack surfaces
Visual analytics for software engineering data
Gameplay data plays nicer when divided into cohorts
A success story in applying data science in practice
There's never enough time to do all the testing you want
The perils of energy mining: measure a bunch, compare just once
Identifying fault-prone files in large industrial software systems
A tailored suit: The big opportunity in personalizing issue tracking
What counts is decisions, not numbers: Toward an analytics design sheet
A large ecosystem study to understand the effect of programming languages on code quality
Code reviews are not for finding defects: Even established tools need occasional evaluation
Techniques
Interviews
Look for state transitions in temporal data
Card-sorting: From text to themes
Tools! Tools! We need tools!
Evidence-based software engineering
Which machine learning method do you need?
Structure your unstructured data first!: The case of summarizing unstructured data with tag clouds
Parse that data! Practical tips for preparing your raw data for analysis
Natural language processing is no free lunch
Aggregating empirical evidence for more trustworthy decisions
If it is software engineering, it is (probably) a Bayesian factor
Becoming Goldilocks: Privacy and data sharing in "just right" conditions
The wisdom of the crowds in predictive modeling for software engineering
Combining quantitative and qualitative methods (when mining software data)
A process for surviving survey design and sailing through survey deployment
Wisdom
Log it all?
Why provenance matters
Open from the beginning
Reducing time to insight
Five steps for success: How to deploy data science in your organizations
How the release process impacts your software analytics
Security cannot be measured
Gotchas from mining bug reports
Make visualization part of your analysis process
Don't forget the developers! (and be careful with your assumptions)
Limitations and context of research
Actionable metrics are better metrics
Replicated results are more trustworthy
Diversity in software engineering research
Once is not enough: Why we need replication
Mere numbers aren't enough: A plea for visualization
Don't embarrass yourself: Beware of bias in your data
Operational data are missing, incorrect, and decontextualized
Data science revolution in process improvement and assessment?
Correlation is not causation (or, when not to scream "Eureka!")
Software analytics for small software companies: More questions than answers
Software analytics under the lamp post (or what Star Trek teaches us about the importance of asking the right questions)
What can go wrong in software engineering experiments?
One size does not fit all
While models are good, simple explanations are better
The white-shirt effect: Learning from failed expectations
Simpler questions can lead to better insights
Continuously experiment to assess values early on
Product details
Authors | Tim Menzies, Laurie Williams, Thomas Zimmermann |
Publisher | ELSEVIER SCIENCE BV |
Languages | English |
Product format | Paperback / Softback |
Released | 30.04.2016 |
EAN | 9780128042069 |
ISBN | 978-0-12-804206-9 |
No. of pages | 408 |
Dimensions | 195 mm x 235 mm x 20 mm |
Series | Morgan Kaufmann |
Subject | Education and learning > Teaching preparation > Vocational needs |