Dr. Jannik Fischbach

Munich, Bavaria, Germany
922 followers · 500+ connections

Experience and Education

  • Netlight

Licenses and Certifications

Volunteering

Publications

  • A Live Extensible Ontology of Quality Factors for Textual Requirements

    IEEE 30th International Requirements Engineering Conference

    Abstract: Quality factors like passive voice or sentence length are commonly used in research and practice to evaluate the quality of natural language requirements since they indicate defects in requirements artifacts that potentially propagate to later stages in the development life cycle. However, as a research community, we still lack a holistic perspective on quality factors. This inhibits not only a comprehensive understanding of the existing body of knowledge but also the effective use and evolution of these factors. To this end, we propose an ontology of quality factors for textual requirements, which includes (1) a structure framing quality factors and related elements and (2) a central repository and web interface making these factors publicly accessible and usable. We contribute the first version of both by applying a rigorous ontology development method to 105 eligible primary studies and construct a first version of the repository and interface. We illustrate the usability of the ontology and invite fellow researchers to a joint community effort to complete and maintain this knowledge repository. We envision our ontology to reflect the community's harmonized perception of requirements quality factors, guide reporting of new quality factors, and provide central access to the current body of knowledge.

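    The abstract describes the ontology only at a high level. As a purely illustrative sketch (all field names below are assumptions, not the structure proposed in the paper), a repository entry for a quality factor might be modelled along these lines:

```python
# Hypothetical sketch of a repository entry for a requirements quality factor.
# Field names are illustrative assumptions, not the ontology from the paper.
from dataclasses import dataclass, field
from typing import List


@dataclass
class QualityFactor:
    name: str                      # e.g., "passive voice"
    description: str               # what the factor measures in a requirement
    artifact_granularity: str      # e.g., "sentence", "requirement", "document"
    reported_impact: List[str] = field(default_factory=list)  # downstream activities affected
    sources: List[str] = field(default_factory=list)          # primary studies reporting the factor


passive_voice = QualityFactor(
    name="passive voice",
    description="Sentence uses passive constructions, which can obscure the acting agent.",
    artifact_granularity="sentence",
    reported_impact=["test case derivation", "ambiguity detection"],
    sources=["primary study #42"],
)
print(passive_voice.name, "->", passive_voice.reported_impact)
```
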
  • Automated Generation of Test Models from Semi-Structured Requirements

    IEEE 27th International Requirements Engineering Conference Workshops (REW)

    Context: Model-based testing is an instrument for automated generation of test cases. It requires identifying requirements in documents, understanding them syntactically and semantically, and then translating them into a test model. Cause-Effect Graphs (CEGs) are one lightweight language for these test models and can be used to derive test cases.

    Problem: The creation of test models is laborious and we lack an automated solution that covers the entire process from requirement detection to test model creation. In addition, the majority of requirements is expressed in natural language (NL), which is hard to translate to test models automatically.

    Principal Idea: We build on the fact that not all NL requirements are equally unstructured. We found that 14 % of the lines in requirements documents of our industry partner contain "pseudo-code"-like descriptions of business rules. We apply Machine Learning to identify such semi-structured requirements descriptions and propose a rule-based approach for their translation into CEGs.

    Contribution: We make three contributions: (1) an algorithm for the automatic detection of semi-structured requirements descriptions in documents, (2) an algorithm for the automatic translation of the identified requirements into a CEG and (3) a study demonstrating that our proposed solution leads to 86% time savings for test model creation without loss of quality.

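    To make the CEG idea concrete, here is a minimal, hypothetical sketch of deriving test cases from a cause-effect combination; the brute-force enumeration below is for illustration only and is not the rule-based translation or the optimized test derivation described in the paper:

```python
# Minimal illustrative Cause-Effect Graph (CEG): causes combined by a boolean
# operator determine an effect. Test cases are derived by enumerating cause
# assignments; real approaches prune this to a minimal covering set.
from itertools import product


def derive_test_cases(causes, operator):
    """Return (assignment, expected_effect) pairs for an AND/OR combination of causes."""
    combine = all if operator == "AND" else any
    cases = []
    for values in product([False, True], repeat=len(causes)):
        assignment = dict(zip(causes, values))
        cases.append((assignment, combine(values)))
    return cases


# "If the user is registered AND the password is correct, then access is granted."
for assignment, effect in derive_test_cases(["registered", "password_correct"], "AND"):
    print(assignment, "-> access_granted =", effect)
```
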
  • Automatic creation of acceptance tests by extracting conditionals from requirements: NLP approach and case study

    Journal of Systems and Software

    Abstract: Acceptance testing is crucial to determine whether a system fulfills end-user requirements. However, the creation of acceptance tests is a laborious task entailing two major challenges: (1) practitioners need to determine the right set of test cases that fully covers a requirement, and (2) they need to create test cases manually due to insufficient tool support. Existing approaches for automatically deriving test cases require semi-formal or even formal notations of requirements, though unrestricted natural language is prevalent in practice. In this paper, we present our tool-supported approach CiRA (Conditionals in Requirements Artifacts) capable of creating the minimal set of required test cases from conditional statements in informal requirements. We demonstrate the feasibility of CiRA in a case study with three industry partners. In our study, out of 578 manually created test cases, 71.8 % can be generated automatically. Additionally, CiRA discovered 80 relevant test cases that were missed in manual test case design. CiRA is publicly available at www.cira.bth.se/demo/.

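    As an illustration of what a minimal test set for a conditional can look like, the sketch below applies one common cause-effect coverage heuristic (all causes true, plus one test per individually violated cause); whether this matches CiRA's exact minimization is an assumption:

```python
# Sketch of one common coverage heuristic for "if C1 and C2 and ... then E":
# one positive test with all causes true, plus one negative test per cause in
# which only that cause is false. Whether this matches CiRA's minimal set
# exactly is an assumption of this sketch.
def minimal_test_set(causes):
    tests = [({c: True for c in causes}, True)]              # all causes hold -> effect expected
    for violated in causes:
        assignment = {c: (c != violated) for c in causes}    # only one cause violated
        tests.append((assignment, False))                    # effect must not occur
    return tests


for assignment, expected in minimal_test_set(["valid_login", "account_active"]):
    print(assignment, "-> expect effect:", expected)
```
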
  • Automatic Detection of Causality in Requirement Artifacts: the CiRA Approach

    International Working Conference on Requirements Engineering: Foundation for Software Quality (REFSQ 2021)

    🏆 Awarded as the Best Research Paper at the conference 🏆

    Abstract: System behavior is often expressed by causal relations in requirements (e.g., If event 1, then event 2). Automatically extracting this embedded causal knowledge supports not only reasoning about requirements dependencies, but also various automated engineering tasks such as seamless derivation of test cases. However, causality extraction from natural language is still an open research challenge as existing approaches fail to extract causality with reasonable performance. We understand causality extraction from requirements as a two-step problem: First, we need to detect if requirements have causal properties or not. Second, we need to understand and extract their causal relations. At present, though, we lack knowledge about the form and complexity of causality in requirements, which is necessary to develop a suitable approach addressing these two problems. We conduct an exploratory case study with 14,983 sentences from 53 requirements documents originating from 18 different domains and shed light on the form and complexity of causality in requirements. Based on our findings, we develop a tool-supported approach for causality detection (CiRA). This constitutes a first step towards causality extraction from NL requirements. We report on a case study and the resulting tool-supported approach for causality detection in requirements. Our case study corroborates, among other things, that causality is, in fact, a widely used linguistic pattern to describe system behavior, as about a third of the analyzed sentences are causal. We further demonstrate that our tool CiRA achieves a macro-F1 score of 82 % on real-world data and that it outperforms related approaches with an average gain of 11.06 % in macro-Recall and 11.43 % in macro-Precision. Finally, we disclose our open data sets as well as our tool to foster the discourse on the automatic detection of causality in the RE community.

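    The detection step can be framed as binary sentence classification. The sketch below shows this framing with a generic BERT checkpoint and a toy input; the model choice and labels are assumptions, and the classification head is untrained here, so the actual CiRA model and training setup are not reproduced:

```python
# Sketch of causality detection framed as binary sentence classification.
# The checkpoint (bert-base-cased) and the toy inputs are assumptions of this
# sketch, not the exact CiRA setup.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)

sentences = [
    "If the temperature exceeds 90 degrees, the system shall shut down.",  # causal
    "The user interface shall be available in English and German.",        # non-causal
]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits      # untrained head: predictions are arbitrary
predictions = logits.argmax(dim=-1)      # after fine-tuning: 1 = causal, 0 = non-causal
print(predictions.tolist())
```
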
  • Automatic ESG Assessment of Companies by Mining and Evaluating Media Coverage Data: NLP Approach and Tool

    2023 IEEE International Conference on Big Data (BigData)

    [Context:] Society increasingly values sustainable corporate behaviour, impacting corporate reputation and customer trust. Hence, companies regularly publish sustainability reports to shed light on their impact on environmental, social, and governance (ESG) factors. [Problem:] Sustainability reports are written by companies and therefore considered a company-controlled source. Contrarily, studies reveal that non-corporate channels (e.g., media coverage) represent the main driver for ESG transparency. However, analysing media coverage regarding ESG factors is challenging since (1) the amount of published news articles grows daily, (2) media coverage data does not necessarily deal with an ESG-relevant topic, meaning that it must be carefully filtered, and (3) the majority of media coverage data is unstructured. [Research Goal:] We aim to automatically extract ESG-relevant information from textual media reactions to calculate an ESG score for a given company. Our goal is to reduce the cost of ESG data collection and make ESG information available to the general public. [Contribution:] Our contributions are three-fold: First, we publish a corpus of 432,411 news headlines annotated as being environmental-, governance-, social-related, or ESG-irrelevant. Second, we present our tool-supported approach called ESG-Miner, capable of automatically analysing and evaluating corporate ESG performance headlines. Third, we demonstrate the feasibility of our approach in an experiment and apply the ESG-Miner on 3000 manually labelled headlines. Our approach correctly processes 96.7% of the headlines and shows great performance in detecting environmental-related headlines and their correct sentiment.

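    The abstract does not spell out the scoring formula; as a stand-in illustration only, the sketch below averages the sentiment of ESG-relevant headlines per dimension after irrelevant ones are filtered out:

```python
# Illustrative aggregation of labelled headlines into per-dimension ESG scores.
# The actual ESG-Miner scoring formula is not reproduced here; this simply
# averages sentiment (-1 negative, +1 positive) per dimension as a stand-in.
from collections import defaultdict

# (dimension, sentiment) pairs as a classifier might emit them
labelled_headlines = [
    ("environmental", +1),
    ("environmental", -1),
    ("social", +1),
    ("governance", -1),
    ("irrelevant", 0),        # filtered out below
]

scores = defaultdict(list)
for dimension, sentiment in labelled_headlines:
    if dimension != "irrelevant":
        scores[dimension].append(sentiment)

esg_score = {dim: sum(vals) / len(vals) for dim, vals in scores.items()}
print(esg_score)   # e.g., {'environmental': 0.0, 'social': 1.0, 'governance': -1.0}
```
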
  • CATE: CAusality Tree Extractor from Natural Language Requirements

    IEEE 29th International Requirements Engineering Conference Workshops (REW)

    Abstract: Causal relations (If A, then B) are prevalent in requirements artifacts. Automatically extracting causal relations from requirements holds great potential for various RE activities (e.g., automatic derivation of suitable test cases). However, we lack an approach capable of extracting causal relations from natural language with reasonable performance. In this paper, we present our tool CATE (CAusality Tree Extractor), which is able to parse the composition of a causal relation as a tree structure. CATE not only provides an overview of causes and effects in a sentence, but also reveals their semantic coherence by translating the causal relation into a binary tree. We encourage fellow researchers and practitioners to use CATE at https://causalitytreeextractor.com/

  • Causality in requirements artifacts: prevalence, detection, and impact

    Requirements Engineering Journal

    Abstract: Causal relations in natural language (NL) requirements convey strong, semantic information. Automatically extracting such causal information enables multiple use cases, such as test case generation, but it also requires reliably detecting causal relations in the first place. Currently, this is still a cumbersome task as causality in NL requirements is still barely understood and, thus, barely detectable. In our empirically informed research, we aim at better understanding the notion of causality and supporting the automatic extraction of causal relations in NL requirements. In a first case study, we investigate 14,983 sentences from 53 requirements documents to understand the extent and form in which causality occurs. Second, we present and evaluate a tool-supported approach, called CiRA, for causality detection. We conclude with a second case study where we demonstrate the applicability of our tool and investigate the impact of causality on NL requirements. The first case study shows that causality constitutes around 28 % of all NL requirements sentences. We then demonstrate that our detection tool achieves a macro-F1 score of 82 % on real-world data and that it outperforms related approaches with an average gain of 11.06 % in macro-Recall and 11.43 % in macro-Precision. Finally, our second case study corroborates the positive correlations of causality with features of NL requirements. The results strengthen our confidence in the eligibility of causal relations for downstream reuse, while our tool and publicly available data constitute a first step in the ongoing endeavors of utilizing causality in RE and beyond.

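    For readers unfamiliar with the reported metrics: macro-averaged precision, recall, and F1 are computed per class and then averaged with equal weight, as in this short example (toy labels, not the study data):

```python
# How the reported macro-averaged metrics are computed: precision, recall and
# F1 are calculated per class and then averaged with equal class weight.
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = causal, 0 = non-causal (toy labels)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

print("macro-Precision:", precision_score(y_true, y_pred, average="macro"))
print("macro-Recall:   ", recall_score(y_true, y_pred, average="macro"))
print("macro-F1:       ", f1_score(y_true, y_pred, average="macro"))
```
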
  • CiRA: A Tool for the Automatic Detection of Causal Relationships in Requirements Artifacts

    Joint Workshops of the 27th International Conference on Requirements Engineering (REFSQ)

    Abstract: Requirements often specify the expected system behavior by using causal relations (e.g., If A, then B). Automatically extracting these relations supports, among others, two prominent RE use cases: automatic test case derivation and dependency detection between requirements. However, existing tools fail to extract causality from natural language with reasonable performance. In this paper, we present our tool CiRA (Causality detection in Requirements Artifacts), which represents a first step towards automatic causality extraction from requirements. We evaluate CiRA on a publicly available data set of 61 acceptance criteria (causal: 32; non-causal: 29) describing the functionality of the German Corona-Warn-App. We achieve a macro F1 score of 83%, which corroborates the feasibility of our approach.

  • Explanation Needs in App Reviews: Taxonomy and Automated Detection

    IEEE 31st International Requirements Engineering Conference Workshops (REW)

    Explainability, i.e. the ability of a system to explain its behavior to users, has become an important quality of software-intensive systems. Recent work has focused on methods for generating explanations for various algorithmic paradigms (e.g., machine learning, self-adaptive systems). There is relatively little work on what situations and types of behavior should be explained. There is also a lack of support for eliciting explainability requirements. In this work, we explore the need for explanation expressed by users in app reviews. We manually coded a set of 1,730 app reviews from 8 apps and derived a taxonomy of Explanation Needs. We also explore several approaches to automatically identify Explanation Needs in app reviews. Our best classifier identifies Explanation Needs in 486 unseen reviews of 4 different apps with a weighted F-score of 86%. Our work contributes to a better understanding of users' Explanation Needs. Automated tools can help engineers focus on these needs and ultimately elicit valid Explanation Needs.

  • Fine-grained causality extraction from natural language requirements using recursive neural tensor networks

    IEEE 29th International Requirements Engineering Conference Workshops (REW)

    Context: Causal relations (e.g., If A, then B) are prevalent in functional requirements. For various applications of AI4RE, e.g., the automatic derivation of suitable test cases from requirements, automatically extracting such causal statements is a basic necessity.

    Problem: We lack an approach that is able to extract causal relations from natural language requirements in fine-grained form. Specifically, existing approaches do not consider the combinatorics between causes and effects. They also do not allow splitting causes and effects into more granular text fragments (e.g., variable and condition), making the extracted relations unsuitable for automatic test case derivation.

    Objective & Contributions: We address this research gap and make the following contributions: First, we present the Causality Treebank, which is the first corpus of fully labeled binary parse trees representing the composition of 1,571 causal requirements. Second, we propose a fine-grained causality extractor based on Recursive Neural Tensor Networks. Our approach is capable of recovering the composition of causal statements written in natural language and achieves a F1 score of 74% in the evaluation on the Causality Treebank. Third, we disclose our open data sets as well as our code to foster the discourse on the automatic extraction of causality in the RE community.

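    The characteristic ingredient of a Recursive Neural Tensor Network is its composition function, which merges two child phrase vectors into a parent vector via a tensor term in addition to the usual linear term. The toy-sized sketch below illustrates that composition step only; dimensions, initialization, and the tree construction are assumptions, not the paper's trained model:

```python
# The core of a Recursive Neural Tensor Network (RNTN) composition step, which
# merges two child phrase vectors into a parent vector; dimensions are toy-sized.
import numpy as np

d = 4                                        # embedding dimension (illustrative)
rng = np.random.default_rng(0)
V = rng.standard_normal((d, 2 * d, 2 * d))   # tensor capturing multiplicative interactions
W = rng.standard_normal((d, 2 * d))          # standard recursive-NN weight matrix
b = np.zeros(d)

def compose(left, right):
    """Compose two child node vectors into their parent node vector."""
    c = np.concatenate([left, right])                      # [left; right]
    tensor_term = np.array([c @ V[k] @ c for k in range(d)])
    return np.tanh(tensor_term + W @ c + b)

cause = rng.standard_normal(d)     # vector for a cause phrase
effect = rng.standard_normal(d)    # vector for an effect phrase
parent = compose(cause, effect)    # vector for the combined causal-relation node
print(parent)
```
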
  • How Do Practitioners Interpret Conditionals in Requirements?

    International Conference on Product-Focused Software Process Improvement (PROFES)

    Context: Conditional statements like “If A and B then C” are core elements for describing software requirements. However, there are many ways to express such conditionals in natural language and also many ways in which they can be interpreted. We hypothesize that conditional statements in requirements are a source of ambiguity, potentially affecting downstream activities such as test case generation negatively.

    Objective: Our goal is to understand how specific conditionals are interpreted by readers who work with requirements.

    Method: We conduct a descriptive survey with 104 RE practitioners and ask how they interpret 12 different conditional clauses. We map their interpretations to logical formulas written in Propositional (Temporal) Logic and discuss the implications.

    Results: The conditionals in our tested requirements were interpreted ambiguously. We found that practitioners disagree on whether an antecedent is only sufficient or also necessary for the consequent. Interestingly, the disagreement persists even when the system behavior is known to the practitioners. We also found that certain cue phrases are associated with specific interpretations.

    Conclusion: Conditionals in requirements are a source of ambiguity and there is not just one way to interpret them formally. This affects any analysis that builds upon formalized requirements (e.g., inconsistency checking, test-case generation). Our results may also influence guidelines for writing requirements.

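    The core ambiguity the survey probes is whether practitioners read the antecedent as merely sufficient (an implication) or also necessary (a biconditional). The sketch below makes the difference explicit with truth tables:

```python
# Reading "If A then C" as A being merely sufficient (implication) versus also
# necessary (biconditional) yields different truth values when A is false but C occurs.
from itertools import product

def implication(a, c):      # A is sufficient for C
    return (not a) or c

def biconditional(a, c):    # A is sufficient and necessary for C
    return a == c

print("A     C     A->C   A<->C")
for a, c in product([True, False], repeat=2):
    print(f"{a!s:<5} {c!s:<5} {implication(a, c)!s:<6} {biconditional(a, c)!s}")
```
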
  • Integrated and Iterative Requirements Analysis and Test Specification: A Case Study at Kostal

    ACM/IEEE 24th International Conference on Model Driven Engineering Languages and Systems (MODELS)

    Abstract: Currently, practitioners follow a top-down approach in automotive development projects. However, recent studies have shown that this top-down approach is not suitable for the implementation and testing of modern automotive systems. Specifically, practitioners increasingly fail to specify requirements and tests for systems with complex component interactions (e.g., e-mobility systems). In this paper, we address this research gap and propose an integrated and iterative scenario-based technique for the specification of requirements and test scenarios. Our idea is to combine both a top-down and a bottom-up integration strategy. For the top-down approach, we use a behavior-driven development (BDD) technique to drive the modeling of high-level system interactions from the user's perspective. For the bottom-up approach, we discovered that natural language processing (NLP) techniques are suited to make textual specifications of existing components accessible to our technique. To integrate both directions, we support the joint execution and automated analysis of system-level interactions and component-level behavior. We demonstrate the feasibility of our approach by conducting a case study at Kostal (Tier 1 supplier). The case study corroborates, among other things, that our approach supports practitioners in improving requirements and test specifications for integrated system behavior.

  • Requirements quality research: a harmonized theory, evaluation, and roadmap

    Requirements Engineering Journal

    High-quality requirements minimize the risk of propagating defects to later stages of the software development life cycle. Achieving a sufficient level of quality is a major goal of requirements engineering. This requires a clear definition and understanding of requirements quality. Though recent publications make an effort at disentangling the complex concept of quality, the requirements quality research community lacks identity and clear structure which guides advances and puts new findings into a holistic perspective. In this research commentary, we contribute (1) a harmonized requirements quality theory organizing its core concepts, (2) an evaluation of the current state of requirements quality research, and (3) a research roadmap to guide advancements in the field. We show that requirements quality research focuses on normative rules and mostly fails to connect requirements quality to its impact on subsequent software development activities, impeding the relevance of the research. Adherence to the proposed requirements quality theory and following the outlined roadmap will be a step toward amending this gap.

  • Semi-Automated Labeling of Requirement Datasets for Relation Extraction

    14th Workshop on Building and Using Comparable Corpora (BUCC 2021)

    Creating datasets manually by human annotators is a laborious task that can lead to biased and inhomogeneous labels. We propose a flexible, semi-automatic framework for labeling data for relation extraction. Furthermore, we provide a dataset of preprocessed sentences from the requirements engineering domain, including a set of automatically created as well as hand-crafted labels. In our case study, we compare the human and automatic labels and show that there is a substantial overlap between both annotations.

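    The general idea can be illustrated as follows: simple rule-based labelling functions propose labels automatically, and their agreement with manual annotations is then measured. The rule and the agreement metric below are illustrative assumptions, not the framework from the paper:

```python
# Sketch of semi-automatic labelling: a naive rule proposes labels, and their
# overlap with manual annotations is measured. The rule is illustrative only.
from sklearn.metrics import cohen_kappa_score

def auto_label(sentence):
    """Very naive labelling rule for a 'causal' relation."""
    cues = ("if ", "when ", "because ", "in case ")
    return "causal" if sentence.lower().startswith(cues) or " if " in sentence.lower() else "other"

sentences = [
    "If the sensor fails, an alarm shall be raised.",
    "The system shall log all transactions.",
    "When the user logs out, the session shall be closed.",
]
manual = ["causal", "other", "causal"]
automatic = [auto_label(s) for s in sentences]

print("automatic labels:", automatic)
print("agreement (Cohen's kappa):", cohen_kappa_score(manual, automatic))
```
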
  • SPECMATE: Automated Creation of Test Cases from Acceptance Criteria

    IEEE 13th International Conference on Software Testing, Validation and Verification (ICST)

    Abstract: In the agile domain, test cases are derived from acceptance criteria to verify the expected system behavior. However, the design of test cases is laborious and has to be done manually due to missing tool support. Existing approaches for automatically deriving tests require semi-formal or even formal notations of acceptance criteria, though informal descriptions are mostly employed in practice. In this paper, we make three contributions: (1) a case study of 961 user stories providing an insight into how user stories are formulated and used in practice, (2) an approach for the automatic extraction of test cases from informal acceptance criteria and (3) a study demonstrating the feasibility of our approach in cooperation with our industry partner. In our study, out of 604 manually created test cases, 56 % can be generated automatically and missing negative test cases are added.

  • Towards Causality Extraction from Requirements

    IEEE 28th International Requirements Engineering Conference

    Abstract: System behavior is often based on causal relations between certain events (e.g. If event 1, then event 2). Consequently, those causal relations are also textually embedded in requirements. We want to extract this causal knowledge and utilize it to derive test cases automatically and to reason about dependencies between requirements. Existing NLP approaches fail to extract causality from natural language (NL) with reasonable performance. In this paper, we describe first steps towards building a new approach for causality extraction and contribute: (1) an NLP architecture based on Tree Recursive Neural Networks (TRNN) that we will train to identify causal relations in NL requirements and (2) an annotation scheme and a dataset that is suitable for training TRNNs. Our dataset contains 212,186 sentences from 463 publicly available requirement documents and is a first step towards a gold standard corpus for causality extraction. We encourage fellow researchers to contribute to our dataset and help us in finalizing the causality annotation process. Additionally, the dataset can also be annotated further to serve as a benchmark for other RE-relevant NLP tasks such as requirements classification.

  • Transfer Learning for Mining Feature Requests and Bug Reports from Tweets and App Store Reviews

    IEEE 29th International Requirements Engineering Conference Workshops (REW)

    Abstract: Identifying feature requests and bug reports in user comments holds great potential for development teams. However, automated mining of RE-related information from social media and app stores is challenging since (1) about 70% of user comments contain noisy, irrelevant information, (2) the amount of user comments grows daily, making manual analysis infeasible, and (3) user comments are written in different languages. Existing approaches build on traditional machine learning (ML) and deep learning (DL), but fail to detect feature requests and bug reports with high Recall and acceptable Precision, which is necessary for this task. In this paper, we investigate the potential of transfer learning (TL) for the classification of user comments. Specifically, we train both monolingual and multilingual BERT models and compare the performance with state-of-the-art methods. We found that monolingual BERT models outperform existing baseline methods in the classification of English App Reviews as well as English and Italian Tweets. However, we also observed that the application of heavyweight TL models does not necessarily lead to better performance. In fact, our multilingual BERT models perform worse than traditional ML methods.

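    The comparison starts from the same sequence-classification head attached to different pretrained encoders. The sketch below loads a monolingual and a multilingual BERT checkpoint for that purpose; the three-label scheme is an assumption, and fine-tuning on labelled user comments would follow:

```python
# Starting point of the comparison: attach the same classification head to a
# monolingual and a multilingual BERT encoder. The label set is an assumption.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

labels = ["feature_request", "bug_report", "irrelevant"]

checkpoints = {
    "monolingual (English)": "bert-base-uncased",
    "multilingual": "bert-base-multilingual-cased",
}

models = {}
for name, checkpoint in checkpoints.items():
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=len(labels))
    models[name] = (tokenizer, model)   # both would then be fine-tuned on the same labelled comments
    print(name, "->", checkpoint)
```
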
  • What Makes Agile Test Artifacts Useful? An Activity-Based Quality Model from a Practitioners' Perspective

    IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM)

    🏆 Awarded as the Best Industry Paper at the conference 🏆

    Background: The artifacts used in Agile software testing and the reasons why these artifacts are used are fairly well-understood. However, empirical research on how Agile test artifacts are eventually designed in practice and which quality factors make them useful for software testing remains sparse.

    Aims: Our objective is two-fold. First, we identify current challenges in using test artifacts to understand why certain quality factors are considered good or bad. Second, we build an Activity-Based Artifact Quality Model that describes what Agile test artifacts should look like.

    Method: We conduct an industrial survey with 18 practitioners from 12 companies operating in seven different domains.

    Results: Our analysis reveals nine challenges and 16 factors describing the quality of six test artifacts from the perspective of Agile testers. Interestingly, we observed mostly challenges regarding language and traceability, which are well-known to occur in non-Agile projects.

    Conclusions: Although Agile software testing is becoming the norm, we still have little confidence about general do's and don'ts going beyond conventional wisdom. This study is the first to distill a list of quality factors deemed important to what can be considered as useful test artifacts.

Honors and Awards

  • Ernst Denert Software-Engineering-Prize

    Gesellschaft für Informatik e.V.

    The Software Engineering Department of the German Informatics Society (GI) awards the prize annually in cooperation with the Swiss Informatics Society (SI) and the Austrian Computer Society (OCG) for an outstanding contribution to software engineering. The most important criteria for the award are the applicability and practical orientation of the research.

  • Best Research Paper Award

    International Working Conference on Requirements Engineering: Foundation for Software Quality (REFSQ)

  • Best Industry Paper Award

    IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM)

  • Best Graduate Award (Business Information Systems)

    Duale Hochschule Baden-Württemberg Mannheim

Languages

  • German

    Native or bilingual proficiency

  • English

    Full professional proficiency

  • Swedish

    Elementary proficiency
