
    Benefits and Challenges of Model-Based Systems Engineering

July 23, 2021

    About this Episode

    Nataliya (Natasha) Shevchenko and Mary Popeck, both senior researchers in the CERT Division at Carnegie Mellon University’s Software Engineering Institute, discuss the use of model-based systems engineering (MBSE), which, in contrast to document-centric engineering, puts models at the center of system design. MBSE is used to support the requirements, design, analysis, verification, and validation associated with the development of complex systems.

    Recent Episodes from Software Engineering Institute (SEI) Podcast Series

    Using Large Language Models in the National Security Realm

    At the request of the White House, the Office of the Director of National Intelligence (ODNI) began exploring use cases for large language models (LLMs) within the Intelligence Community (IC). As part of this effort, ODNI sponsored the Mayflower Project at Carnegie Mellon University’s Software Engineering Institute (SEI) from May 2023 through September 2023. The Mayflower Project attempted to answer the following questions:

    1. How might the IC set up a baseline, stand-alone LLM?
    2. How might the IC customize LLMs for specific intelligence use cases?
    3. How might the IC evaluate the trustworthiness of LLMs across use cases?

In this SEI Podcast, Shannon Gallagher, AI engineering team lead, and Rachel Dzombak, special advisor to the director of the SEI's AI Division, discuss the findings and recommendations from the Mayflower Project and provide additional background information about LLMs and how they can be engineered for national security use cases.

    Atypical Applications of Agile and DevSecOps Principles

Modern software engineering practices of Agile and DevSecOps have provided a foundation for producing working software products faster and more reliably than ever before. Far too often, however, these practices do not address the non-software concerns of business mission and capability delivery, even though these concerns are critical to the successful delivery of a software product. Through our work with government organizations, we have found that expanding DevSecOps beyond product development enables other teams to increase their capabilities and improve their processes. Agile methodologies are also being used for complex system and hardware developments. In this podcast from the Carnegie Mellon University Software Engineering Institute, Lyndsi Hughes, a senior systems engineer, and David Sweeney, an associate software developer, both with the SEI CERT Division, share their experiences leveraging DevSecOps pipelines in atypical situations in support of teams focused on the capability delivery and business mission for their organizations.

    When Agile and Earned Value Management Collide: 7 Considerations for Successful Interaction

Increasingly in government acquisition of software-intensive systems, we are seeing programs use both Agile development methodology and earned value management (EVM). While there are many benefits to using both Agile and EVM, there are important considerations that software program managers must first address. In this podcast, Patrick Place, a senior engineer, and Stephen Wilson, a test engineer, both with the SEI Agile Transformation Team, discuss seven considerations for successful use of Agile and EVM.
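As context for the discussion above, EVM tracks program health by comparing the budgeted value of planned work, the budgeted value of completed work, and what that work actually cost. The following sketch (illustrative only, not drawn from the episode) computes the standard EVM variances and performance indices from those three inputs:

```python
def evm_metrics(planned_value: float, earned_value: float, actual_cost: float) -> dict:
    """Standard earned value management (EVM) metrics.

    planned_value (PV): budgeted cost of work scheduled to date
    earned_value  (EV): budgeted cost of work actually completed
    actual_cost   (AC): actual cost of the completed work
    """
    return {
        "schedule_variance": earned_value - planned_value,  # SV > 0 means ahead of schedule
        "cost_variance": earned_value - actual_cost,        # CV > 0 means under budget
        "spi": earned_value / planned_value,                # schedule performance index
        "cpi": earned_value / actual_cost,                  # cost performance index
    }

# Hypothetical example: $100k of work was planned to date, work worth
# $90k was completed, and it cost $120k to complete it.
m = evm_metrics(planned_value=100_000, earned_value=90_000, actual_cost=120_000)
# SPI = 0.9 (behind schedule), CPI = 0.75 (over budget)
```

Part of the tension the episode addresses is that these indices assume a fixed up-front plan, while Agile programs refine scope iteratively.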

    The Impact of Architecture on Cyber-Physical Systems Safety

As developers continue to build greater autonomy into cyber-physical systems (CPSs), such as unmanned aerial vehicles (UAVs) and automobiles, these systems aggregate data from an increasing number of sensors. However, more sensors not only create more, and more precise, data but also require a complex architecture to correctly transfer and process multiple data streams. This increase in complexity brings additional challenges for functional verification and validation, a greater potential for faults, and a larger attack surface. What's more, CPSs often cannot distinguish faults from attacks. To address these challenges, researchers from the SEI and Georgia Tech collaborated on an effort to map the problem space and develop proposals for solving the challenges of increasing sensor data in CPSs. In this podcast from the Carnegie Mellon University Software Engineering Institute, Jerome Hugues, a principal researcher in the SEI Software Solutions Division, discusses this collaboration and its larger body of work, Safety Analysis and Fault Detection Isolation and Recovery (SAFIR) Synthesis for Time-Sensitive Cyber-Physical Systems.

    ChatGPT and the Evolution of Large Language Models: A Deep Dive into 4 Transformative Case Studies

To better understand the potential uses of large language models (LLMs) and their impact, a team of researchers in the Carnegie Mellon University Software Engineering Institute's CERT Division conducted four in-depth case studies. The case studies span multiple domains and call for vastly different capabilities. In this podcast, Matthew Walsh, a senior data scientist in CERT, and Dominic Ross, Multi-Media Design Team lead, discuss their work in developing the four case studies as well as limitations and future uses of ChatGPT.

    The Cybersecurity of Quantum Computing: 6 Areas of Research

    Research and development of quantum computers continues to grow at a rapid pace. The U.S. government alone spent more than $800 million on quantum information science research in 2022. Thomas Scanlon, who leads the data science group in the SEI CERT Division, was recently invited to be a participant in the Workshop on Cybersecurity of Quantum Computing, co-sponsored by the National Science Foundation (NSF) and the White House Office of Science and Technology Policy, to examine the emerging field of cybersecurity for quantum computing. In this podcast from the Carnegie Mellon University Software Engineering Institute, Scanlon discusses how to create the discipline of cyber protection of quantum computing and outlines six areas of future research in quantum cybersecurity.

    User-Centric Metrics for Agile

Far too often, software programs continue to collect metrics simply because that is how it has always been done. This leads to situations where, for any given environment, a metrics program is defined by a list of metrics that must be collected. A top-down, deterministic specification of graphs or other depictions of data required by the metrics program can distract participants from the potentially useful information that the metrics reveal and illuminate. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Will Hayes, who leads the Agile Transformation Team, and Patrick Place, a principal engineer on that team, discuss with principal researcher Suzanne Miller how user stories can help put development in the context of who is using the system and lead to a conversation about why a specific metric is being collected.

    The Product Manager’s Evolving Role in Software and Systems Development

In working with software and systems teams developing technical products, Judy Hwang, a senior software engineer in the SEI CERT Division, observed that teams were not investing the time, resources, and effort required to manage the product lifecycle of a successful product. These activities include thoroughly exploring the problem space by talking to users, assessing existing solutions, understanding the competition, and positioning the product to create value for customers. In this podcast from the Carnegie Mellon University Software Engineering Institute, Hwang talks with principal researcher Suzanne Miller about the importance of implementing foundational product management principles in software and systems development and offers resources for audience members who are looking to strengthen their Agile product delivery practices.

    Measuring the Trustworthiness of AI Systems

The ability of artificial intelligence (AI) to partner with the software engineer, doctor, or warfighter depends on whether these end users trust the AI system to partner effectively with them and deliver the outcome promised. To build appropriate levels of trust, expectations must be managed for what AI can realistically deliver. In this podcast from the SEI's AI Division, Carol Smith, a senior research scientist specializing in human-machine interaction, joins design researchers Katherine-Marie Robinson and Alex Steiner to discuss how to measure the trustworthiness of an AI system as well as questions that an organization should ask before deciding whether to employ a new AI technology.

    Actionable Data in the DevSecOps Pipeline

    In this podcast from the Carnegie Mellon University Software Engineering Institute, Bill Nichols and Julie Cohen talk with Suzanne Miller about how automation within DevSecOps product-development pipelines provides new opportunities for program managers (PMs) to confidently make decisions with the help of readily available data.

    As in commercial companies, DoD PMs are accountable for the overall cost, schedule, and performance of a program. The PM’s job is even more complex in large programs with multiple software-development pipelines where cost, schedule, performance, and risk for the products of each pipeline must be considered when making decisions, as well as the interrelationships among products developed on different pipelines. Nichols and Cohen discuss how PMs can collect and transform unprocessed DevSecOps development data into useful program-management information that can guide decisions they must make during program execution. The ability to continuously monitor, analyze, and provide actionable data to the PM from tools in multiple interconnected pipelines of pipelines can help keep the overall program on track.
