Wednesday, April 15, 2020

Intelligent Design Control: Design Definition

Design Decomposition Methods

The FDA 1998 design control depiction, while useful, does not consider how development methods have changed over the last 20 years.  Some examples of this include:
  • Integration of multi-discipline design efforts.  How should software development processes (e.g., Agile) and mechanical/electrical processes (e.g., requirements, user needs) be combined into a cohesive design definition?
  • Integration of system modeling (SysML or Arcadia methods).  How is the understanding captured in logical and physical models built into the design inputs?
  • Integration of risk management and analysis methods: ISO 14971 compliance (hazard, harm, and hazardous-situation integration), fault tree analysis, root cause analysis, and the like.
  • Simulation integration.  1-D analysis, CFD, and FEA are a few methods that provide an understanding of which risk controls should be applied to a design, and why.
  • Standards integration.  A great wealth of best-practice testing and design guidance is available in standards documents.

Agile Methods

Ever since the Agile Manifesto was published in 2001, agile software development methods have been at the forefront of product development theory.

In fact, there are now more than 12 different embodiments of the Agile method employed in the design and development of software.  The tremendous success of these methods in rapidly providing working software has led to discussion about how these methods should be integrated into the physical product development world. 

Agile's preference for working software over comprehensive documentation, and an almost visceral negative response to the prescriptive creation of requirements, at first seem antithetical to the typical regulated mechanical/electrical development process.  Upon further review, however, the regulation is chiefly interested in establishing a set of design controls, which can readily be integrated into the design definition model.

The Agile deconstruction is typically as follows:
  • Outcome
  • Feature
  • Epic
  • Use Case
  • Control(s)
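One way to picture this deconstruction is as a simple linked hierarchy, where every design control traces back up through a use case, epic, and feature to an outcome.  The sketch below is illustrative only; the node structure and example titles are my own assumptions, not any tool's data model.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the Agile deconstruction levels described above.
# The node structure and example titles are assumptions, not a tool's API.

@dataclass
class Node:
    kind: str            # Outcome, Feature, Epic, Use Case, or Control
    title: str
    children: list = field(default_factory=list)

    def add(self, kind, title):
        child = Node(kind, title)
        self.children.append(child)
        return child

    def trace(self, path=()):
        """Yield every root-to-leaf trace path through the hierarchy."""
        path = path + (f"{self.kind}: {self.title}",)
        if not self.children:
            yield " -> ".join(path)
        for child in self.children:
            yield from child.trace(path)

outcome = Node("Outcome", "Safe fluid delivery")
feature = outcome.add("Feature", "Occlusion detection")
epic = feature.add("Epic", "Pressure monitoring")
use_case = epic.add("Use Case", "Alarm on upstream occlusion")
use_case.add("Control", "Alarm within 5 s of occlusion")

for link in outcome.trace():
    print(link)
```

Walking the hierarchy top-down like this is what makes the traceability report (outcome through control) a query rather than a manual exercise.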

Arcadia Method

RFLP is a model widely adopted in the automotive industry.  The acronym stands for Requirement, Functional, Logical, Physical.  This model describes a way to move progressively from requirements, through a functional and logical understanding, to a final physical representation.  The method largely competes with the UML/SysML architecture, which originated in the software industry and has become popular with the advent of several modeling tools.
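A toy example makes the progression concrete: each RFLP layer refines the one above it, and the chain itself is the trace.  The layer contents below are hypothetical placeholders.

```python
# Hypothetical sketch of RFLP traceability: each layer refines the one above.
# The example content is invented for illustration.
rflp = {
    "Requirement": "The pump shall deliver 0.1-999 mL/h",
    "Functional":  "Meter fluid at a commanded rate",
    "Logical":     "Stepper-driven peristaltic mechanism",
    "Physical":    "Motor plus cassette assembly",
}

def trace_chain(model):
    """Render the R -> F -> L -> P refinement as one trace string."""
    order = ["Requirement", "Functional", "Logical", "Physical"]
    return " -> ".join(f"{layer}: {model[layer]}" for layer in order)

print(trace_chain(rflp))
```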


Figure: Arcadia Method

UML/Sysml

UML, or the Unified Modeling Language, is a general-purpose software engineering modeling language intended to visualize the design of a system.  SysML, or the Systems Modeling Language, is essentially an extension of UML which incorporates other typical product development artifacts, such as requirements, and uses terminology more acceptable to engineering disciplines outside of software engineering.




Figure: SysML Method

Risk Management - Regulation vs. Possible Complexity

Each of the regulated industries has a set of procedures related to handling product risk.  The governing document for medical devices is ISO 14971 - Medical Devices - Application of risk management to medical devices.  It outlines how medical devices should be evaluated for risk and comments on how designs should be logically deconstructed.  In addition to the standard, there are tools which build risk evaluation models and can provide a rich understanding of causal chains, statistical probabilities, and failure mechanisms.  One such example is the PHM MADe product.
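The causal chain at the heart of ISO 14971 (hazard, hazardous situation, harm) lends itself to a simple data model.  The sketch below follows the common P1/P2 probability decomposition; the example hazard and the probability values are placeholders, not real clinical data.

```python
from dataclasses import dataclass

# Sketch of the ISO 14971 causal chain: Hazard -> Hazardous Situation -> Harm.
# The P1/P2 split is the commonly used decomposition of probability of harm;
# the numbers here are placeholders, not real clinical data.

@dataclass
class RiskChain:
    hazard: str
    situation: str
    harm: str
    p1: float  # probability of the hazardous situation occurring
    p2: float  # probability of the harm, given the hazardous situation

    @property
    def p_harm(self):
        return self.p1 * self.p2

hs = RiskChain(
    hazard="Electrical energy",
    situation="Patient contacts exposed conductor",
    harm="Burn",
    p1=1e-4,
    p2=1e-2,
)
print(f"{hs.hazard} -> {hs.situation} -> {hs.harm}: P = {hs.p_harm:.0e}")
```

Structuring risk records this way, rather than as free text, is what lets a tool walk the causal chain in either direction.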

Simulation Integration

Simulation is another critical tool in evaluating system controls.  A variety of simulation tools allow an engineer to develop flow charts representing systems and simulate component performance, including motors, supply lines, and electrical communication buses.  Simulation tools of this nature are critical to defining controls on component selection.

Standard Integration and Product "Norms"

It is common in regulated environments to build historical development experience into sourcing documents.  A few examples of organizations who write and maintain these standards are IEC, ISO, ASTM, AAMI, and BSI.  The goal is to standardize historical learning in such a way as to reduce the likelihood of design failures that have been identified in previous engineering work across the industry.

The difficulty with these documents is how to build/trace them into the product design controls.  We have several goals in creating such an integration:
  • a structured methodology for assuring all issues raised have been addressed in the new design
  • the ability to negotiate differences between several standards.  It is not uncommon for several standards to have conflicting requirements.
  • a way to abstract the sources from the design control in order to hold changes made at the library level until the product is ready for change
There are several ways to effectively manage these issues.  In our template we chose the following methodology.


Figure: Product Source Abstraction


Another common method is to add an additional level of abstraction called a standard "Norm," or normative standard.  The negotiation of several standards then occurs at the Norm level and is applied to the product.  In this way the Norms impact the design, and the negotiation between standards is made in advance based on the product category/type.  The intention in this case is to normalize the standard application decisions across a large enterprise.
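A minimal sketch of this kind of source abstraction and negotiation might look like the following.  The standard names, parameter, limits, and revision labels are all hypothetical; the point is the mechanics: conflicts are resolved at the library (Norm) level, and the product pins a library revision so upstream changes are held until the product is ready to re-baseline.

```python
# Hypothetical sketch of standards abstraction: conflicting limits from
# several sources are negotiated into one product-level control, and the
# product pins a library revision so library changes don't apply immediately.

standards_library = {
    "rev-A": {
        "IEC 60601-1":   {"leakage_current_uA": 100},
        "Internal norm": {"leakage_current_uA": 50},
    },
    "rev-B": {  # newer library revision, not yet adopted by the product
        "IEC 60601-1":   {"leakage_current_uA": 100},
        "Internal norm": {"leakage_current_uA": 25},
    },
}

def negotiate(rev, parameter):
    """Resolve a conflict by taking the most conservative (lowest) limit."""
    limits = {src: reqs[parameter] for src, reqs in standards_library[rev].items()}
    source = min(limits, key=limits.get)
    return limits[source], source

# The product stays on rev-A until it is ready to absorb the rev-B change.
product_pinned_rev = "rev-A"
limit, source = negotiate(product_pinned_rev, "leakage_current_uA")
print(f"Applied limit: {limit} uA (from {source}, library {product_pinned_rev})")
```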

Integration

With the significant complexity represented by these very different analysis tools, one might feel overwhelmed by the task of integrating them into a cohesive understanding.  One way to handle it is to think of the various methods as filters applied to the controls.  In this way the several methods can be integrated into a common model.
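As a toy illustration of the filter idea, each analysis method contributes candidate controls, and the integrated model is the merged, de-duplicated set with every contributing method recorded as a trace source.  The method names and control records below are invented for the example.

```python
# Sketch of the "methods as filters" idea: each analysis method contributes
# candidate controls; integration merges them and records the sources.
# Control ids, text, and method names are illustrative assumptions.

candidate_controls = [
    {"id": "C1", "text": "Limit pump pressure to 15 psi", "source": "FTA"},
    {"id": "C2", "text": "Dual-channel motor encoder",    "source": "simulation"},
    {"id": "C3", "text": "Leakage current limit",         "source": "standards"},
    {"id": "C2", "text": "Dual-channel motor encoder",    "source": "FMEA"},
]

def integrate(controls):
    """Merge controls from several methods, de-duplicating by id."""
    merged = {}
    for c in controls:
        entry = merged.setdefault(c["id"], {"text": c["text"], "sources": set()})
        entry["sources"].add(c["source"])
    return merged

for cid, entry in sorted(integrate(candidate_controls).items()):
    print(cid, entry["text"], sorted(entry["sources"]))
```

A control confirmed by more than one method (here, C2 from both simulation and FMEA) carries correspondingly stronger rationale.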

Care should be given to assure the product functionality provides the flexibility needed to integrate the various development methods.

Link to my webinar:  Intelligent Design Control Master Series:  Design Definition









Medical Device Design Control Introduction

What does a successful design control system look like?  I think there are several structural questions we should consider when thinking about the content of an intelligent design control system.
  1. How can we optimize data quality and visibility?
  2. Where is the cost of a system?  In the engineering time invested.  To provide excellence in ROI we must investigate how to salvage design work for use in other projects.  We call this advanced structure re-use.
  3. Finally, there is tremendous value in accelerating the velocity of regulatory approval.
So, what does "High quality data" mean?

We can break this down into two terms, rationalized and validated.

Validated is usually pretty well understood.  We want a formalized workflow followed and the content approved by appropriate personnel.  If this is done, each element has been reviewed by knowledgeable people, and is therefore of greater value than items which have not been reviewed.
The tougher part of this is rationalized.

What does it mean to have rationalized data?  The central idea here is whether the data is usable.  In order for the data to be usable it should have the following attributes:
  • Appropriate granularity - If, for example, I need to understand how many screws are in an assembly, but the data only includes the top-level device assembly detail, the database becomes useless for the intended use.
  • Visibility - this requirement goes to data access.  Are you able to report on the structure?
  • Integration and contextualization - Finding the right piece of data in a 2 TB database can be the most difficult part of using a database.  What are the hazardous situations that can lead to this harm?  Defining the hazardous situations somewhere in the data structure is truly not helpful.  Using relationships between aspects of the data is tremendously powerful in making access to data fast.
  • Fast access - The data needs to be inter-related, and searchable, using business intelligence.
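The contextualization point is easiest to see in code: rather than text-searching a large database, you follow typed relationships from a harm back to its hazardous situations.  The records and link type below are illustrative assumptions, not a real system's schema.

```python
# Sketch of relationship-driven access: follow typed links from a harm
# back to its hazardous situations instead of scanning free text.
# The records and the "leads_to" link type are illustrative assumptions.

items = {
    "HS-1": {"type": "hazardous_situation", "title": "Air in line reaches patient"},
    "HS-2": {"type": "hazardous_situation", "title": "Over-infusion at high rate"},
    "H-1":  {"type": "harm", "title": "Air embolism"},
}
links = [("HS-1", "leads_to", "H-1")]  # typed relationships between work items

def situations_for_harm(harm_id):
    """Return hazardous situations linked to a harm via 'leads_to'."""
    return [items[src]["title"]
            for src, role, dst in links
            if role == "leads_to" and dst == harm_id]

print(situations_for_harm("H-1"))
```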
We will get back to this in a bit, but keep this discussion in the back of your mind as we start to build a design controls system, because the system won't be worth much if we don't think about how it will be used, and the value it will provide as the product is used, launched, and re-used throughout its lifecycle.

When building a system around the regulatory requirements the discussion can be reduced to five categories of work.
  • Design Definition
  • Risk Management (ISO 14971 compliance)
  • Verification and Validation
  • Design Transfer (Creation of DHF and DMR)
  • and Regulatory Submissions
The first is design deconstruction, or our definition of a good product.  Design and development planning, design input, design output, and design review can be rolled into this bucket.  The FDA has provided a guidance document (FDA Design Control Guidance, 1998) to help companies construct a logical framework.  In this document they provide a graphic outlining the expectations for how design definition should be presented to the agency (Figure 1 - Application of Design Controls).

Figure 1: Application of Design Controls, FDA 1998 Design Control Guidance

There are a great variety of design decomposition methods, including RFLP, SysML, and others.  We will discuss many of these methods in our next blog; however, the FDA guidance is as good a framework as any to get started.

That is all for today, but I look forward to future chats.

I have a video on this topic posted on YouTube.  Follow the link below for access.

Intelligent Design Control Master Series: Design Verification and Validation

Generating the evidence that a device complies with the design controls' definition of "good" is a critical piece of the Intelligent Design Control methodology.  We need not only traceability to the test cases; the test cases should also trace to the appropriate level of the product "V", distinguish between verification and validation, and finally trace to the test results.

Documents and Work Items

We have the ability to create test cases in more than one location.  They can be built either in the overall repository or in the context of documents.  Documents are essentially containers holding work item content and other contextual discussion.  In general, medical device regulators expect test protocols to be approved prior to execution of the test.  It is therefore convenient to use documents containing test cases to provide a scope of test runs which can be pre-approved in a Part 11 compliant change management system.

Providing context to the test cases, in the form of free text outside the requirement, is a convenient way to supply information that is lengthy and repeated across many of the test cases defined in the document.  For this reason documents are a key re-usable artifact used to develop a testing library.

Test Cases

Polarion includes a special work item type defined as a test case.  Test cases include the automation required to define a set of test steps, each with a description and an expected result.

Test Steps

The test steps are then used in the test run to walk the tester through the items in need of verification, and capture any necessary objective evidence of compliance.
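The test case / test step relationship can be sketched in a few lines.  This is an illustrative model only (not the Polarion API): a test case carries ordered steps, and executing a run records the observed result against each step's expectation.

```python
from dataclasses import dataclass, field

# Illustrative sketch (not the Polarion API) of a test case whose steps
# walk a tester through verification and capture objective evidence.

@dataclass
class TestStep:
    description: str
    expected: str
    actual: str = ""
    passed: object = None  # None until the step is executed

@dataclass
class TestCase:
    case_id: str
    steps: list = field(default_factory=list)

    def execute(self, observations):
        """Record one observation per step and judge pass/fail."""
        for step, actual in zip(self.steps, observations):
            step.actual = actual
            step.passed = (actual == step.expected)
        return all(step.passed for step in self.steps)

tc = TestCase("TC-42", [
    TestStep("Power on the device", "Self-test passes"),
    TestStep("Occlude the line",    "Alarm within 5 s"),
])
result = tc.execute(["Self-test passes", "Alarm within 5 s"])
print("TC-42 passed:", result)
```

The recorded `actual` values are the objective evidence the paragraph above refers to; they persist with the run rather than living in a separate lab notebook.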

Test Protocol Document


As discussed above, the scope of the test cases can be established by the test protocol, the protocol run through an approval workflow, and the test run applied to the document's test cases.  The creation of a test report then becomes a trivial bit of automation provided by the reporting engine.

Test Runs

Once a protocol has been developed and approved, it is common then to execute a test run.  The typical logic for this activity is shown below.

Testing Logic
Test runs then become the data set which contains the evidence of verification and validation approval.  The following is an example of a test run execution interface.

Test Run

Test Reporting

Once the test has been executed, the test results are included in the test case traceability, and are now available for reporting.  One example of such reporting is the standard test result dashboard shown below.



Test Results Document

The system interface provides a test engineer with all results needed to determine the current level of product compliance with performance requirements.  In addition, the data can now be automatically built into test report documents without further formatting.
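As a toy illustration of this kind of rollup (the result records and field names are assumptions, and a real system would query the test runs), the report reduces to counting verdicts and flagging any requirement with an open failure:

```python
from collections import Counter

# Sketch of a test-result rollup of the kind a dashboard or test report
# would present.  The records are illustrative placeholders.

results = [
    {"test_case": "TC-1", "requirement": "REQ-10", "verdict": "passed"},
    {"test_case": "TC-2", "requirement": "REQ-10", "verdict": "failed"},
    {"test_case": "TC-3", "requirement": "REQ-11", "verdict": "passed"},
]

def rollup(records):
    """Count verdicts overall and flag requirements with any failure."""
    verdicts = Counter(r["verdict"] for r in records)
    failing = sorted({r["requirement"] for r in records if r["verdict"] == "failed"})
    return verdicts, failing

verdicts, failing = rollup(results)
print(dict(verdicts), "| requirements with failures:", failing)
```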




Test Reporting

As you can see, integration of verification and validation testing with the product design controls provides a rapid, accurate method for assuring a complete test suite, and recovery of contextual evidence.

Next up is integration of risk.