
We have barely started the process towards medical device certification, and as part of that I am having a hard time wrapping my head around the appropriate way to construct a failure mode and effects analysis (FMEA) document for a couple of electronic devices that are past the midpoint of the design process. These devices are to be used in research collaborations, so we are trying to follow a best-effort path that would allow us to design medically approved devices in future iterations of the design. Our current designs will not be put through a certification process, but we are putting the required methodology in place to make that possible later.

Regulatory agencies don't seem to care much about the specifics of the methodology you use, but they will hold you accountable to whatever you choose. This creates a double bind: specify too much detail and the process becomes unwieldy; specify too little and it becomes useless.

I know that the basic bottom-up approach, which has been mostly discarded, is wasteful and of little use in the middle of the design process. But the top-down approach can lack specificity and might be hard to map to actual components in a schematic.

I have the impression that creating these FMEA documents must follow a systematic process in which every element has gone through some level of review, classification, categorization, and quantification, with the final FMEA worksheets produced at the end of the process. I now need to put together an FMEA worksheet with none of this in place, so I am trying to figure out how best to proceed.

As an experienced engineer, I have always kept failure modes in the back of my mind when designing, but all of the documentation I have on FMEA seems artificial or incomplete at best. I know this risks becoming a matter of opinion, but here are the broad questions, in the hope that some of you have much more experience with this process:

What are the steps required to put together a workable FMEA/FTA process?

If I have an overall system with different levels of subsystems, and different categories of harm to the patient, the user, or the device itself, I believe these are different trees that interact with each other. How is the problem best divided so that those interactions are captured without falling into exponential growth from the many-to-many relationships?

What methodologies/tools are used for FMEA/FTA?

Intuitively I feel that I need some form of relational database that would allow me to reduce repetition and easily examine the interactions. There seem to be some existing FMEA tools that simplify this process, but without knowing the specifics it is hard for me to evaluate how useful the tools themselves are.
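To make that concrete, the rough shape I have in mind is something like this minimal sqlite sketch (table and column names are hypothetical, purely for illustration):

    import sqlite3

    # Rough sketch of a relational FMEA structure (hypothetical schema):
    # failure modes hang off subsystems, effects/harms are shared rows, and a
    # link table holds the many-to-many relationships so nothing is entered twice.
    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE subsystem    (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE failure_mode (id INTEGER PRIMARY KEY,
                               subsystem_id INTEGER REFERENCES subsystem(id),
                               description TEXT,
                               severity INTEGER, occurrence INTEGER, detection INTEGER);
    CREATE TABLE effect       (id INTEGER PRIMARY KEY, description TEXT, harm_category TEXT);
    CREATE TABLE mode_effect  (mode_id   INTEGER REFERENCES failure_mode(id),
                               effect_id INTEGER REFERENCES effect(id),
                               PRIMARY KEY (mode_id, effect_id));
    """)

    # A single effect can then be linked to many failure modes across subsystems,
    # and one query lists every mode that contributes to a given harm category:
    rows = con.execute("""
        SELECT s.name, f.description, e.description
        FROM failure_mode f
        JOIN subsystem   s  ON s.id = f.subsystem_id
        JOIN mode_effect me ON me.mode_id = f.id
        JOIN effect      e  ON e.id = me.effect_id
        WHERE e.harm_category = 'patient'
    """).fetchall()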

FTA software tools seem more understandable and perhaps more useful to me, but I am not sure how compatible the methodologies are.

How can I better integrate the FMEA methodology into the specification phase of the design?

Right now it looks to me like an afterthought, put together just to satisfy regulatory requirements after you already have schematics and populated boards in hand. But it is clear to me that it should be implemented up front and made interactive to better inform the design process itself.


1 Answer


Although I have never done an FMEA before a design was completed, I did do it informally during design and bench-test verification whenever there were unknowns, and then corrected the design.

My modus operandi is to have great specifications, then a design validation (DVT) and process validation (PVT), with the written design specs used to create those plans. To me, meeting all acceptance criteria on time and on budget means a perfect design. Of course, experience dictates how well you score on these.

For me, the best methodology is to identify all these requirements in the specs before starting, and to update them during the process and design reviews.

Identify the potential flaws of a product, then use best practices to avoid them.

e.g. functional performance, handling, interference, user experience, consistency, and reliability under environmental stress.

This depends greatly on your level of experience with similar products completed in volume.

There are 3 categories of failure:

  1. Bad design
  2. Bad parts (QA deviation)
  3. Bad process (MFG process controls)

Each is somebody's design responsibility, delegated and verified by the designer.

Generally, any design has margin, and the best measure of it is the process capability index, \$C_{pk}\$, which should be \$\ge 1.33\$ for a 4-sigma process. \$C_{pk}\$ measures both how close you are to your target and how consistent you are around your average performance.
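For reference, the usual definition, assuming a normally distributed output with mean \$\mu\$, standard deviation \$\sigma\$, and upper/lower specification limits \$USL\$/\$LSL\$, is

\$\$C_{pk} = \min\left(\frac{USL-\mu}{3\sigma},\ \frac{\mu-LSL}{3\sigma}\right)\$\$

so \$C_{pk} \ge 1.33\$ means the nearer spec limit sits at least \$4\sigma\$ away from the process mean.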

Each category has a list of environmental variables, and performance will vary with each and all of them. So every good design must start by knowing these environmental variables and designing around them:

  • Climatic: operating, non-operating, and shipping [°C and %RH], max and min
  • Electrical:
    • ingress and egress (unintended): conducted (AC, DC), radiated EMF (f) [V/m], MMF (f) [A/m], ESD, lightning, load dump, impulse and modulated radiated RF, hipot withstand, leakage current, etc.
  • Mechanical: operating, non-operating, and shipping, which include drop and vibration tests and, in some cases (aerospace), acceleration
  • Altitude: sometimes a factor when air pressure affects results
  • Design margins to reduce risk, such as derating ceramic capacitance with DC bias, and keeping RMS current at ~50% of maximum with a hot spot of at most 80°C at max ambient (a rough numerical check is sketched below)
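As an illustration of that kind of margin bookkeeping, here is a minimal sketch; all part values and limits are hypothetical placeholders, not from any datasheet:

    # Rough derating-margin check, hypothetical numbers purely for illustration.
    # MLCC capacitance collapses with DC bias, so the effective (biased) value is
    # what gets compared; current and hot-spot limits follow the ~50% / 80 degC
    # rules of thumb mentioned above.

    def check_margins(c_required_uF, c_effective_uF,   # capacitance needed vs. value at the DC bias point
                      i_rms_A, i_rated_A,              # ripple current vs. part rating
                      t_hotspot_C,                     # hot spot measured at max ambient
                      t_hotspot_limit_C=80.0):
        return {
            "capacitance margin":  c_effective_uF / c_required_uF,   # want > 1 with room to spare
            "current derating":    i_rms_A / i_rated_A,              # want <= ~0.5
            "hot-spot margin [C]": t_hotspot_limit_C - t_hotspot_C,  # want comfortably > 0
        }

    # Example: need 6 uF, a "22 uF" MLCC measures ~9 uF at its bias point,
    # 1.2 A RMS ripple on a 3 A rated part, 68 degC hot spot at max ambient.
    print(check_margins(6, 9, 1.2, 3.0, 68))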

Once you have good functional specs for the system, sub-systems, and circuits, break each down into a 1-page test verification with expected margins defined from the input and output tolerances. Then write a 1-page DVT report for each test showing a picture of the test method, a summary, and the results.
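As a tiny example of where such an expected margin comes from, here is a worst-case corner sweep of a hypothetical divider set-point (values invented for illustration):

    # Worst-case output window of a resistor divider from input tolerances
    # (hypothetical values): the kind of expected margin a 1-page test
    # verification would state as its acceptance criterion.
    from itertools import product

    vin, tol_vin = 5.0, 0.02          # 5 V supply, +/-2%
    r1, r2, tol_r = 10e3, 10e3, 0.01  # 10k/10k divider, +/-1% resistors

    def vout(v, ra, rb):
        return v * rb / (ra + rb)

    corners = [vout(vin * (1 + sv), r1 * (1 + s1), r2 * (1 + s2))
               for sv, s1, s2 in product((-tol_vin, tol_vin),
                                         (-tol_r, tol_r),
                                         (-tol_r, tol_r))]
    print(f"expected output: {min(corners):.3f} V to {max(corners):.3f} V")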

For accelerated reliability margin, I used HALT and HASS methods to either test beyond specs or test to failure for each of the above environmental specs.

This includes drop and vibration tests with over-voltage at max temp and measuring some critical weak link margin in each test.

Depending on the design budget and the support staff available to review a design and perform tests, this can be done cheaply or very expensively. I have done it both ways. For a modem with a 6-channel embedded data logger going into high-volume production, I might charge $15k for a 2-week test to confirm all the functional and performance specs, with a write-up of results and recommendations. This includes measuring hotspots, reverse-engineering the design with component tolerances, and then performing drop tests, thermal tests with dry ice and a heater, or vibration tests to measure electro-mechanical weakness during operation.

The most important part for me was to quickly determine the most likely faults in a design and find a way to measure margin before failure, or, if the margin is unknown, to test beyond the specs until failure; non-destructive margin testing is ideal.

You cannot predict future failures due to incompetence, but with proper Design for Manufacturability (DFM), Design for Cost (DFC), and Design for Testability (DFT) you are more likely to reduce your level of risk.

There are many forms for FMEA, but in over 45 years of successful designs I have never used them. That said, when testing many OEM designs for Burroughs/Unisys from OEMs like Seagate, Fujitsu, Hitachi, and Toshiba, a factory review with management of their process controls and document flow was an integral part of my DVT methods, as well as an MTBF test with statistical analysis.
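For what the statistical side of an MTBF test can look like, here is a sketch using scipy, assuming a constant-failure-rate (exponential) model and a time-terminated test; the unit-hours and failure count are hypothetical:

    # Lower confidence bound on MTBF from a time-terminated test, assuming an
    # exponential (constant failure rate) model. Hypothetical test numbers.
    from scipy.stats import chi2

    total_unit_hours = 50_000   # e.g. 100 units * 500 h
    failures = 2
    confidence = 0.90

    # Chi-squared bound for a time-terminated test: 2T / chi2(confidence; 2r + 2)
    mtbf_lower = 2 * total_unit_hours / chi2.ppf(confidence, 2 * failures + 2)
    print(f"MTBF >= {mtbf_lower:.0f} h at {confidence:.0%} confidence")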

You could have a great-looking product, but if any of these fall short, including fault coverage in self-test, board test, and PCB test, then you may end up filling out a lot of FMEA forms.

