Wednesday, February 24, 2010

Answers to Webcast Questions - Leveraging ICH Q9 / ISO 14971 in Support of IS Compliance

Thanks to everyone who attended the webcast "Leveraging ICH Q9 / ISO 14971 in Support of IS Compliance" and who submitted questions. The recording is now on-line and subscribers can download the slides from the Business & Decision website as usual.

Listed below are the questions that we didn't have time for in the live webcast, along with the answers we promised to provide.


Q. Do you find that IT teams want to take the time to conduct proper risk assessments?
A. It all depends on the risk assessment process and model, whether it is scaled appropriately to the project / system and how well trained the IT team is. Assessing the risk severity is best left to the quality / regulatory and business subject matter experts, leaving the IT staff to think about technical risk scenarios and the risk likelihood and detectability.
Most professional IT staff evaluate and mitigate risk automatically, at least as far as the technology is concerned. For example, if it’s a critical business system the IT team will usually suggest redundant disks or mirroring to a DR site as a matter of course. In many cases you need them to reverse engineer their logic and document the rationale for their decisions using appropriately scaled tools and templates.
If you can make it clear to the IT staff that their expertise is valued and respected, and that we just want them to rationalize and document their decisions using a process that isn’t too onerous, we usually find that there is good buy-in.

Q. Why do all your risk diagrams or maps make a low impact/high probability event equivalent to a high impact/low probability event... surely this is both misleading and dangerous?
A. They’re not our diagrams and maps – they are from the GAMP® Guide or GAMP® Good Practice Guides. Using the GAMP® risk assessment model gives Risk Class 2 for both high severity/low likelihood and low severity/high likelihood.
Equating severity and likelihood in this way wouldn’t be wise and could increase the possibility of an unacceptable risk being seen as acceptable when considering the hazards associated with a medical device or the risk to a patient through the use of a new drug. However, GAMP® attempts to provide a relatively simple risk assessment model which is cost effective when used in the implementation of computerized systems.
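For anyone who wants to see the mechanics, here is a minimal Python sketch of that two-step model. The class boundaries and priority table below are illustrative only (chosen to reproduce the two cases cited above); the actual matrices are defined in the GAMP® Guide.

```python
# Illustrative sketch of the two-step GAMP-style risk model discussed above.
# The class boundaries and priority table are examples only, chosen to match
# the cases cited in the text; the actual matrices are in the GAMP Guide.

LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_class(severity: str, likelihood: str) -> int:
    """Step 1: severity x likelihood -> Risk Class (1 = highest)."""
    score = LEVELS[severity] + LEVELS[likelihood]
    if score >= 5:
        return 1          # e.g. high severity / high likelihood
    if score >= 4:
        return 2          # e.g. high/low, low/high, medium/medium
    return 3

def risk_priority(rc: int, detectability: str) -> str:
    """Step 2: Risk Class x detectability -> risk priority (illustrative)."""
    table = {
        (1, "low"): "high",   (1, "medium"): "high",   (1, "high"): "medium",
        (2, "low"): "high",   (2, "medium"): "medium", (2, "high"): "low",
        (3, "low"): "medium", (3, "medium"): "low",    (3, "high"): "low",
    }
    return table[(rc, detectability)]

# The two cases from the question both land in Risk Class 2...
assert risk_class("high", "low") == risk_class("low", "high") == 2
# ...but poor detectability can still drive different priorities:
assert risk_priority(2, "low") != risk_priority(2, "high")
```

Note that detectability gives the model a second chance to separate the two cases the question treats as wrongly equated.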
What wasn’t shown in the project example included in this webcast were the specific criteria used to qualitatively assess risk severity and risk likelihood, which erred on the side of caution for this relatively high-risk project/system.

Q. Can you comment on how pressure testing a system can provide data on probability of failure?
A. Assuming that ‘pressure testing’ relates to the stress testing of software rather than the pressure testing of a process vessel, it can only provide a limited set of data on the probability of failure. Because software does not change over time (assuming effective change control and configuration management processes), stress testing has little value in terms of the software functionality. Boundary, structural (path & branch) and negative case testing have more value here and should provide data on the failure modes of the software rather than the probability of failure.
Where stress testing can be useful is in looking at the probability of failure of the infrastructure, i.e. network constraints, CPU capacity, storage speed and capacity. Stress testing can not only provide a useful idea of the probability of failure, but should also allow users to identify the circumstances (loading) that lead to a particular failure mode and then define sensible limits which should not be exceeded.
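As a sketch of what that looks like in practice, here is a minimal load-ramp harness in Python. The probe function, threshold and step sizes are all hypothetical stand-ins; in real use the probe would exercise the actual application, database or network share under test.

```python
import time
from concurrent.futures import ThreadPoolExecutor

RESPONSE_LIMIT_S = 0.5   # illustrative acceptance threshold

def probe(_=None) -> float:
    """Response time of one representative transaction (stand-in only)."""
    start = time.perf_counter()
    time.sleep(0.01)     # replace with a real call to the system under test
    return time.perf_counter() - start

def ramp(max_users: int = 100, step: int = 10, probes_per_user: int = 5):
    """Step up concurrent load until response times breach the limit."""
    for users in range(step, max_users + 1, step):
        with ThreadPoolExecutor(max_workers=users) as pool:
            times = list(pool.map(probe, range(users * probes_per_user)))
        worst = max(times)
        print(f"{users:>4} concurrent users: worst response {worst:.3f}s")
        if worst > RESPONSE_LIMIT_S:
            # Observed failure point: set the operating limit below this
            # loading, with an appropriate safety margin.
            return users
    return None

if __name__ == "__main__":
    ramp()
```

The loading at which the limit is first breached is exactly the "sensible limit which should not be exceeded" referred to above.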

Q. Do you think that proper selection of risk analysis technique (like DFMEA, FTA) greatly improves risk management of medical device companies?
A. Yes, absolutely. Both ICH Q9 and ISO 14971 talk about the selection of appropriate risk assessment models and tools, and ICH Q9 Annex I provides a useful discussion on this topic.

Thanks again to everyone who joined us for the webcast and we look forward to catching up with you at the next one.

Thursday, February 18, 2010

Answers to Webcast Questions - Using Compliant ERP E-Records in Support of Regulatory Compliance

In yesterday's webcast Using Compliant ERP E-Records in Support of Regulatory Compliance, there were a couple of technical questions around the use of E-Records in Oracle E-Business Suite that we didn't get time to answer.

Thanks to our colleagues at Oracle for supporting the webcast and their help in answering these questions.

Q. Are new Oracle E-Business E-Record enabled events being added to the 11.5.10 release or just Release 12?
A. New developments are focused on Oracle E-Business Suite Release 12 and most of the recent E-Record enabled events are part of the Release 12 functionality, e.g. the Manufacturing Execution System. Release 11.5.10 is entering the maintenance mode of its life cycle, so although some Release 12 functionality was previously ported back to 11.5.10, do not expect much, if any, new functional development on 11.5.10 moving forward.


Q. In an earlier Business & Decision webcast (Testing Best Practices: 5 Years of the GAMP Good Practice Guide), it was suggested to get testing documentation from the vendor. What can Oracle provide to help minimize our internal testing?
A. As we discussed on the E-Records webcast, Oracle E-Business Suite customers can access automated test scripts that will run against the E-Business Suite Vision data set from the Oracle support site (formerly MetaLink). Just log in and search on "Test Starter Kit".
For clients implementing Oracle E-Business Suite using Oracle Accelerators, test scripts are also generated by the Oracle Accelerator tool, and these are specific to the client's configured instance (see the webcast "Compliant ERP Implementation in the Regulated Life Sciences Industry" for more information).

Thanks to all of you for your questions, and remember that you can submit questions at any time to validation@businessdecision.com or erp@businessdecision.com, or by following the 'Ask an Expert' links on the website.

Friday, February 12, 2010

New Life Sciences Index Announced

Based upon some useful 'vox pop' information collected by Business & Decision's webcast and on-line surveys, plus information from other sources, we have now started an on-line set of Life Sciences indices, revealing interesting information and trends for both Regulated Companies and suppliers to the Life Sciences industry.

This has just gone live at Life Sciences Index and we will be adding new indices over the coming weeks. If you have any data that you would like to share or would like to see, e-mail us at life.sciences@businessdecision.com and we'll see what we can do.

Thursday, February 11, 2010

Risk Likelihood of New Software

Here's a question submitted to validation@businessdecision.com - which we thought deserved a wider airing.

Q. To perform a Risk Assessment you need experience of how the software performs. In the case of new software without a previous history, how can you handle it?

A. We are really talking about the risk likelihood dimension of risk assessment here.

GAMP suggests that when determining the risk likelihood you look at the ‘novelty’ of the supplier and the software (we sometimes use the opposite term – maturity – but we’re talking about the same thing).

If you have no personal experience with the software you can conduct market research – are there any reviews on the internet, any discussions on discussion boards or is there a software user group the Regulated Company could join? All of this will help to determine whether or not the software is ‘novel’ in the Life Sciences industry, whether it has been used by other Regulated Companies and whether there are any specific, known problems that will be the source of an unacceptable risk (or a risk that cannot be mitigated).

If it is a new product from a mature supplier then you can only assess risk based on the defect / support history of the supplier's previous products and an assessment of their quality management system. If it is a completely new supplier to the market then you should conduct an appropriate supplier assessment and would generally assume a high risk likelihood, at least until a history is established through surveillance audits and use of the software.

All of these pieces of information should feed into your initial high level risk assessment and be considered as part of your validation planning. When working with ‘novel’ suppliers or software it is usual for the Regulated Company to provide more oversight and independent verification.

At the level of a detailed functional risk assessment the most usual approach is to be guided by software categories – custom software (GAMP Category 5) is generally seen as having a higher risk likelihood than configurable software (GAMP Category 4), but this is not always the case (some configuration can be very complex). Our recent webcast on "Scaling Risk Assessment in Support of Risk Based Validation" has some more ideas on risk likelihood determination which you might find useful.
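To make that concrete, here is a small Python sketch of how those likelihood factors (supplier novelty, product novelty, GAMP software category) might be combined into a qualitative rating. The factors and weightings are invented for illustration and are not taken from GAMP; real criteria belong in the validation plan.

```python
# Invented weightings combining the 'novelty' factors discussed above into a
# qualitative likelihood rating; real criteria belong in the validation plan.

def risk_likelihood(supplier_is_new: bool,
                    product_is_new: bool,
                    gamp_category: int) -> str:
    """Return a qualitative risk likelihood: 'low', 'medium' or 'high'."""
    score = 0
    if supplier_is_new:
        score += 2        # no audit or defect/support history yet
    if product_is_new:
        score += 1        # no field history for this product
    if gamp_category == 5:
        score += 2        # custom software
    elif gamp_category == 4:
        score += 1        # configured software (can still be complex)
    if score >= 3:
        return "high"
    return "medium" if score == 2 else "low"

# A new product from a mature supplier, configured (Category 4) software:
print(risk_likelihood(False, True, 4))   # -> medium
```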

Wednesday, February 10, 2010

Answers to Webcast Questions - Testing Best Practices: 5 Years of the GAMP Good Practice Guide

The following answers are provided to questions submitted during the "Testing Best Practices: 5 Years of the GAMP Good Practice Guide" webcast which we did not have time to answer while we were live.


Thank you all for taking the time to submit such interesting questions.

Q. Retesting: What is your opinion on retesting requirements when infrastructure components are upgraded? i.e. O/S patches, database upgrades, web server upgrades
A. The GAMP "IT Infrastructure Control and Compliance" Good Practice Guide specifically addresses this question. In summary, it recommends a risk-based approach to the testing of infrastructure patches, upgrades etc. Based on risk severity, likelihood and detectability this may require little or no testing, will sometimes require testing in a Test/QA instance, or in some cases updates may (or should) be rolled out directly to the Production environment (e.g. anti-virus updates). Remember: with a risk-based approach there is no 'one-size-fits-all' answer.
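As an invented example of the shape such a decision table might take once the risk assessment is done (the rules below are not from the Guide; a real table would come from the Regulated Company's own infrastructure SOPs):

```python
def patch_testing_approach(patch_type: str, risk_class: int) -> str:
    """Map a patch and its assessed Risk Class to a testing approach."""
    if patch_type == "anti-virus-signature":
        # Low-risk, high-urgency updates typically go straight to Production.
        return "deploy to Production; verify the update applied"
    if risk_class == 1:
        return "full regression testing in a Test/QA instance first"
    if risk_class == 2:
        return "targeted testing of affected functions in Test/QA"
    return "little or no testing; deploy with post-deployment checks"
```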
 
Q. No value add for independent review and oversight? Why not staff SQE's?
A. Assuming that 'SQE' is Software Quality Engineer, we would agree that independent review by such SQEs does add value, specifically because they are experts in software and should understand software testing best practices. Where we do question the value of quality reviews (based on current guidance) is where the Quality Unit has no such expertise to draw upon. In these cases the independent Quality Unit still has a useful value-adding role to play, but this is an oversight role, ensuring that test processes and procedures are followed (by review of Test Strategies/Plans/Reports and/or periodic review or internal audit).

Q. What FDA guidance was being referred to re: QA review of test scripts etc not being necessary?
A. The FDA Final Guidance document “General Principles of Software Validation” doesn’t specifically state that QA review of test scripts is not necessary, but like the GAMP “Testing of GxP Systems“ Good Practice Guide, GAMP 5 and ASTM E2500, it places the emphasis on independent PEER review, i.e. review by suitably qualified, trained or experienced peers (e.g. software developers, testers etc.) who are able to independently review test cases. Although QA IT people may well have the necessary technical background to play a useful part in this process (guiding, supporting etc.), this is not always the case for the independent Quality Unit, who are primarily responsible for product (drug, medical device etc.) quality.
 
Q. Do the regulators accept the concept of risk-based testing?
A. As we stated in response to a similar question in the webcast, regulatory authorities generally accept risk-based testing when it is done well. There is a concern amongst some regulators (US FDA and some European inspectors) that in some cases risk assessments are being used to justify decisions that are actually taken based on timescale or cost constraints.
In the case of testing, the scope and rigor of testing is sometimes determined in advance and the risk assessment (risk criteria, weightings etc.) is 'adjusted' to give the desired answer, e.g. "Look - we don't need to do any negative case testing after all!"
The better informed regulators are aware of this issue, but where testing is generally risk-based our experience is that this is viewed positively by most inspectors.
 
Q. Do you think that there is a difference in testing good practices in different sectors, e.g. pharma vs. medical device vs. biomedical?
A. There shouldn't be, but in reality the history of individual Divisions in the FDA (and European Agencies) means that there are certain hot topics in some sectors, e.g.
  • Because of well-understood failures to perform regression analysis and testing, CBER is very hot on this topic in blood banking.
  • Because of the relatively high risk of software embedded in medical devices, some inspectors place a lot of focus on structural testing.
Although this shouldn't change the scope or rigor of the planned testing, it is necessary that the testing is appropriate to the nature of the software and the risk, and that project documentation shows that valid regulatory concerns are addressed. It is therefore useful to be aware of sector-specific issues, hot topics and terminology.

Q. Leaving GMP systems aside and referring to GxP for IT, Clinical and Regulatory applications, how do you handle a vendor's minimum hardware spec for an application in a virtual environment?
We have found that vendors overstate the minimums (# of CPUs, CPU spec, minimum RAM, disk space usage, etc.) by a huge margin when comparing actual usage after a system is in place.
A large pharma I used to work for used a standard VM build of 512k RAM, to be increased if needed. This was waived for additional servers of the same application. In the newest version of VMware (vSphere 4) all of these items can be changed while the guest server is running.
A. Software vendors do tend to cover themselves for the 'worst case' (peak loading of simultaneous resource-intensive tasks, maximum concurrent users etc. - and then add a margin), to ensure that the performance of their software isn't a problem. The basic answer is to use your own experience based on a good Capacity Planning and Performance Management process (see the GAMP "IT Infrastructure Control and Compliance" Good Practice Guide again). This should tell you whether your hardware is over-specified or not, and you can use historic data to size your hardware. It can also be useful to seek out the opinions of other users via user groups, discussion boards, forums etc.
Modern virtualization (which we also covered in a previous webcast, "Qualification of Virtualized Environments") does allow the flexibility to modify capacity on the fly, but this isn't an option for Regulated Companies running in a traditional hardware environment. Some hardware vendors will allow you to install additional capacity and only pay for it when it is 'turned on', but these tend to be large servers with multiple processors etc.
At the end of the day it comes down to risk assessment - do you take the risk of not going with the software vendor's recommendation for the sake of reducing the cost of the hardware? This is the usual issue of balancing the project capex budget against the cost to the business of poor performance.
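To illustrate the historic-data sizing approach mentioned above, here is a minimal sketch in Python; all the figures (observed peak, growth rate, headroom margin, vendor minimum) are hypothetical.

```python
# Hypothetical sizing check using historic utilization data, as a sketch of
# the Capacity Planning approach described above. All figures are invented.

observed_peak_ram_gb = 3.2     # e.g. 95th-percentile peak from monitoring
annual_growth = 0.20           # observed year-on-year growth in usage
headroom = 0.30                # safety margin for unexpected peaks
vendor_minimum_ram_gb = 16.0   # vendor's stated minimum

required = observed_peak_ram_gb * (1 + annual_growth) * (1 + headroom)
print(f"Projected requirement: {required:.1f} GB "
      f"(vendor minimum: {vendor_minimum_ram_gb} GB)")
# A large gap between the two numbers is the documented input to the
# risk decision described above.
```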