May we thank you all for taking the time to submit such interesting questions.
Q. Retesting: What is your opinion on retesting requirements when infrastructure components are upgraded (e.g. O/S patches, database upgrades, web server upgrades)?
A. The GAMP "IT Infrastructure Control and Compliance" Good Practice Guide specifically addresses this question. In summary, it recommends a risk-based approach to the testing of infrastructure patches, upgrades etc. Based on risk severity, likelihood and detectability, this may require little or no testing, will sometimes require testing in a Test/QA instance, and in some cases changes may (or should) be rolled out directly to the Production environment (e.g. anti-virus updates). Remember - with a risk-based approach there is no 'one-size-fits-all' answer.
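To illustrate what such a risk-based decision might look like in practice, here is a minimal Python sketch. The three-level scales, the scoring formula and the rigor thresholds are our own illustrative assumptions, not values taken from the Good Practice Guide:

LEVELS = {"low": 1, "medium": 2, "high": 3}

def testing_rigor(severity, likelihood, detectability):
    # Low detectability increases risk, so that scale is inverted.
    score = LEVELS[severity] * LEVELS[likelihood] * (4 - LEVELS[detectability])
    if score <= 3:
        return "roll out directly (e.g. anti-virus signature update)"
    if score <= 12:
        return "smoke test in a Test/QA instance before Production"
    return "full regression test in Test/QA with a documented report"

# Example: an O/S security patch on a server hosting a GxP application
print(testing_rigor(severity="high", likelihood="low", detectability="medium"))

However the assessment is structured, the point is that the testing effort follows from the documented risk, not the other way around.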
Q. No value add for independent review and oversight? Why not staff SQEs?
A. Assuming that 'SQE' is Software Quality Expert, we would agree that independent review by such SQEs does add value, specifically because they are experts in software and should understand software testing best practices. Where we do question the value of quality reviews (based on current guidance) is where the Quality Unit has no such expertise to draw upon. In these cases the independent Quality Unit still has a useful, value-adding role to play, but this is an oversight role: ensuring that test processes and procedures are followed (by review of Test Strategies/Plans/Reports and/or periodic review or internal audit).
Q. What FDA guidance was being referred to re: QA review of test scripts etc not being necessary?
A. The FDA Final Guidance document “General Principles of Software Validation” doesn’t specifically state that QA review of test scripts is not necessary, but like the GAMP “Testing of GxP Systems” Good Practice Guide, GAMP 5 and ASTM E2500, it places the emphasis on independent PEER review, i.e. review by suitably qualified, trained or experienced peers (e.g. software developers, testers etc) who are able to independently review test cases. Although QA IT people may well have the necessary technical background to play a useful part in this process (guiding, supporting etc), this is not always the case for the independent Quality Unit, who are primarily responsible for product (drug, medical device etc) quality.
Q. Do the regulators accept the concept of risk-based testing?
A. As we stated in response to a similar question in the webcast, regulatory authorities generally accept risk-based testing when it is done well. There is a concern amongst some regulators (the US FDA and some European inspectors) that in some cases risk assessments are being used to justify decisions that were actually taken based on timescale or cost constraints.
In the case of testing, the scope and rigor of testing is sometimes determined in advance and the risk assessment (risk criteria, weightings etc) is 'adjusted' to give the desired answer, e.g. "Look - we don't need to do any negative case testing after all!"
The better-informed regulators are aware of this issue, but where testing is generally risk-based our experience is that this is viewed positively by most inspectors.
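To make the concern concrete, the short Python sketch below (with purely hypothetical factors, weights and threshold - none of this comes from any regulatory guidance) shows how quietly 'adjusting' a weighting can flip a testing decision, which is exactly why risk criteria and weightings should be agreed and documented before the assessment is performed:

def risk_score(factors, weights):
    return sum(weights[name] * value for name, value in factors.items())

factors = {"patient_impact": 3, "data_integrity": 2, "novelty": 2}

agreed = {"patient_impact": 0.5, "data_integrity": 0.3, "novelty": 0.2}    # documented up front
adjusted = {"patient_impact": 0.2, "data_integrity": 0.3, "novelty": 0.2}  # quietly tweaked later

THRESHOLD = 2.0  # scores at or above this trigger negative case testing
for label, weights in [("agreed", agreed), ("adjusted", adjusted)]:
    score = risk_score(factors, weights)
    verdict = "negative case testing required" if score >= THRESHOLD else "skip negative case testing"
    print(f"{label}: score = {score:.1f} -> {verdict}")

With the agreed weights the score is 2.5 and negative case testing is required; with the 'adjusted' weights it drops to 1.6 and the testing conveniently disappears.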
Q. Do you think that there is a difference in testing good practices in different sectors, e.g. pharma vs. medical device vs. biomedical?
A. There shouldn't be, but in reality the history of individual Divisions in the FDA (and European Agencies) means that there are certain hot topics in some sectors, e.g.
- Because of well-understood failures to perform regression analysis and testing, CBER is very hot on this topic in blood banking.
- Because of the relatively high risk of software embedded in medical devices, some inspectors place a lot of focus on structural testing.
Q. Leaving GMP systems aside and referring to GxP for IT, Clinical and Regulatory applications: how do you handle a vendor's minimum hardware spec for an application in a virtual environment?
We have found that vendors overstate the minimums (# of CPUs, CPU spec, minimum RAM, disk space usage, etc.) by a huge margin when comparing actual usage after a system is in place.
A large pharma I used to work for used a standard VM build of 512k RAM, increasing it if needed. This was waived for additional servers of the same application. In the newest version of VMware (vSphere 4), all of these items can be changed while the guest server is running.
A. Software vendors do tend to cover themselves for the 'worst case' (peak loading of simultaneous resource-intensive tasks, maximum concurrent users etc - and then add a margin) to ensure that the performance of their software isn't a problem. The basic answer is to use your own experience, based on a good Capacity Planning and Performance Management process (see the GAMP "IT Infrastructure Control and Compliance" Good Practice Guide again). This should tell you whether your hardware is over-specified or not, and you can use historic data to size your hardware. It can also be useful to seek out the opinions of other users via user groups, discussion boards, forums etc.
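As a simple illustration of sizing from historic data rather than the vendor's worst case, the Python sketch below suggests a RAM allocation from monitoring samples. The 95th-percentile basis, the 30% headroom and the sample figures are assumptions for the example, not recommendations from the Good Practice Guide:

import statistics

def suggested_ram_mb(samples_mb, headroom=0.30):
    # Size from the ~95th percentile of observed usage, plus headroom.
    p95 = statistics.quantiles(samples_mb, n=20)[18]
    return round(p95 * (1 + headroom))

# Hourly RAM usage samples (MB) from the monitoring tool
observed = [310, 340, 295, 410, 380, 365, 420, 335, 390, 405, 398, 372]
print(f"Suggested allocation: {suggested_ram_mb(observed)} MB")

The suggested figure can then be compared with the vendor minimum, and the rationale for any deviation documented as part of the Capacity Planning process.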
Modern virtualization (which we also covered in a previous webcast, "Qualification of Virtualized Environments") does allow the flexibility to modify capacity on the fly, but this isn't an option for Regulated Companies running in a traditional hardware environment. Some hardware vendors will allow you to install additional capacity and only pay for it when it is 'turned on', but these tend to be large servers with multiple processors etc.
At the end of the day it comes down to risk assessment - do you take the risk of not following the software vendor's recommendation for the sake of reducing the cost of the hardware? This is the usual issue of balancing the project capex budget against the cost to the business of poor performance.