
Friday, March 30, 2012

Computer System Validation Policy on Software-as-a-Service (SaaS)


In a recent LinkedIn Group discussion (Computerized Systems Validation Group: "Validation of Cloud"), the topic of Software-as-a-Service (SaaS) was widely discussed, along with the need to identify appropriate controls in Computer System Validation (CSV) policies.

The reality is that relatively few compliant, validated SaaS solutions are out there, and relatively few Life Sciences companies have CSV policies that address this. 

However, I have worked on a few CSV policies that do address this, and although client confidentiality means that I can't share the documents, I did volunteer to publish some content on what could be included in a CSV policy to address SaaS.

Based on the assumption that any CSV policy leveraging a risk-based approach needs to provide a flexible framework which is instantiated on a project-specific basis in the Validation (Master) Plan, I've provided some notes below (in italics) which may be useful in providing policy guidance. These would need to be incorporated in a CSV Policy using appropriate language (some Regulated Companies' CSV Policies are more prescriptive than others and the language should reflect this).

"When the use of Software-as-a-Service (SaaS) is considered, additional risks should be identified and accounted for in the risk assessment and in the development of the Validation Plan computer system validation approach. These are in addition to the issues that need to be considered with any third party service provider (e.g. general hosting and managed services). These include:
  • How much control the Regulated Company has over the configuration of the application, to meet their specific regulatory or business needs (by definition, SaaS applications provide the Regulated Company (Consumer) with little or no control over the application configuration)
o   How does the Provider communicate application changes to the Regulated Company, where the Regulated Company has no direct control of the application?
o   What if Provider controlled changes mean that the application no longer complies with regulatory requirements?
  • The ability/willingness (or otherwise) of the Provider to support compliance audits
  • As part of the validation process, whether or not the Regulated Company can effectively test or otherwise verify that their regulatory requirements have been fulfilled
o   Does the Provider provide a separate Test/QA/Validation Instance?
o   Whether it is practical to test in the Production instance prior to Production use (e.g. can such test records be clearly differentiated from production records, by time or unique identification?)
o   Can the functioning of the SaaS application be verified against User Requirements as part of the vendor/package selection process? (prior to contract - applicable to higher risk applications)
o   Can the functioning of the SaaS application be verified against User Requirements once in production use? (after the contract - may be acceptable for lower risk applications)
  • Whether or not the Provider applies application changes directly to the Production instance, or whether they are tested in a separate Test/QA Instance
  • Security and data integrity risks associated with the use of a multi-tenanted SaaS application (i.e. one that is also used by other companies), including
o   Whether or not different companies' data is contained in the same database, or the same database tables
o   The security controls that are implemented within the SaaS application and/or database, to ensure that companies cannot read/write/delete other companies' data (a simple illustrative sketch follows the policy text below)
  • Where appropriate, whether or not copies of only the Regulated Company's data can be provided to regulatory authorities, in accordance with regulatory requirements (e.g. 21 CFR Part 11)
  • Where appropriate, whether or not the Regulated Company's data can be archived
  • Whether it is likely that the SaaS application will be de-clouded (brought in-house or moved to another Provider), and if so:
o   Can the Regulated Company's data be extracted from the SaaS application?
o   Can the Regulated Company's data be deleted from the original SaaS application?

If these issues cannot be adequately addressed (and risks mitigated), alternative options may be considered. These may include:
  • Acquiring similar software from an acceptable SaaS Provider,
  • Provisioning the same software as a Private Cloud, single tenancy application (if allowed by the Provider)
  • Managing a similar application (under the direct control of the Regulated Company), deployed on a Platform-as-a-Service (PaaS)"
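On the multi-tenancy point above, the following sketch illustrates one common isolation pattern: every row carries a tenant identifier and every read, write and delete is scoped to it, so one company's users cannot touch another company's records. This is a deliberately simplified, hypothetical example (the table, column and class names are illustrative, not taken from any real SaaS product); real Providers may instead use separate schemas, databases or row-level security, and the risk assessment should establish which.

import sqlite3

class TenantScopedRepository:
    """All reads, writes and deletes are constrained to a single tenant's records."""

    def __init__(self, conn, tenant_id):
        self.conn = conn
        self.tenant_id = tenant_id

    def add_record(self, record_id, payload):
        # Every row is stamped with the owning tenant
        self.conn.execute(
            "INSERT INTO records (tenant_id, record_id, payload) VALUES (?, ?, ?)",
            (self.tenant_id, record_id, payload))

    def get_records(self):
        # The tenant_id predicate prevents one company reading another's data
        return self.conn.execute(
            "SELECT record_id, payload FROM records WHERE tenant_id = ?",
            (self.tenant_id,)).fetchall()

    def delete_record(self, record_id):
        # Deletes are likewise scoped, so one tenant cannot delete another tenant's rows
        self.conn.execute(
            "DELETE FROM records WHERE tenant_id = ? AND record_id = ?",
            (self.tenant_id, record_id))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (tenant_id TEXT, record_id TEXT, payload TEXT)")
company_a = TenantScopedRepository(conn, "company_a")
company_b = TenantScopedRepository(conn, "company_b")
company_a.add_record("R-001", "batch release record")
print(company_b.get_records())   # [] - Company B cannot see Company A's data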
Hopefully these ideas will help people to develop their approach to SaaS, but CSV Policies should also address the use of PaaS and IaaS within the broader context of outsourcing.

Thursday, February 18, 2010

Answers to Webcast Questions - Using Compliant ERP E-Records in Support of Regulatory Compliance

In yesterday's webcast Using Compliant ERP E-Records in Support of Regulatory Compliance, there were a couple of technical questions around the use of E-Records in Oracle E-Business Suite that we didn't get time to answer.

Thanks to our colleagues at Oracle for supporting the webcast and their help in answering these questions.

Q. Are new Oracle E-Business E-Record enabled events being added to the 11.5.10 release or just Release 12?
A. New developments are focused on Oracle E-Business Suite Release 12 and most of the recent E-Record enabled events are part of the Release 12 functionality (e.g. Manufacturing Execution System). Release 11.5.10 is entering the maintenance mode of its life cycle, so although some Release 12 functionality was previously ported back to 11.5.10, do not expect much, if any, new functional development on 11.5.10 moving forward.


Q. In an earlier Business & Decision webcast (Testing Best Practices: 5 Years of the GAMP Good Practice Guide), it was suggested to get testing documentation from the vendor. What can Oracle provide to help minimize our internal testing?
A. As we discussed on the E-Records webcast, Oracle E-Business Suite customers can access automated test scripts that will run against the E-Business Suite Vision data set from the Oracle support site (formerly MetaLink). Just log in and search on "Test Starter Kit".
For clients implementing Oracle E-Business Suite using Oracle Accelerators, test scripts are also generated by the Oracle Accelerator tool, and these are specific to the client's configured instance (see the webcast "Compliant ERP Implementation in the Regulated Life Sciences Industry" for more information).

Thanks to all of you for your questions and remember that you can submit questions at any time to validation@businessdecision.com or erp@businessdecision.com, or by following the 'Ask an Expert' links on the website.

Wednesday, February 10, 2010

Answers to Webcast Questions - Testing Best Practices: 5 Years of the GAMP Good Practice Guide

The following answers are provided to questions submitted during the "Testing Best Practices: 5 Years of the GAMP Good Practice Guide" webcast which we did not have time to answer while we were live.


Thank you all for taking the time to submit such interesting questions.

Q. Retesting: What is your opinion on retesting requirements when infrastructure components are upgraded? i.e. O/S patches, database upgrades, web server upgrades
A. The GAMP "IT Infrastructure Control and Compliance" Good Practice Guide specifically addresses this question. In summary, it recommends a risk-based approach to the testing of infrastructure patches, upgrades etc. Based on risk severity, likelihood and detectability this may require little or no testing, will sometimes require testing in a Test/QA instance, and in some cases changes may (or should) be rolled out directly to the Production environment (e.g. anti-virus updates). Remember - with a risk-based approach there is no 'one-size-fits-all' answer.
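To make the idea concrete, here is a minimal, hypothetical sketch of how severity, likelihood and detectability scores might be combined to decide the rigor of testing for an infrastructure change. The scales, weightings and thresholds are illustrative assumptions only - each Regulated Company would define its own in its CSV policy and Validation Plan.

def test_rigor(severity, likelihood, detectability):
    """severity/likelihood: 1 (low) to 3 (high); detectability: 1 (easy) to 3 (hard)."""
    score = severity * likelihood * detectability   # combined risk score, 1..27
    if score <= 4:
        return "Apply directly to Production (e.g. anti-virus updates); verify deployment only"
    if score <= 12:
        return "Verify in a Test/QA instance before promotion to Production"
    return "Full regression testing in a Test/QA instance with formal approval before release"

# Example: an O/S security patch on a server hosting a GxP application
print(test_rigor(severity=3, likelihood=2, detectability=2))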
 
Q. No value add for independent review and oversight? Why not staff SQE's?
A. Assuming that 'SQE' is Software Quality Expert, we would agree that independent review by such SQEs does add value, specifically because they are experts in software and should understand software testing best practices. Where we do question the value of quality reviews (based on current guidance) is where the Quality Unit has no such expertise to draw upon. In these cases the independent Quality Unit still has a useful value-add role to play, but this is an oversight role, ensuring that test processes and procedures are followed (by review of Test Strategies/Plans/Reports and/or periodic review or internal audit).

Q. What FDA guidance was being referred to re: QA review of test scripts etc not being necessary?
A. The FDA Final Guidance document “General Principles of Software Validation” doesn’t specifically state that QA review of test scripts is not necessary, but like the GAMP “Testing of GxP Systems“ Good Practice Guide, GAMP 5 and ASTM E2500, it places the emphasis on independent PEER review. i.e. by suitably qualified, trained or experienced peers (e.g. software developers, testers etc) who are able to independently review test cases. Although QA IT people may well have the necessary technical background to play a useful part in this process (guiding, supporting etc) this is not always the case for the independent Quality Unit who are primarily responsible for product (drug, medical device etc) quality.
 
Q. Do the regulators accept the concept of risk-based testing?
A. As we stated in response to a similar question in the webcast, regulatory authorities generally accept risk-based testing when it is done well. There is a concern amongst some regulators (US FDA and some European inspectors) that in some cases risk-assessments are being used to justify decisions that are actually taken based on timescale or cost constraints.
In the case of testing, the scope and rigor of testing is sometimes determined in advance and the risk assessment (risk criteria, weightings etc) is 'adjusted' to give the desired answer e.g. "Look - we don't need to do any negative case testing after all!"
The better informed regulators are aware of this issue, but where testing is generally risk-based our experience is that this is viewed positively by most inspectors.
 
Q. Do you think that there is a difference in testing good practices in different sectors e.g. pharma vs. medical device vs. biomedical?
A. There shouldn't be, but in reality the history of individual Divisions in the FDA (and European Agencies) means that there are certain hot topics in some sectors e.g.
  • Because of well-understood failures to perform regression analysis and testing, CBER is very hot on this topic in blood banking.
  • Because of the relatively high risk of software embedded in medical devices, some inspectors place a lot of focus on structural testing.
Although this shouldn't change the scope or rigor of the planned testing, it is necessary that the testing is appropriate to the nature of the software and the risk, and that project documentation shows that valid regulatory concerns are addressed. It is therefore useful to be aware of sector-specific issues, hot topics and terminology.

Q. Leaving GMP systems aside and referring to GxP for IT, Clinical and Regulatory applications: how do you handle a vendor's minimum hardware spec for an application in a virtual environment?
We have found that vendors overstate the minimums (# of CPUs, CPU spec, minimum RAM, disk space usage, etc.) by a huge margin when comparing actual usage after a system is in place.
A large pharma I used to work for used a standard VM build of 512k RAM, increasing it if needed. This was waived for additional servers of the same application. In the newest version of VMware (vSphere 4) all of these items can be changed while the guest server is running.
A. Software vendors do tend to cover themselves for 'worst case' (peak loading of simultaneous resource intensive tasks, maximum concurrent users etc - and then add a margin), to ensure that the performance of their software isn't a problem. The basic answer is to use your own experience based on a good Capacity Planning and Performance Management process (see the GAMP "IT Infrastructure Control and Compliance" Good Practice Guide again). This should tell you whether your hardware is over-specified or not, and you can use historic data to size your hardware. It can also be useful to seek out the opinion of other users via user groups, discussion boards and forums etc.
Modern virtualization (which we also covered in a previous webcast "Qualification of Virtualized Environments") does allow the flexibility to modify capacity on the fly, but this isn't an option for Regulated Companies running in a traditional hardware environment. Some hardware vendors will allow you to install additional capacity and only pay for it when it is 'turned on', but these tend to be large servers with multiple processors etc.
At the end of the day it comes down to risk assessment - do you take the risk of not going with the software vendor's recommendation for the sake of reducing the cost of the hardware? This is the usual issue of balancing the project capex budget against the cost to the business of poor performance.
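As a hypothetical illustration of the capacity planning point above, the sketch below sizes a VM from observed utilization rather than from the vendor's stated minimum. The sample figures and the 30% headroom margin are assumptions for illustration, not a recommendation.

def recommend_ram_mb(samples_mb, headroom=0.30):
    """Recommend RAM (MB) based on observed peak usage plus a headroom margin."""
    peak = max(samples_mb)
    return int(peak * (1 + headroom))

# Hourly RAM usage (MB) collected from an existing Test/QA instance over a typical day
observed = [480, 620, 750, 910, 880, 700, 560]
print("Recommended RAM: %d MB (vs. a hypothetical vendor minimum of 2048 MB)" % recommend_ram_mb(observed))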