
Wednesday, August 29, 2012

Parameterised IQ Protocols

Another question from the 21CFRPart11 forum - not strictly relating to ERES, but interesting all the same:

Q. I am wondering about a project and whether the FDA would see it as a validated way to execute qualification protocols.
 

Here is the idea: we have validated our document management system, including its electronic signatures, and documents can be developed as PDF forms in which certain fields remain writable. In that case we could develop our qualification protocols as PDF forms, with the mandatory protocol fields left writable and filled in with the qualification information.

Is this an approach that the FDA would see as a correct way to develop and execute protocols?


A. There should be no problem at all with the approach, as long as the final protocols (i.e. with the parameters entered) are still subject to peer review prior to (and usually after) execution.

We take this approach with such documents ourselves, and we have also used a similar approach with HP QualityCenter (developing the basic protocol as a test case in QualityCenter and entering the parameters for each instance of the protocol that is run).

The peer review process is of course also much simpler, because the reviewer can focus on whether the correct parameters have been specified rather than on the (unchanging) body of the document.
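For readers who want to automate the parameter entry itself, here is a minimal Python sketch using the pypdf library; the template filename and form field names are hypothetical and would need to match your own validated protocol template.

```python
# Minimal sketch (hypothetical field names): filling a parameterised IQ
# protocol that has been developed as a PDF form with writable fields.
from pypdf import PdfReader, PdfWriter

# Instance-specific qualification parameters for this execution of the protocol
parameters = {
    "system_name": "LIMS Application Server",
    "server_serial_number": "SN-000123",
    "installed_memory_gb": "32",
    "executed_by": "J. Smith",
}

reader = PdfReader("iq_protocol_template.pdf")   # the approved protocol template
writer = PdfWriter()
writer.append(reader)                            # copy pages (and the form) into the writer

for page in writer.pages:
    if "/Annots" in page:                        # only pages that contain form fields
        writer.update_page_form_field_values(page, parameters)

with open("iq_protocol_SN-000123.pdf", "wb") as handle:
    writer.write(handle)
```

The filled protocol would then follow the normal route: peer review of the entered parameters and electronic signature within the validated document management system.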

Tuesday, September 27, 2011

Software as a Service - Questions Answered

As we expected, last week's webcast on Software as a Service (Compliant Cloud Computing - Applications and SaaS) garnered a good deal of interest with some great questions and some interesting votes.


Unfortunately we ran out of time before we could answer all of your questions. We did manage to get around to answering the following questions (see the webcast for the answers):
  • Would you agree that we may have to really escrow applications with third parties in order to be able to retrieve data throughout data retention periods?
  • How is security managed with a SaaS provider? Do they have to have Admin access, which allows them access to our data?
  • How do you recommend the Change Management (control) of the SaaS software be managed?
  • How can we use Cloud but still have real control over our applications?
  • What should we do if procurement and IT have already outsourced to a SaaS provider, but we haven't done an audit?

As promised, the remaining questions that we didn't have time to address are answered below.

 
Cloud computing is, not surprisingly, the big topic of interest in the IT industry and much of business in general. Cloud will change the IT and business models in many companies and Life Sciences is no different in that respect.

 
We have covered this extensively during the last few months, drawing heavily on the draft NIST Definition of Cloud Computing, which is becoming the de facto standard for talking about the Cloud - regardless of Cloud Service Providers constantly inventing their own terminology and services!

If you missed any of the previous webcasts, they were:
- Qualifying the Cloud: Fact or Fiction?
- Leveraging Infrastructure as a Service
- Leveraging Platform as a Service


There are of course specific issues that we need to address in Life Sciences and our work as part of the Stevens Institute of Technology Cloud Computing Consortium is helping to define good governance models for Cloud Computing. These can be leveraged by Regulated Companies in the Life Sciences industry, but it is still important to address the questions and issues covered in our Cloud webcasts.

As we described in our last session, Software as a Service isn't for everyone and, although it is the model that many would like to adopt, there are very few SaaS solutions that allow Regulated Companies to maintain compliance of their GxP applications 'out-of-the-box'. This is starting to change, but for now we're putting our money (literally: investment in our own qualified data center) into Platform as a Service, which we believe offers the best solution for companies looking to leverage the advantages of Cloud Computing with the necessary control over their GxP applications.

But on to those SaaS questions we didn't get around to last week:

Q. Are you aware of any compliant ERP solutions available as SaaS?

A. We're not. We work with a number of major ERP vendors who are developing Cloud solutions, but their applications aren't yet truly multi-tenanted (see the SaaS webcast for the issues). Other Providers do offer true multi-tenanted ERP solutions, but these are not aimed specifically at Life Sciences. We're currently working with Regulated Company clients and their SaaS Cloud Service Providers to address a number of issues around infrastructure qualification, training of staff, testing of software releases, etc. Things are getting better for a number of Providers, but we're not aware of anyone who yet meets the regulatory needs of Life Sciences as a standard part of the service.

The issue is that this would add costs, and this isn't the model that most SaaS vendors are looking for. It's an increasingly competitive and cost-sensitive market. This is why we believe that niche Life Sciences vendors (e.g. LIMS and EDMS vendors) will get there first, when they combine their existing knowledge of Life Sciences with true multi-tenanted versions of their applications (and, of course, deliver the Essential Characteristics of Cloud Computing - see the webcasts).

Q. You clearly don't think that SaaS is yet applicable for high risk applications? What about low risk applications?

 
A. Risk severity of the application is one dimension of the risk calculation. The other is risk likelihood, which is where you are so dependent on your Cloud Services Provider. If you select a good Provider with good general controls (a well designed SaaS application, good physical and logical security, mature support and maintenance processes) then it should be possible to balance the risks and look at SaaS, certainly for lower risk applications.
 
It still doesn't mean that as a Regulated Company you won't have additional costs to add to the costs of the service. You need to align processes and provide on-going oversight and you should expect that this will add to the cost and slow down the provisioning. However, it should be possible to move lower risk applications into the Cloud as SaaS, assuming that you go in with your eyes open and realistic expectations of what is required and what is available.
 
Q. What strategy should we adopt to the Cloud, as a small-medium Life Sciences company?
 
A. This is something we're helping companies with and, although every organization is different, our approach is generally:
  • Brief everyone on the advantages of Cloud, what the regulatory expectations are and what to expect. 'Everyone' means IT, Procurement, Finance, the business (Process Owners) and of course Quality.
  • Use your system inventory to identify potential applications for Clouding (you do have one, don't you?). Look at which services and applications are suitable for Clouding (using the IaaS, PaaS and SaaS, Private/Public/Community models) and decide how far you want to go. For some organizations IaaS/PaaS is enough to start with, but for other organizations there will be a desire to move to SaaS. Don't forget to think about new services and applications that may be coming along in foreseeable timescales.
  • If you are looking at SaaS, start with lower risk applications, get your toe in the water and gradually move higher risk applications into the Cloud as your experience (and confidence) grows - this could take years and remember that experience with one SaaS Provider does not automatically transfer to another Provider.
  • Look to leverage one or two Providers for IaaS and PaaS - the economies of scale are useful, but it's good to share the work/risk.
  • Carefully assess all Providers (our webcasts will show you what to look for) and don't be tempted to cut audits short. It is time well worth investing and provides significant ROI.
  • Only sign contracts when important compliance issues have been addressed, or are included as part of the contractual requirements. That way there won't be any cost surprises later on.
  • Remember to consider un-Clouding. We've talked about this in our webcasts, but one day you may want to switch Provider or move some services or applications out of the Cloud.
The Cloud is coming - in fact, it's already here. As usual, we're not always the earliest adopters in Life Sciences, but you need to be prepared to move and take advantage. We hope that our webcasts have helped - please do let us know if you have any questions.

E-mail us at life.sciences@businessdecision.com

Thursday, May 19, 2011

Cloud Computing: Infrastructure as a Service Webcast

Yesterday saw the webcast of the first in a series of Cloud Computing webcasts - this one on "Infrastructure as a Service". The next ones are looking at "Platform as a Service" (on July 20th) and "Software as a Service" (on September 21st) - don't worry if the dates have passed by the time you come across this blog entry, because all of the webcasts are recorded and available via our Past Events page.

There was a good turnout and some good questions asked. Unfortunately we didn't have time to cover all the questions before our hour ran out. We've therefore covered the remaining questions and answers in our Life Sciences blog below:

The first questions, which we did quickly look at during the webcast, were about Change Control and Configuration Management:

Q. (Change Control) Pre-approvals tend to be the sticking points for changes - how have you overcome this?
Q. Is there a live configuration management database used?

A. These first questions related to how the Essential Characteristics of Cloud Computing (i.e. Flexible On-Demand Service and Rapid Elasticity) can be met in a regulated and qualified environment.

Business & Decision does use pre-approved change controls for some types of like-for-like change, and we discussed our change control processes in more detail in our "Maintaining the Qualified State: A Day in the Life of a Data Center" webcast on January 12th 2011.

In the same webcast we also discussed the use of configuration management records. Depending on the platform and component, our configuration management records are either paper-based or electronic, and in many cases we use a secure spreadsheet for recording platform- or component-specific configuration item details. Updates to such electronic records are 'approved' by the sign-off of the controlling change control record, which means that a separate paper document doesn't need to be updated, printed and signed. This supports the 'rapid elasticity' required in a Cloud model.
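As a purely illustrative sketch (not a description of our actual system), an electronic configuration record can carry the identifier of the controlling change control, so that the sign-off of that change record is what approves the update:

```python
# Hypothetical sketch: an electronic configuration item whose updates reference
# the controlling change control record; sign-off of that record approves them.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ConfigurationItem:
    name: str                                    # e.g. "app-server-01 installed RAM"
    value: str                                   # current approved value
    history: list = field(default_factory=list)  # (previous value, change ref, sign-off date)

    def update(self, new_value: str, change_control_id: str, signed_off: date) -> None:
        """Record an update; approval comes from the referenced change control."""
        self.history.append((self.value, change_control_id, signed_off))
        self.value = new_value


ram = ConfigurationItem(name="app-server-01 installed RAM", value="16 GB")
ram.update("32 GB", change_control_id="CC-2011-042", signed_off=date(2011, 5, 17))
```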

Q. If PaaS is provided by 3rd party, would vendor audit be sufficient?


A. Although the next webcast in the series will discuss Platform as a Service (PaaS) in more detail, we did have time to briefly answer this question on-line. Generally an audit of any Cloud provider (IaaS, PaaS or SaaS) would be the minimum that would be required. This is essential to ensure that you understand:
- What service is being provisioned
- Who provisions which parts of the service (you or the third party)
- Who manages the services on a day-to-day basis
- Where they provision the service (where your data and applications will reside)
- How you, as the Regulated Company, can provide effective control (and demonstrate accountability to your regulators)
- How they provision and manage the relevant services (and do they actually do what their Policies and Procedures say that they do)
- What are the specific risk scenarios, the risk likelihood and what risk controls need to be established beyond the provider's standard services

Whether any additional actions would be required would depend on the outcome of the initial audit. In some cases infrequent surveillance audits are all that would be required. In other cases additional metrics may need to be established in the SLA, and in some cases it might be determined that some services will need to stay On-Premise. If you download the slides from our website (let us know if you need to know how) you'll be able to see the flow chart for this process.

Q. In the case of IaaS, provisioning is tightly coupled with PaaS and thus requires provisioning of PaaS as well. How can your on-demand provisioning be achieved in a minute?

A. Our experience is also that in the real world, at least contractually, the provisioning of Infrastructure as a Service is coupled to Platform as a Service, i.e. we provide the complete Platform, including the infrastructure components (this also fits much more realistically with the GAMP definition of Infrastructure, as discussed yesterday). However, in many cases the change is at the level of "processing, storage, networks, and other fundamental computing resources" (to use the NIST definition of IaaS), so it really is IaaS within a broader PaaS model.

Certain infrastructure components can technically be provisioned in a minute or so - additional storage, network connections, etc. - because this is usually just a change to a configuration parameter. You obviously need to add on the time to raise the change control and update the configuration management record, but for small changes that don't need separate client approval (because the change is based upon a client request), and because these processes and systems use electronic records, it can still be done in minutes rather than hours.

For physical infrastructure items (e.g. additional memory or CPUs) we can make the physical change and reboot the server, also in a matter of minutes. Where we need to prepare a new environment (e.g. switch the client's application to a different virtual machine with the right infrastructure) this may need additional time to prepare, but the downtime for the client can also be a matter of moments as we reallocate IP addresses etc.

Even where we have had to provision new virtual machines (which really is PaaS) this is done in a matter of hours, as we not only leverage standard specifications/designs and builds, but we also leverage standard qualification documentation to ensure that the qualification process doesn't slow things down.

While it's true that most PaaS changes require more than a few minutes, it's usually a matter of hours rather than days.

Q. How are security exposures addressed within an IaaS scenario? (If the IaaS provider is not responsible for the OS?)

A. Where it is existing infrastructure components that are being provisioned (or de-provisioned), such as storage, memory or CPUs, then there should be no fundamental change in the risk scenario previously assessed, e.g. when you allocate additional disc capacity at a physical level it 'inherits' the security settings and permissions to which it is allocated. Issues with database security are handled at the level of PaaS, and any allocation of new database tables would of course need a reassessment of risk. The same is of course true regarding memory and CPUs.

At the IaaS level it is network capacity and specifically routings that need more thought. Allocating bandwidth to a specific client or application within a T1 pipe is again a matter of configuration and doesn't affect the associated security settings. Making changes to routings is of course more 'risky' and would require us to look again at the original risk assessments.

Most security risks do occur in the PaaS and SaaS models which we'll be looking at in more detail in the future webcasts in this series.


Thanks again to everyone for joining us on the webcast yesterday - if you missed it, the recording will stay online as long as the infrastructure is provisioned by BrightTalk! We hope you'll join us again soon.

Thursday, January 13, 2011

Maintaining the Qualified State

There were a couple of unanswered questions in yesterday's "Maintaining the Qualified State: A Day In The Life of a Data Center" webcast.

As usual, we've taken the time to provide the questions and answers here in our blog.

Q. Where can I get a copy of that ISPE article on (the qualification of) Virtualization?


A. There is a page on the Business & Decision Life Sciences website which provides access to the webcast on qualifying virtual environments and which also has a link to the relevant page on the ISPE website (ISPE members can download the Pharmaceutical Engineering article free of charge).
The page can be accessed at http://www.businessdecision-lifesciences.com/2426-webcast-qualification-of-virtualized-environments.htm


Our next answer responds to two similar questions:
Q. Do your IT Quality People review every change control?
Q. Do you view the change control as an auditable item, and as such require them to be written so that an auditor can understand it – requiring clarity beyond the level of a technical SME?

A. Our Quality Manager (or Quality designee) reviews and approves every Planned or Emergency Change Control. Pre-Approved (like-for-like) changes are not reviewed and approved by the Quality Manager.

However, all of the Change Control processes (Pre-Approved, Planned and Emergency) are subject to Periodic Review by the Quality Group, so if there was any issue with the Pre-Approved change process this would be picked up then.

All changes are also peer reviewed by a technical subject matter expert, so we do not expect them to be written in such a way as to allow a technical ‘newbie’ to understand them. Changes are also reviewed by the System Owner and/or Client, to ensure that all impacts of the change are assessed (e.g. the client’s ability to access an application during a service outage).

The role of the Quality Manager is to ensure that the change control process is followed, not to ensure that the change is technically correct. Having said that, our Quality Team and independent Internal Auditor are technically competent and do understand what they’re reviewing, at least at a high enough level to understand what the change is about and why it’s required.

We find that some external auditors have a technical background and are able to understand the content of most change controls. However, some do not have a specific IT Quality role and cannot understand the technicalities of all of the changes. If this is ever an issue during a client audit we ask the originator of the change control to explain it.

Thanks again to everyone who tuned in to the live webcast. If you missed the live event, the recording is still available at "Maintaining the Qualified State: A Day In The Life of a Data Center".

Friday, November 19, 2010

Qualifying the Cloud: Fact or Fiction?

There was a great deal of interest in last Wednesday’s webcast “Qualifying the Cloud: Fact or Fiction?”. Cloud Computing is certainly an issue with a number of people and your responses during the session clearly indicate that there are some regulatory concerns.

Despite adding 15 minutes to the originally scheduled session there were still more questions than we could fully answer in the time allowed and as promised we have provided written answers to your questions below.

Q. In your audit and/or customer experience, have you found that an SLA or service level agreement indicating demonstrable control over the infrastructure in the cloud is sufficient to meet GxP regulatory compliance, or are auditors still looking for IQ/OQ (installation/operational qualification) checklists against a specific list of requirements?

Different auditors look for different things, but let’s start by saying that it’s pretty rare for regulatory inspectors to spend any time in data centers unless there is due cause. Nowadays this is usually because of issues with an uncontrolled system that are encountered during a broader inspection.

When I am auditing on behalf of Life Sciences clients I will always look for evidence that IQ/OQ (or a combined IOQ) is performed properly. By this I mean not just that the as-built/installed infrastructure matches the configuration management records, but that the as-built/installed infrastructure complies with the design specifications and client requirements.

I once audited a major managed services and hosting provider whose processes for building and installing infrastructure platforms were very good and highly automated – which is good for the rapid elasticity required in Cloud Computing. They literally selected the options off a pick list – how much memory, how many CPUs, what disk capacity etc – and the system was built and installed in their data center accordingly.
However, there was no independent review of the specifications against the client requirements and no independent review of the as built/installed server platform against the specification. Configuration management records were generated directly from the as built/installed server and never compared against the specification.

As Neill described in the webcast, if someone had accidentally selected the wrong build option from the pick list (e.g. 20GB of storage instead of 40GB) no-one would have noticed until the Service Level Agreement requirements were unfulfilled. That’s why I will always check that there is some element of design review and build/install verification.

However, I’ll usually review the specification, design, build and verification procedures as part of the initial audit to check that these reviews are part of the defined process. I’ll also spot check some of the IOQ records to check that the verification has been done. During subsequent surveillance audits I’ll also check the IOQ records as part of whatever sampling approach I’m taking (sometimes I’ll follow the end-to-end specification, design, build/installation and verification for a particular platform or sometimes I’ll focus on the IOQ process). I'm not looking to verify the build/installation of the infrastructure myself, but I am looking for evidence that there is a process to do this and that someone has done it.

IOQ needn’t be a particularly onerous process – the use of checklists and standard templates can help accelerate it, and as long as people are appropriately trained I’m usually prepared to accept a signature to say that the review activity was done, i.e. a design specification signed by the reviewer.
As we've found in our own data center, if it's an integral part of the process (especially a semi-automated process) it doesn't have a significant impact on timescales and doesn't detract from the 'rapid elasticity' which is an essential characteristic of Cloud Computing. While issues of capacity are less of a problem in an extensible Cloud, the IOQ process does help catch other types of error (patches not being applied, two or three steps in an automated install having failed, etc).
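To illustrate what a semi-automated build/install verification might look like, here is a simple Python sketch that compares an as-built server against its design specification; the specification values are made up and the use of psutil/shutil to read the as-built figures is an illustrative assumption, not a description of our actual IOQ tooling.

```python
# Illustrative sketch: verify an as-built (virtual) server against its design
# specification as part of IOQ. The specification values are hypothetical and
# a real check would typically allow a small tolerance on memory/disk figures.
import os
import shutil

import psutil  # third-party library, used here to read installed memory


DESIGN_SPEC = {
    "cpu_count": 4,
    "memory_gb": 32,
    "disk_gb": 40,   # the '20GB instead of 40GB' pick-list error would be caught here
}


def as_built() -> dict:
    """Read the as-built figures from the running server."""
    return {
        "cpu_count": os.cpu_count(),
        "memory_gb": round(psutil.virtual_memory().total / 1024**3),
        "disk_gb": round(shutil.disk_usage("/").total / 1024**3),
    }


def ioq_check(spec: dict, built: dict) -> list:
    """Return a list of discrepancies for the IOQ record (empty list = pass)."""
    return [
        f"{item}: specified {expected}, found {built.get(item)}"
        for item, expected in spec.items()
        if built.get(item) != expected
    ]


if __name__ == "__main__":
    discrepancies = ioq_check(DESIGN_SPEC, as_built())
    print("IOQ PASS" if not discrepancies else "IOQ FAIL: " + "; ".join(discrepancies))
```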

Q. Your early descriptions were good, but how would you explain the concept of the cloud to a traditional Quality person with only a very basic knowledge of Network Architecture?

I don’t think I would!

Trying to explain what the Cloud is to a non-IT specialist in the Quality Unit is always going to be difficult if you take the approach of saying that the Cloud is undefined and the Users don’t need to know what’s going on.
The way to explain it is to say that although the Users in the Regulated Company don’t need to know what the Cloud is, the Regulated Company’s IT Department and their IT Quality Group do know what is going on in the Cloud, and that they have checked that it is appropriately controlled.

You then need to demonstrate to your Quality Unit that you do know what’s going on in the Cloud. If it’s a Private Cloud you do this by showing them diagrams and specifications, qualification documents and so on. If it’s a Public Cloud (or an externally hosted Private Cloud) you do this by showing that you have audited the Cloud Provider to check that they have the diagrams and specifications, qualification documents and so on.
It’s all about perception. It’s okay for the Users not to know what’s going on in the Cloud, but someone clearly has to be in control. This needs to be the appropriate subject matter experts (either your own IT people or the Cloud Service Providers) and your own IT Quality Unit.

If you’re a small company without the resources or technical knowledge to assess your Cloud Providers you can rely on independent consultants for this support, but you have to select the right consultants and demonstrate due diligence in their selection.

Q. In the event of a regulatory audit, when you are using cloud resources (non-private), how does the Cloud Service Provider’s responsibility factor in?

Basically, you need your Cloud Service Providers to be on the hook with you and this means clearly defining what support they will provide both in terms of day to day service level requirements and in the event of a regulatory inspection.

Again, let’s emphasize that regulatory authorities rarely look in the data center without due cause and although we are prepared for them to come to our data center in Wayne, we’re not aware of any regulators actually having visited a true third party hosting facility. (However, with the concerns the industry is demonstrating around this issue we think that it’s only a matter of time before they visit someone’s third party data center, somewhere).

The worst case scenario is when, during a regulatory inspection, an Inspector asks the question “Is the System Validated?” and you have to say “We don’t know…” That’s when further questions will be asked, the answers to which will eventually lead to your Cloud Service Provider. A failure to have properly assessed your Provider will clearly demonstrate to the regulatory authorities a lack of control.

We know of a LOT of Life Sciences Regulated Companies who have outsourced based solely on cost, with the process driven by IT management and the accountants. They usually accept the Provider’s standard service levels, and any involvement from quality/regulatory is often late and sometimes ignored. The result is that ‘compliance’ then becomes an added activity with added costs, the promised cost savings disappear, and there is often no right to adequately audit, or to have support for regulatory inspections, included in the Service Level Agreement. To avoid this, we recommend the following:

  • Always conduct a full audit well before signing a contract (at least two days on-site, at least a month before the contract is due for signing).
  • Agree in the contract how and when any quality/compliance/control ‘gaps’ from the audit (and any surveillance audits) will be addressed.
  • Identify the penalties for not addressing any quality/compliance/control ‘gaps’ in the contract (this might include reducing service charges to cover the cost of the Regulated Company’s additional quality/compliance/control activities, or even cancellation of the contract – which we know one pharmaceutical company actually did).
  • Include the right for surveillance audits in the contract.
  • Include the need to support any regulatory inspections in the contract (this may never happen so can be a justifiable additional cost).
If the Cloud Service Provider won’t agree to these things we would recommend looking elsewhere or building your own Private Cloud in-house. Regulated Companies should remember that they are accountable for the control of systems they are using and that although they are the customer, the most power you’ll have in the relationship is immediately before the contract is signed.


Finally we’d just like to highlight a comment made by one of the listeners “Audit and assessment of the provider should be seen as the Insurance Certificate!” This is an excellent point and really emphasizes the key issue about Cloud Computing – you need to dig below the surface, get behind all of the hype and really understand the what, who, where and how.

There’s no reason why Cloud Computing shouldn’t be used for regulatory purposes as long as Regulated Companies exercise their responsibilities and work with Service Providers who are willing to be open about what they are doing. As far as the Users are concerned, the Cloud is still a Cloud (on-demand, rapid elasticity etc), but the Regulated Company’s IT department and IT Quality group need to be in the Cloud with the Service Providers, understanding what’s going on and making sure that things are controlled.

Thank you again to everyone for their interest. The recording is still available online for anyone who didn’t catch the entire session and you can still register for the final webcast in the series via the Business & Decision Life Sciences website.

Monday, November 1, 2010

IT Infrastructure Qualification - Your Questions Answered

There were a couple of questions relating to last week's "Pragmatic Best Practices for IT Infrastructure Qualification" webcast that we didn't get around to answering... so here are the questions and the answers.

Q.  What are the qualification strategies to be followed for infrastructure for which the applications it will support are unknown?

There are three basic approaches here:

The first is to qualify infrastructure platforms and components on the assumption of high risk severity of supported applications. This will mean that all infrastructure platforms and components are qualified such that they can support any application with no additional qualification activities being required at a later date. This is the approach taken by Business & Decision for shared platforms and components in our own data center and this provides us with the flexibility needed to meet changing customer requirements.

While this would appear ‘overkill’ to some, because the qualification is really based on well documented good engineering practice (as per ASTM E2500) there is relatively little additional overhead over and above what any Class A data center should be doing to specify, build/install and test their infrastructure (this was covered in more detail in our webcast "A Lean Approach to Infrastructure Qualification").

The second approach is to qualify specific platforms and components for the risk associated with those applications that are known. This is possible for infrastructure that is dedicated to defined applications, e.g. specific servers, storage devices etc. This can reduce the level of documentation in some cases, but it means that whenever a change is made at the applications layer, the risk associated with the infrastructure may need to be revisited. While additional IQ activities would not be required, it may be necessary to conduct additional OQ activities (functional or performance testing) of the infrastructure components prior to (re)validating the applications. This requires an on-going commitment to more rigorous change control impact assessment and can slow down the time taken to make changes. While Business & Decision might consider this approach for some client-specific platforms and components, our clients generally prefer the responsiveness the first approach provides.

The third approach (taken by some very large Regulated Companies in their own data centers) is to qualify different platforms according to different levels of risk, e.g. there could be a cluster of servers, virtual machines and network attached storage dedicated to high risk applications, with the same (or a similar) architecture being dedicated to medium and low risk applications. This is probably the best solution because it balances flexibility and responsiveness with scalable risk-based qualification, but it can tend to lead to overcapacity and is only really a good solution in large data centers.

Q. Each component of our network software is individually validated.  What is the best strategy for qualifying the network itself?

The network isn’t really qualified in its entirety, but is qualified by way of qualifying all of the network platforms and components. This may include some functional testing of platforms or components, but the correct functioning of the network is really verified by validating applications.

The network can essentially be considered to be made up of the non-functional cables, fiber etc, the hardware (which may include firmware) and the software components that are necessary to make it work.

The software components (e.g. software based firewalls, server monitoring software, time synchronization software etc) should be designed, specified (including all configuration parameters), installed, configured and verified. Verification will include installation qualification, verification of configuration parameters and may also include some functional testing (OQ) which will be based on meeting the functional requirements of the software.
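As a hypothetical illustration of configuration parameter verification for such a software component (e.g. a time synchronization service), the sketch below compares installed parameter values against those in the approved specification; the parameter names, section name and file path are invented for the example.

```python
# Hypothetical sketch: verify the installed configuration parameters of a
# software component (e.g. a time synchronization service) against the values
# in its approved specification. Parameter names and paths are invented.
import configparser

SPECIFIED = {
    "server": "ntp.example.com",   # hypothetical time source
    "poll_interval": "64",
    "log_level": "info",
}


def verify_parameters(installed_config_path: str, specified: dict, section: str = "ntp") -> list:
    """Compare installed parameters with the specification; return discrepancies."""
    parser = configparser.ConfigParser()
    parser.read(installed_config_path)
    installed = dict(parser[section]) if parser.has_section(section) else {}
    return [
        f"{key}: specified '{value}', installed '{installed.get(key, '<missing>')}'"
        for key, value in specified.items()
        if installed.get(key) != value
    ]


# Example usage (illustrative path):
# discrepancies = verify_parameters("/etc/timesync.conf", SPECIFIED)
# print("Verification PASS" if not discrepancies else "\n".join(discrepancies))
```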

Hardware components such as bridges, switches, firewalls etc will be designed, specified, built/installed and verified. Verification will include IQ, and if the ‘hardware’ component includes software (which is often the case) there will again be an element of configuration parameter verification and some functional testing. Business & Decision usually combine the IQ and OQ into a single verification test, simply for efficiency.

For the basic network (backbone cables, fiber, fiber switches and really ‘dumb’ network components with no configurable software element, such as hubs), these will again be designed, specified, built/installed and verified, but verification will be limited to a simple IQ (recording installation details such as cable numbers, serial and model numbers etc). This can of course be done retrospectively.

All of the above can be scaled, based upon risk as discussed in the answer above.

Remember, if you have a question that you'd like us to answer, you can contact us at validation@businessdecision.com or you can submit your questions via the 'Ask An Expert' page on our Life Sciences website.

Wednesday, February 10, 2010

Answers to Webcast Questions - Testing Best Practices: 5 Years of the GAMP Good Practice Guide

The following answers are provided to questions submitted during the "Testing Best Practices: 5 Years of the GAMP Good Practice Guide" webcast which we did not have time to answer while we were live.


Thank you all for taking the time to submit such interesting questions.

Q. Retesting: What is your opinion on retesting requirements when infrastructure components are upgraded? i.e. O/S patches, database upgrades, web server upgrades
A. The GAMP "IT Infrastructure Control and Compliance" Good Practice Guide specifically addresses this question. In summary, this recommends a risk-based approach to the testing of infrastructure patches, upgrades etc. Based on risk severity, likelihood and detectability this may require little or no testing, will sometime require testing in a Test/QA instance or in some cases they may or should be rolled out to the Production environment (e.g. anti-virus updates). Remember - with a risk-based approach there is no 'one-size-fits-all' approach.
 
Q. No value add for independent review and oversight? Why not staff SQE's?
A. Assuming that 'SQE' is Software Quality Expert, we would agree that independent review by such SQEs does add value, specifically because they are experts in software and should understand software testing best practices. Where we do question the value of quality reviews (based on current guidance) is where the Quality Unit has no such expertise to draw upon. In these cases the independent Quality Unit still has a useful value-add role to play, but this is an oversight role, ensuring that test processes and procedures are followed (by review of Test Strategies/Plans/Reports and/or periodic review or internal audit).

Q. What FDA guidance was being referred to re: QA review of test scripts etc not being necessary?
A. The FDA Final Guidance document “General Principles of Software Validation” doesn’t specifically state that QA review of test scripts is not necessary, but like the GAMP “Testing of GxP Systems“ Good Practice Guide, GAMP 5 and ASTM E2500, it places the emphasis on independent PEER review. i.e. by suitably qualified, trained or experienced peers (e.g. software developers, testers etc) who are able to independently review test cases. Although QA IT people may well have the necessary technical background to play a useful part in this process (guiding, supporting etc) this is not always the case for the independent Quality Unit who are primarily responsible for product (drug, medical device etc) quality.
 
Q. Do the regulators accept the concept of risk-based testing?
A. As we stated in response to a similar question in the webcast, regulatory authorities generally accept risk-based testing when it is done well. There is a concern amongst some regulators (US FDA and some European inspectors) that in some cases risk-assessments are being used to justify decisions that are actually taken based on timescale or cost constraints.
In the case of testing, the scope and rigor of testing is sometimes determined in advance and the risk assessment (risk criteria, weightings etc) is 'adjusted' to give the desired answer, e.g. "Look - we don't need to do any negative case testing after all!"
The better informed regulators are aware of this issue, but where testing is generally risk-based our experience is that this is viewed positively by most inspectors.
 
Q. Do you think that there is a difference in testing good practices in different sectors, e.g. pharma vs. medical device vs. biomedical?
A. There shouldn't be, but in reality the history of individual Divisions in the FDA (and European Agencies) means that there are certain hot topics in some sectors e.g.
  • Because of well understood failures to perform regression analysis and testing, CBER is very hot on this topic in blood banking.
  • Because of the relatively high risk of software embedded in medical devices, some inspectors place a lot of focus on structural testing.
Although this shouldn't change the scope or rigor of the planned testing it is necessary that the testing is appropriate to the nature of the software and the risk, and that project documentation shows that valid regulatory concerns are addressed. It is therefore useful to be aware of sector specific issues, hot topics and terminology.

Q. Leaving GMP systems aside and referring to GxP for IT, Clinical and Regulatory applications, how do you handle a vendor's minimum hardware spec for an application in a virtual environment?
We have found that vendors overstate the minimums (# of CPUs, CPU spec, minimum RAM, disk space usage, etc.) by a huge margin when comparing actual usage after a system is in place.
A large pharma I used to work for put in place a standard VM build of 512k RAM, to be increased if needed. This was waived for additional servers of the same application. In the newest version of VMware (vSphere 4) all of these items can be changed while the guest server is running.
A. Software vendors do tend to cover themselves for the 'worst case' (peak loading of simultaneous resource-intensive tasks, maximum concurrent users etc - and then add a margin), to ensure that the performance of their software isn't a problem. The basic answer is to use your own experience, based on a good Capacity Planning and Performance Management process (see the GAMP "IT Infrastructure Control and Compliance" Good Practice Guide again). This should tell you whether your hardware is over-specified or not, and you can use historic data to size your hardware. It can also be useful to seek out the opinions of other users via user groups, discussion boards, forums etc.
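As a simple illustration of sizing from historic data rather than vendor minimums, the sketch below takes hourly memory utilization samples from monitoring and derives a recommended allocation; the figures and the 30% headroom factor are purely illustrative.

```python
# Illustrative sketch: sizing from historic utilization data rather than the
# vendor's worst-case minimums. The sample data and 30% headroom are made up.
import statistics

hourly_ram_usage_gb = [3.1, 3.4, 2.9, 5.6, 4.2, 6.8, 5.9, 4.4, 3.7, 6.1]  # from monitoring

peak = max(hourly_ram_usage_gb)
p95 = statistics.quantiles(hourly_ram_usage_gb, n=20)[18]  # ~95th percentile
headroom = 1.3                                             # 30% growth/contingency margin

recommended_gb = max(peak, p95) * headroom
print(f"Peak {peak:.1f} GB, 95th percentile {p95:.1f} GB, "
      f"recommended allocation {recommended_gb:.1f} GB")
```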
Modern virtualization (which we also covered in a previous webcast, "Qualification of Virtualized Environments") does allow the flexibility to modify capacity on the fly, but this isn't an option for Regulated Companies running in a traditional hardware environment. Some hardware vendors will allow you to install additional capacity and only pay for it when it is 'turned on', but these tend to be large servers with multiple processors etc.
At the end of the day it comes down to risk assessment - do you take the risk of not going with the software vendor's recommendation for the sake of reducing the cost of the hardware? This is the usual issue of balancing the project capex budget against the cost to the business of poor performance.

Saturday, October 10, 2009

Infrastructure Qualification White Paper Updated

Following the finalization of the GAMP "A Risk-Based Approach to Operation of GxP Computerized Systems" good practice guide we decided to update our "Pragmatic Infrastructure Qualification" white paper (available on the Business & Decision Life Sciences website).

Although originally written in 2005 the white paper had stood the test of time well - what we were promoting as industry leading best practice back then has now been adopted in the new GAMP good practice guide or in version 5 of the GAMP Guide. It just goes to show how best practice changes to become more widespread good practice over time.

Some of the regulatory references had become a bit dated, and we're now also able to reference almost five years' operation of our own qualified Data Center, where we host and manage clients' regulatory significant applications (ERP, LIMS, Change Management etc) on our qualified infrastructure.

Although infrastructure qualification isn't quite the 'hot button' topic it was in the middle of the decade it's still something that needs to be addressed - the more cost effectively the better.

Over the coming months we're going to be running a series of webcasts to share some of this best practice with a wider audience, and there are also plans to develop a new white paper looking at applying some of this best practice to new technologies such as virtualization, middleware and Service Oriented Architecture.