The revised FDA draft guidance document “Electronic Source Data in Clinical Investigations” provides guidance to clinical trial sponsors, CROs, data managers, clinical investigators and others involved in the capture, review and archiving of electronic source data in FDA-regulated clinical investigations.
The original January 2011 draft guidance has been updated to clarify a number of points raised with the FDA by industry commentators, and the new draft has been published to collect additional public comments.
It's good to see industry and regulators working to develop guidance on the use of electronic Case Report Forms (eCRFs), recognising that capturing clinical trial data electronically at source significantly reduces the number of transcription errors requiring resolution, does away with unnecessary duplication of data and provides more timely access for data reviewers.
While much of the guidance contained in the draft would be seen as common sense in much of the industry, it does start to provide a consensus on important issues such as associating authorised data originators with data elements, the scope of 21 CFR Part 11 with respect to the use of such records, and interfaces between medical devices or Electronic Health Records and the eCRF.
No doubt a number of the recommendations contained in the draft guidance document will be of concern to software vendors whose systems do not currently meet the technical recommendations provided. We will therefore surely see a variety of comments from “non-compliant” vendors trying to water down the recommendations until such time as their systems can meet what is already accepted good practice.
One key issue that would appear to be missing is the use of default values on eCRFs, which we know has been a concern in a number of systems and clinical trials, i.e. where the investigator has skipped over a field and left the data element at its default value. This is something we have provided feedback on, and we would encourage everybody in the industry to review the new draft guidance and provide comments.
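To illustrate the concern, here is a minimal sketch of the kind of edit check that can flag data elements still sitting at their design-time default value (the field names and default values are hypothetical, not taken from any particular EDC system):

```python
# Minimal sketch of an eCRF edit check that flags data elements still at their
# design-time default value. Field names and defaults are hypothetical.

FIELD_DEFAULTS = {
    "smoking_status": "Never",      # value pre-selected in the form design
    "concomitant_meds": "None",
    "adverse_event_grade": 1,
}

def flag_default_values(ecrf_record: dict) -> list[str]:
    """Return the names of fields whose captured value equals the form default."""
    flags = []
    for field, default in FIELD_DEFAULTS.items():
        if ecrf_record.get(field) == default:
            flags.append(field)
    return flags

# Example: a record where the investigator may have skipped two fields
record = {"smoking_status": "Never", "concomitant_meds": "Aspirin", "adverse_event_grade": 1}
print(flag_default_values(record))  # ['smoking_status', 'adverse_event_grade']
```

Flagged fields would still need review by the investigator, but a simple check like this makes the "skipped field" problem visible rather than silently accepting the default.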
You can view a copy of the new draft guidance at http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM328691.pdf and comment through the usual FDA process at https://www.federalregister.gov/articles/2012/11/20/2012-28198/draft-guidance-for-industry-on-electronic-source-data-in-clinical-investigations-availability
Thursday, November 22, 2012
Thursday, November 15, 2012
Penetrating the Cloud
IVT's Conference on Cloud and Virtualization (Dublin, 13-14th November 2012) was everything I'd hoped it would be. After two years of conference sessions simply peering into the Cloud to understand what it is, or sticking a head into the Cloud to see what the risks are, it was good to spend two days looking through the Cloud to see how those risks can be managed and to review some case studies.
It served to endorse our opinion that while generalist Cloud providers are either not interested in the needs of the Life Sciences industry, or are still struggling to understand Life Sciences requirements, what some people have called the 'Pharma Cloud' (or 'Life Sciences Cloud', and what we define as Compliant Cloud Computing) is here. As we report in one of our latest Perspectives opinion pieces, while specialist providers are relatively few, Infrastructure, Platform and Software as a Service can now be provisioned in a manner that meets the expectations of most regulators.
While it would have been good for an organization such as ISPE to provide such clarity, well done to IVT for organizing events in the US and Europe and giving people a chance to unpack such issues. To be fair to ISPE, many GAMP sessions have looked at Cloud at country-specific meetings and conferences, but the topic really does need a couple of days to get your head around.
What also emerged was the ability to select the right Cloud model, including On-Premise options, and discussions with a number of delegates confirmed the attractiveness of the Compliant Cloud Anywhere solution (IaaS, installed On-Premise but owned and operated by a specialist Cloud Services provider).
At the end of the IVT event delegates (many of whom are from QA or IT Quality) went home with a much better understanding of what Cloud and Virtualization are and what the risks are. Perhaps more importantly, what also emerged were some good examples of how to mitigate the risks and the outline of a strategy to move further into the Cloud without risking regulatory compliance.
As we'll explore in our webcast "State of the Art in Compliant Cloud Computing", relatively few Life Sciences companies have a real Cloud Strategy that also addresses regulatory compliance, and this is quickly becoming a necessity for organizations looking to take advantage of the business benefits that Cloud and Virtualization offer.
As clarity emerges we expect to see things move significantly further into the Cloud in the next 12 months - "watch this space" as they say!
Wednesday, November 7, 2012
Applying Anti-Virus and Automatic Updates
Another question from LinkedIn, which has been popping up quite a few times on-line lately. We therefore thought we'd share the question and answer with a wider audience.
Q. What is the impact of Microsoft patch upgrades on validated computer systems? How should we consider them?
A. The GAMP IT Infrastructure Good Practice Guide and GAMP 5 have good appendices on patch management, which includes security patches.
These patches (and the updating process) are pretty mature by now, and the likelihood of a patch causing a problem is generally considered to be low. The risk to validated systems should therefore be low.
In most cases, for low-medium risk enterprise systems / IT platforms, organizations rely on automatic updates to protect their systems, because the risk of contracting some malware or leaving a security vulnerability open is greater than that of applying an 'untested' patch - the security patches are of course tested by Microsoft and should be fine with most validated systems / applications.
However, for some systems controlling processes that directly impact product quality another strategy is often applied: place such systems on a segregated (almost isolated), highly protected network domain, do not allow automatic updating of patches, and update manually.
Placing them on such a protected network limits business flexibility but significantly reduces the likelihood of most malware propagating to such systems, or of malware being able to access such systems to exploit security vulnerabilities. If such systems (e.g. SCADA) are running Microsoft Windows it may well be an older version, and these can be particularly vulnerable to malware, especially if connected to the Internet via anything less than a robust, multi-layered set of security controls. (For licensing reasons I once uninstalled the malware protection from a machine that was being decommissioned and was running Windows XP - which is still relatively common in some parts of the corporate world - and even sitting behind a reasonably secure firewall it was exploited in less than two minutes...)
In these cases anti-malware should be installed and Windows updates applied, but applied manually after assessing the patch, i.e. reading the knowledge base articles. Applying a patch which has not been tested by the regulated company with the specific control software may pose a greater risk to the system, and hence to product safety, than deferring it. In these cases the regulated company will test patches in a test environment and patch relatively infrequently by hand, or only to fix known issues.
Key to all of this is a risk-based patching strategy, which should be defined in e.g. a Security Policy and appropriate SOPs. Key considerations are (a simple sketch of the resulting decision logic follows the list):
- Understanding the risk vulnerability of different platforms e.g. Windows XP vs Windows 7 vs Windows Server etc
- Understanding the risk vulnerability of different network segments
- Understanding the risk likelihood of automatically applying updates i.e. the extent of the interaction between the operating system and validated applications
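As a hedged illustration only - the categories, rules and wording below are hypothetical and are not taken from GAMP or any specific SOP - those considerations could be combined into decision logic along these lines:

```python
# Hypothetical sketch: deriving a patching approach from platform vulnerability,
# network exposure and the degree of OS/application interaction.
# Categories and rules are illustrative, not taken from GAMP or any SOP.

def patching_approach(platform_vulnerability: str,
                      network_exposure: str,
                      os_app_interaction: str) -> str:
    """Each argument takes the value 'low', 'medium' or 'high'."""
    if platform_vulnerability == "high" or os_app_interaction == "high":
        # e.g. legacy Windows tightly coupled to control software (SCADA):
        # segregate the network, assess each patch and apply manually after testing.
        return "manual: assess the patch, test in a test environment, then apply"
    if network_exposure == "high":
        # Internet-facing or broadly connected segments need patches promptly.
        return "automatic: apply security patches as they are released"
    # Typical low-to-medium risk enterprise platform.
    return "automatic: apply on the standard patching cycle"

print(patching_approach("high", "low", "high"))  # manual: assess the patch, ...
print(patching_approach("low", "high", "low"))   # automatic: as they are released
```

However simple or sophisticated the logic, the point is that the decision is documented and repeatable rather than made ad hoc for each patch.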
Friday, November 2, 2012
Collaborating on Collaboration - Validating Open Source Software
I'm delighted to hear that "Open Source Software in Life Science Research: Practical Solutions to Common Challenges in the Pharmaceutical Industry and Beyond" has just been published (available from Woodhead Publishing and also on Amazon).
The book has an interesting history, having started life as a discussion on LinkedIn on the use of open source software in the Life Sciences industry. After a very illuminating discussion it was suggested that we write a book on the subject, and so Lee Harland of Pfizer and Mark Forster of Syngenta U.K. agreed to take on the editing of the book, which would look at how open source software is used in our industry.
The result is a comprehensive look at the current state of the market, with chapters covering the use of various open source tools and packages for predictive toxicology, mass spectrometry, image processing, data analytics, high-throughput genomic data analysis, web-based collaboration and many, many more applications. As well as addressing general issues the book examines specific tools and applications, and is a useful reference for anyone looking for a guide to the kind of software that is out there (many of these applications are quite well 'hidden' on the Internet!)
Without doubt, open source software is widely used in pharmaceutical research and development and is transforming the way the industry works. Open source software has many advantages - it is free to acquire and, rather than waiting for a software vendor to understand and respond to market requirements, the developer community simply goes ahead and extends functionality in the direction that research and development needs.
My contribution to the book ("Validation and Regulatory Compliance of Free/Open Source Software") sounds a slightly more cautious note, highlighting when open source software may require validation, the challenges of validating open source software and sharing some ideas on how to go about it - including collaboratively!
This can be challenging, time-consuming and does of course have costs associated with it, which is why we see less open source software in GMP areas. However, I have no doubt that the trend to collaboratively develop niche applications will continue to expand, especially with the prevalence of mature software development tools and Platform-as-a-Service development environments.
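The chapter goes into the detail, but as a minimal, hedged illustration of one validation activity that is easily scripted - verifying that an installed environment matches a controlled, approved manifest of open source packages, as part of an IQ-style check - something along these lines can be used (the package names and versions are hypothetical, and this is not the approach described in the book):

```python
# Minimal sketch: verify that installed open source package versions match a
# controlled, approved manifest. Package names and versions are hypothetical.
from importlib.metadata import version, PackageNotFoundError

APPROVED_MANIFEST = {
    "numpy": "1.23.5",
    "pandas": "1.5.3",
}

def verify_installation(manifest: dict) -> dict:
    """Return a per-package pass/fail report for the installed environment."""
    report = {}
    for package, approved in manifest.items():
        try:
            installed = version(package)
            report[package] = "PASS" if installed == approved else f"FAIL (found {installed})"
        except PackageNotFoundError:
            report[package] = "FAIL (not installed)"
    return report

for pkg, result in verify_installation(APPROVED_MANIFEST).items():
    print(f"{pkg}: {result}")
```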
The process of collaborating to write a book is quite interesting - just like collaborating to develop software, you're not quite sure what you're going to get, some contributors may head off at a tangent, and you're never entirely sure when everything is going to be delivered. Well done to Lee and Mark for sticking with the process. I think the end result is well worth the wait.
Thursday, November 1, 2012
Retrospective Validation of Enterprise Systems
Yesterday was the fourth and final stop on our virtual book tour, looking at important topics from "Validating Enterprise Systems: A Practical Guide" (published by the Parenteral Drug Association and available via their bookstore).
In yesterday's session (recording and slides available here) we looked at the topic of retrospectively validating enterprise systems, including why it is important to validate systems that have not previously been validated and the various reasons that lead to retrospective validation.
As we discussed yesterday, some of these reasons are understandable and acceptable to regulatory authorities while other reasons (such as ignorance or cost avoidance) are not.
We got through most of your questions yesterday but there were a couple of questions we didn't quite get around to answering so here they are.
Q. How is retrospective validation different to normal validation?
A. Many of the actual activities are identical but when retrospectively validating the system it is possible to leverage existing documentation (if any) and actual experience with the system.
Where good documentation exists and has been kept up to date the task of retrospective validation can be relatively quick. However, when little or no documentation exists it can take almost as long to retrospectively validate a system as it did to implement it.
If the system has been maintained and supported using well documented processes such as help desk, incident management and problem management it will also be possible to leverage this information and use it to inform detailed risk assessments. With actual empirical data it is possible to make a more accurate assessment of risk likelihood and probability of detection.
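As a hedged illustration - the functional areas, incident counts and bandings below are entirely hypothetical and not taken from the book - incident history can be turned into an empirical likelihood rating along these lines:

```python
# Hypothetical sketch: using help desk / incident history to inform a
# risk-likelihood rating for functional areas of an existing system.
# The bandings and incident counts are illustrative only.

INCIDENTS_PER_YEAR = {          # empirical data from incident management records
    "batch release reporting": 0,
    "label printing": 4,
    "inventory adjustments": 11,
}

def likelihood_rating(incidents_per_year: int) -> str:
    """Map observed incident frequency to a simple likelihood band."""
    if incidents_per_year == 0:
        return "low"
    if incidents_per_year <= 5:
        return "medium"
    return "high"

for area, count in INCIDENTS_PER_YEAR.items():
    print(f"{area}: {likelihood_rating(count)} likelihood ({count} incidents/year)")
```

The ratings then feed into the risk assessment in the usual way, with the advantage that they are grounded in what the system has actually done in production.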
Where a system was well implemented and has been well maintained this additional information can be justifiably used to limit the extent of the verification activities required as part of the retrospective validation.
Where this empirical data highlights problems or issues with the system it can also be used to determine which areas of the system require greatest focus as part of the risk-based validation effort.
This can mean that in some cases retrospective validation can be more successful than prospective validation in terms of appropriately addressing real risks in a cost-effective manner. However, as we stated in the webcast yesterday, because retrospective validation is not conducted as part of the implementation activities it is generally significantly more expensive than prospective validation that is well integrated with the original implementation effort. For this reason retrospective validation should be avoided.
Q. How common is retrospective validation? Do many companies have this problem?
A. There is a good deal of variation in the industry. At one end of the scale there are well-managed companies who are well aware of their regulatory responsibilities and who prospectively validate all appropriate enterprise systems at the time of implementation. At the other end of the scale there are companies who are either ignorant of the need to validate certain enterprise systems, or who choose to ignore the requirement in order to save money.
To some extent this depends on the maturity of the market and of the organisation. The most mature organisations have learned that risk-based prospective validation adds relatively little to the cost of implementation and provides good return on investment in terms of more reliable and robust solutions.
Less mature organisations still do not understand the cost benefit of risk-based validation or are not aware of the need to validate certain enterprise systems. While to some extent the US and Europe are more mature markets in this regard, this is not universally the case. There are still new start-ups, and companies where profit margins are slim, which do not prospectively validate their systems.
In markets which have historically been seen as less mature (e.g. Asia, Africa, Latin and South America) there is a growing realisation of both the need for and attractiveness of validation with respect to implementing reliable and robust enterprise systems which help to streamline business operations. While retrospective validation is currently quite common in these markets (as they move to meet the growing expectations of changing local regulations and look to export products to markets where the need for validation is already well established) this will change over time - and quite rapidly if other indicators of change are anything to go by.
This means that while retrospective validation will continue to be required in many markets for a number of years to come (and hence the need for a chapter on the subject in "Validating Enterprise Systems: A Practical Guide") we predict that this will be the exception within the next 5-10 years, with retrospective validation becoming rarer in all markets.
Thanks to all of you who have supported the virtual book tour. We do hope you will join us for our forthcoming series of IS compliance and computer system validation webcasts over the next few months (details and registration available here).
Tuesday, October 23, 2012
Using non-ERES Compliant Business Intelligence Tools in Support of GxP Decision Making
Another interesting question here - this time posted in the LinkedIn 21 CFR Part 11 discussion group on the use of business intelligence tools to support GxP decision making. However, the business intelligence tools are not 21 CFR Part 11 compliant!
Q. We are implementing a 'system' which comprises replication software from Oracle and a destination (replicated) Oracle database. The purpose of this 'system' is to run Oracle and Cognos reports from the replicated database instead of the highly transactional source database. This system will be accessed only by the DBA.
The replication software is pretty much command-line based, which is to say it does not have a GUI. From its command-prompt interface we can define the source and target databases, define the number of processes to run, set filters for what data are to be replicated and set the frequency of replication.
We did an assessment and found that part 11 is applicable. The problem is that we cannot satisfy all the part 11 requirements.
Although we deal with e-records (we store the data from the source system, including the setup/configuration of the replication process), how do we justify that e-signatures, audit trails and password aging/restrictions are not applicable when they are not supported by the replication software?
A. We've just finished doing exactly this for another project where we set up a reporting database to reduce the load on the transactional system. Some of the reports from the data warehouse were GxP significant.
We also had similar issues consolidating medical device traceability data for the device history record from signed records in SAP to the SAP Business Warehouse (BW) where we lost the secure relationship between the signed records in the transactional database and the copy in BW.
It's all about clearly identifying what records and data are used for what purposes, documenting the rationale and design, developing or updating appropriate SOPs and training your users and DBAs accordingly.
The first thing to do is to clearly differentiate between the records that you will ultimately rely on for regulatory purposes and the data that you use for reporting. We maintained (and clearly documented) that the signed records in the transactional system would always be the ultimate source of data for regulatory decision-making (e.g. product recall, CAPA investigations etc). Where applicable these were signed, could be verified and were still Part 11 compliant.
Our rationale was that the data warehouse (and associated GxP reports) was acting as a grand 'indexing system' which allowed us to find the key regulatory records much faster (which has to be good for patient safety). We would not use these reports for making regulatory critical decisions, but we did use them to find the key records in the transactional database more quickly, which we could then rely upon for regulatory decision-making. Under that rationale the records and signatures which were subject to Part 11 remained in the transactional database.
We made sure that SOPs for key processes such as product recall, CAPA investigation etc. were updated to reflect the appropriate use of the reports and the signed transactional records. We were able to demonstrate that using the data warehouse reports was two to three times faster than running queries and filters in the transactional system. In our testing we were also able to demonstrate that we found the correct record more than 99.9% of the time; the exceptions (less than 0.1%) arose only where something had subsequently been changed in the transactional database, and even in those cases the discrepancy was obvious and would not lead to erroneous decision-making.
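For illustration, here is a minimal sketch of the kind of reconciliation check that can demonstrate this - the record structures, keys and values are hypothetical, not the actual SAP/BW data:

```python
# Hypothetical sketch of a reconciliation check between the reporting copy and
# the signed transactional records. Record structures and keys are illustrative.

transactional = {  # keyed by record ID: the signed Part 11 records of truth
    "BATCH-001": {"status": "Released", "signed_by": "qa.smith"},
    "BATCH-002": {"status": "Rejected", "signed_by": "qa.jones"},
}
warehouse = {      # the replicated copy used for reporting
    "BATCH-001": {"status": "Released", "signed_by": "qa.smith"},
    "BATCH-002": {"status": "Released", "signed_by": "qa.jones"},  # stale copy
}

def reconcile(source: dict, copy: dict) -> list[str]:
    """Return IDs where the reporting copy no longer matches the source of truth."""
    discrepancies = []
    for record_id, source_record in source.items():
        if copy.get(record_id) != source_record:
            discrepancies.append(record_id)
    return discrepancies

print(reconcile(transactional, warehouse))  # ['BATCH-002'] - refer back to the signed record
```

Any discrepancy simply sends the reviewer back to the signed record in the transactional system, which remains the basis for the regulatory decision.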
However, that's not to say that the GxP significant reports were not important. We still validated the software which replicated the databases, we qualified the underlying IT infrastructure and we validated the GxP significant reports.
We essentially had three categories of report:
- Non-GxP reports, which were not validated (this was the majority of reports)
- GxP significant reports, which were not based upon signed copies of Part 11 records, but which were validated.
- GxP significant reports, which were based upon signed copies of Part 11 records held in the transactional system. These were validated and were clearly annotated to the effect that they should not be relied upon for regulatory decision-making, and they also gave a reference to the signed record in the transactional database.
On both of these projects we had much better tools for replicating the data. Since you're using Oracle databases we would recommend that you create (and maintain under change control/configuration management) PL/SQL programs to replicate the databases. This will significantly reduce the likelihood of human error, allow you to replicate the databases on a scheduled basis and make it much easier to validate the replication process.
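The recommendation above is for PL/SQL; purely as an illustrative sketch of the same idea - a scripted, parameterised, schedulable copy step kept under change control - here is a Python/cx_Oracle version (the connection details, table and column names are hypothetical):

```python
# Illustrative sketch only: the answer recommends PL/SQL, but the same idea is
# shown here in Python with cx_Oracle. Connection details, table and column
# names are hypothetical.
import cx_Oracle

def replicate_table(table: str, columns: list[str]) -> int:
    """Copy one table from the transactional source to the reporting target."""
    col_list = ", ".join(columns)
    placeholders = ", ".join(f":{i + 1}" for i in range(len(columns)))

    source = cx_Oracle.connect(user="report_ro", password="***", dsn="src_host/SRCPDB")
    target = cx_Oracle.connect(user="report_rw", password="***", dsn="dwh_host/DWHPDB")
    try:
        rows = source.cursor().execute(f"SELECT {col_list} FROM {table}").fetchall()
        cur = target.cursor()
        cur.execute(f"DELETE FROM {table}")  # full refresh for simplicity
        cur.executemany(f"INSERT INTO {table} ({col_list}) VALUES ({placeholders})", rows)
        target.commit()
        return len(rows)
    finally:
        source.close()
        target.close()

# e.g. run from a scheduler at the agreed replication frequency:
# replicate_table("BATCH_RECORDS", ["BATCH_ID", "STATUS", "SIGNED_BY", "SIGNED_ON"])
```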
For more background (a discussion paper and webcast) on the use of validated Business Intelligence tools for GxP decision making, see our website at http://www.businessdecision-lifesciences.com/2297-operational-analytics-in-life-sciences.htm.
Monday, October 22, 2012
Validating Clouded Enterprise Systems - Your Questions Answered
Thank you once again to those of you who attended the latest stage on our virtual book tour, with the latest stop looking at the validation of enterprise systems in the Cloud. This is in relation to chapter 17 of "Validating Enterprise Systems: A Practical Guide".
Unfortunately we had a few technical gremlins last Wednesday (both David Hawley and I independently lost Internet access just before the webcast was due to start) and so the event was postponed until Friday. Our apologies again for that, but we nevertheless received quite a number of registration questions, which were answered during the event (you can find a recording of the webcast and copies of the slides here).
We did manage to get through the questions that were asked live during the webcast but we received one by e-mail just after the event which we thought we would answer here in the blog.
Q. "What elements should go into a Master VP for Clouded application / platforms?
A. It depends on the context in which the phrase Master Validation Plan is being used. In some organisations a Master Validation Plan is used to define the approach to validating computerised systems on an individual site, in an individual business unit or, as will be the case here, for applications in the Cloud.
In other organisations a Master Validation Plan is used to define the common validation approach applied to an enterprise system which is being rolled out in multiple phases to multiple sites (each phase of the roll-out would typically have a separate Validation Plan defining what is different about that specific phase).
Logically, if we are implementing a Clouded enterprise application it could (and often would) be made available to all locations at virtually the same time. This is because there is limited configuration flexibility with a Software-as-a-Service solution and different sites have limited opportunities for significant functional differentiation. In this context it is unlikely that the second use of a Master Validation Plan would be particularly useful, so we'll answer the question in the first context.
Where a Master Validation Plan is being used to define the approach to validating Clouded enterprise systems, it needs to define the minimum requirements for validating Clouded applications and provide a framework which:
- Recognises the various cloud computing models (i.e. Infrastructure-as-a-Service, Platform-as-a-Service and Software-as-a-Service; Private Cloud, Community Cloud, Public Cloud and Hybrid Cloud; On-Premise and Off-Premise)
- Categorises platforms and applications by relative risk and identifies which cloud models are acceptable for each category of platform/application, which models are unacceptable and which ones may be acceptable with further risk controls being put in place (a simple illustrative sketch of such a mapping follows this list)
- Identifies opportunities for leveraging provider (supplier) activities in support of the regulated company's validation (per GAMP 5/ASTM E2500)
- Stresses the importance of rigorous provider (supplier) assessments, including thorough pre-contract and surveillance audits
- Highlights the need to include additional risk scenarios as part of a defined risk management process (this should include risks which are specific to the Essential Characteristics of Cloud Computing as well as general risks with the outsourcing of IT services)
- Lists additional risk scenarios which may need to be considered, depending upon the Cloud Computing model being looked at (these are discussed in our various webcasts)
- Identifies alternative approaches to validating Clouded enterprise systems. This would most usefully explain how the use of Cloud computing often prevents traditional approaches to computer systems validation from being followed, and outline alternative ways of verifying that a Software-as-a-Service application fulfils the regulated company's requirements
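As a purely illustrative sketch - the risk categories, cloud models and acceptability decisions below are hypothetical and would need to come from each regulated company's own risk assessment - such a mapping might be captured as simply as:

```python
# Hypothetical sketch of the kind of platform/application categorisation a
# Master Validation Plan might define: which Cloud models are acceptable for
# which risk category. The entries are illustrative, not a recommendation.

ACCEPTABILITY = {
    # application risk category -> cloud model -> acceptability
    "high (direct GxP impact)": {
        "Public SaaS":  "unacceptable",
        "Private IaaS": "acceptable with additional risk controls",
        "On-Premise":   "acceptable",
    },
    "medium (indirect GxP impact)": {
        "Public SaaS":  "acceptable with additional risk controls",
        "Private IaaS": "acceptable",
        "On-Premise":   "acceptable",
    },
    "low (no GxP impact)": {
        "Public SaaS":  "acceptable",
        "Private IaaS": "acceptable",
        "On-Premise":   "acceptable",
    },
}

def is_model_acceptable(risk_category: str, cloud_model: str) -> str:
    """Look up the MVP's position for a given application category and cloud model."""
    return ACCEPTABILITY[risk_category][cloud_model]

print(is_model_acceptable("high (direct GxP impact)", "Public SaaS"))  # unacceptable
```

The value of writing it down in the Master Validation Plan is that project teams do not have to re-argue the acceptability of each Cloud model system by system.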
With respect to the last point our webcast "Compliant Cloud Computing - Applications and Software as a Service" discusses issues with the validation of Software-as-a-Service applications using traditional approaches and outlines alternative verification techniques that can be used.
Whether it is in a Master Validation Plan or some form of Cloud strategy document, it is important for all regulated companies to start to think about how they will validate Clouded applications. This is clearly a topic that is not going to go away and is something that all life sciences companies will need to address.
You may also be interested to know that on 15th November 2012 we're going to be looking more closely at the current state of the Cloud computing market, specifically with respect to meeting the needs of regulated companies in the life sciences industry. We'll be talking about where the market has matured and where appropriate providers can be leveraged - and where it hasn't yet matured. Registration is, as ever, free of charge and you can register for the event at the Business & Decision Life Sciences website.
We look forward to hearing from you on the last stage of our virtual book tour when we'll be looking at the retrospective validation of enterprise systems, which we know is a topic of great interest to many of our clients in Asia, Eastern Europe, the Middle East and Africa and in Latin and South America.