Thursday, November 15, 2012
IVT's Conference on Cloud and Virtualization (Dublin, 13-14th November 2012) was everything I'd hoped it would be. After two years of conference sessions simply peering into the Cloud to understand what it is, or sticking a head into the Cloud to see what the risks are, it was good to spend two days looking through the Cloud to see how these risks can be managed and to review some case studies.
It served to endorse our opinion that while generalist Cloud providers are either not interested in the needs of the Life Sciences industry or are still struggling to understand Life Sciences requirements, what some people have called the 'Pharma Cloud' (or 'Life Sciences Cloud', and what we define as Compliant Cloud Computing) is here. As we report in one of our latest Perspectives opinion pieces, while specialist providers are relatively few, Infrastructure, Platform and Software as a Service can now be provisioned in a manner that meets the expectations of most regulators.
While it would have been good for an organization such as ISPE to provide such clarity, well done to IVT for organizing events in the US and Europe and giving people a chance to unpack such issues. To be fair to ISPE, many GAMP sessions have looked at Cloud at country-specific meetings and conferences, but the topic really does need a couple of days to get your head around.
What also emerged was the ability to select the right Cloud model, including On-Premise options, and discussions with a number of delegates confirmed the attractiveness of the Compliant Cloud Anywhere solution (IaaS installed On-Premise, but owned and operated by a specialist Cloud Services provider).
At the end of the IVT event delegates (many of whom are from QA or IT Quality) went home with a much better understanding of what Cloud and Virtualization are and what the risks are. Perhaps more importantly, what also emerged were some good examples of how to mitigate the risks and the outline of a strategy to move further into the Cloud without risking regulatory compliance.
As we'll explore in our webcast "State of the Art in Compliant Cloud Computing", relatively few Life Sciences companies have a real Cloud Strategy that also addresses regulatory compliance, and this is quickly becoming a necessity for organizations looking to take advantage of the business benefits that Cloud and Virtualization offer.
As clarity emerges we expect to see things move significantly further into the Cloud in the next 12 months - "watch this space" as they say!
Monday, October 22, 2012
Validating Clouded Enterprise Systems - Your Questions Answered
Thank you once again to those of you who attended the latest stage on our virtual book tour, with the latest stop looking at the validation of enterprise systems in the Cloud. This is in relation to chapter 17 of "Validating Enterprise Systems: A Practical Guide".
Unfortunately we had a few technical gremlins last Wednesday (both David Hawley and I independently lost Internet access at our end just before the webcast was due to start) and so the event was postponed until Friday. Our apologies again for that, but we nevertheless received quite a number of registration questions which were answered during the event (you can find a recording of the webcast and copies of the slides here).
We did manage to get through the questions that were asked live during the webcast, but we received one by e-mail just after the event which we thought we would answer here in the blog.
Q. "What elements should go into a Master VP for Clouded application / platforms?
A. It depends on the context in which the phrase Master Validation Plan is being used. In some organisations a Master Validation Plan is used to define the approach to validating computerised systems on an individual site, in an individual business unit or, as will be the case here, for applications in the Cloud.
In other organisations a Master Validation Plan is used to define the common validation approach which is applied to an enterprise system which is being rolled out in multiple phases to multiple sites (each phase of the roll-out would typically have a separate Validation Plan defining what is different about the specific phase in the roll-out).
Logically, if we are implementing a Clouded enterprise application it could (and often would) be made available to all locations at virtually the same time. This is because there is limited configuration flexibility with a Software-as-a-Service solution and different sites have limited opportunities for significant functional differentiation. In this context it is unlikely that the second use of a Master Validation Plan would be particularly useful, so we'll answer the question in the first context.
Where a Master Validation Plan is being used to define the approach to validating Clouded enterprise systems, it needs to define the minimum requirements for validating Clouded applications and provide a framework which:
- Recognises the various cloud computing models (i.e. Infrastructure-as-a-Service, Platform-as-a-Service, Software-as-a-Service; Private Cloud, Community Cloud, Public Cloud and Hybrid Cloud; On-Premise and Off-Premise)
- Categorises platforms and applications by relative risk and identifies which cloud models are acceptable for each category of platform/application, which models are unacceptable and which ones may be acceptable with further risk controls being put in place (a minimal illustration follows this list)
- Identifies opportunities for leveraging provider (supplier) activities in support of the regulated company's validation (per GAMP 5/ASTM E2500)
- Stresses the importance of rigorous provider (supplier) assessments, including thorough pre-contract and surveillance audits
- Highlights the need to include additional risk scenarios as part of a defined risk management process (this should include risks which are specific to the Essential Characteristics of Cloud Computing as well as general risks with the outsourcing of IT services)
- Lists additional risk scenarios which may need to be considered, depending upon the Cloud Computing model being looked at (these are discussed in our various webcasts)
- Identifies alternative approaches to validating Clouded enterprise systems. This would most usefully identify how the use of Cloud computing often prevents traditional approaches to computer systems validation from being followed and identify alternative approaches to verifying that the Software-as-a-Service application fulfils the regulated company's requirements
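As a minimal illustration of the categorisation point above, the category-to-model mapping can be captured as a simple lookup which the Master Validation Plan defines once and each project Validation Plan then applies. This sketch is purely hypothetical - the categories, models and rulings are placeholders, not regulatory guidance:

```python
# Hypothetical mapping of (application risk category, Cloud model) to a ruling.
# Categories, models and rulings are illustrative placeholders only.
ACCEPTABILITY = {
    ("low",    "public_saas"):  "acceptable",
    ("medium", "public_saas"):  "acceptable_with_additional_controls",
    ("high",   "public_saas"):  "unacceptable",
    ("high",   "private_paas"): "acceptable_with_additional_controls",
    ("high",   "on_premise"):   "acceptable",
}

def ruling(risk_category, cloud_model):
    """Return the Master Validation Plan ruling for a platform/application."""
    return ACCEPTABILITY.get((risk_category, cloud_model), "assess_individually")

print(ruling("high", "public_saas"))  # -> unacceptable
```

A project-specific Validation Plan can then cite the ruling for its category rather than re-deriving the approach each time.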
With respect to that last framework point, our webcast "Compliant Cloud Computing - Applications and Software as a Service" discusses issues with the validation of Software-as-a-Service applications using traditional approaches and outlines alternative verification techniques that can be used.
Whether it is in a Master Validation Plan or some form of Cloud strategy document, it is important for all regulated companies to start to think about how they will validate Clouded applications. This is clearly a topic that is not going to go away and is something that all life sciences companies will need to address.
You may also be interested to know that on 15th November 2012 we're going to be looking more closely at the current state of the Cloud computing market, specifically with respect to meeting the needs of regulated companies in the life sciences industry. We'll be talking about where the market has matured and where appropriate providers can be leveraged - and where it hasn't yet matured. Registration is, as ever, free of charge and you can register for the event at the Business & Decision Life Sciences website.
We look forward to hearing from you on the last stage of our virtual book tour when we'll be looking at the retrospective validation of enterprise systems, which we know is a topic of great interest to many of our clients in Asia, Eastern Europe, the Middle East and Africa and in Latin and South America.
Labels: ASTM E2500, Cloud Computing, GAMP, IaaS, PaaS, SaaS, Validation
Tuesday, August 21, 2012
Cloud Computing Comes of Age in Life Sciences
For a while now we have been saying the Cloud was coming of age in the Life Sciences industry.
Business & Decision, along with a small number of other Providers have been providing Infrastructure-as-a-Service and Platform-as-a-Service for some time.
We also said that as far as Software-as-a-Service was concerned, we would see Life Sciences specialist vendors (e.g. LIMS, Quality Management, Learning Management etc) providing compliant Software-as-a-Service solutions - simply because they understand our industry both at the functional level and also at the regulatory level.
We are working with a number of such vendors to deploy their software on our Platform-as-a-Service solutions, leveraging virtualization to provision solutions that are inherently flexible, scalable and - perhaps just as importantly - compliant.
At the same time, we have just started to engineer our first compliant 'Cloud Anywhere' solutions - which allow us to deploy pre-engineered and pre-qualified Platforms (hardware, power, HVAC, storage, virtualization, operating systems, database servers and applications servers) anywhere in the world. This was an idea first developed with Oracle with their Exadata and Exalogic machines (for which Business & Decision developed standard Qualification Packs).
Based upon a wider and more affordable technology base ‘Cloud Anywhere’ allows Business & Decision to leverage our investment in our Quality Management System to provision compliant Private or Community Cloud solutions with the minimum of additional qualification activities. These can be installed on client sites, in third party data centres of in the data centres of our software partners.
As well as deploying the solution, these 'Cloud Anywhere' solutions also come complete with Managed Services from Business & Decision - meaning that clients, partners etc no longer need to worry about the management of the Platform. All of this is taken care of remotely by our own staff (with the exception of local power and network connections of course) and the solutions can also be engineered to automatically failover to a remote Disaster Recovery site.
In the last couple of years we have seen people asking "How long will it be before everything is in the Cloud?", but the reality is that this will never be case in Life Sciences. There will always be Life Sciences companies who need or want some infrastructure on their own sites (because of network latency issues or data integrity issues) and the reality is that we are moving towards a mixed-Model Cloud Environment.
The coming of age of safe, secure multi-tenanted Software-as-a-Service and the availability of solutions such as 'Cloud Anywhere' means that Life Sciences companies now have the ability to mix'n'match their Cloud environments to meet their specific business needs - and address their regulatory compliance requirements.
It may not seem like it now, but in the next few years we will see these solutions move from leading-edge to mainstream and we will wonder what all the fuss about Cloud was for.
Business & Decision, along with a small number of other Providers have been providing Infrastructure-as-a-Service and Platform-as-a-Service for some time.
We also said that, as far as Software-as-a-Service was concerned, we would see Life Sciences specialist vendors (e.g. LIMS, Quality Management, Learning Management etc) providing compliant Software-as-a-Service solutions - simply because they understand our industry at both the functional and the regulatory level.
We are working with a number of such vendors to deploy their software on our Platform-as-a-Service solutions, leveraging virtualization to provision solutions that are inherently flexible, scalable and - perhaps just as importantly - compliant.
At the same time, we have just started to engineer our first compliant 'Cloud Anywhere' solutions - which allow us to deploy pre-engineered and pre-qualified Platforms (hardware, power, HVAC, storage, virtualization, operating systems, database servers and application servers) anywhere in the world. This was an idea first developed with Oracle with their Exadata and Exalogic machines (for which Business & Decision developed standard Qualification Packs).
Based upon a wider and more affordable technology base, 'Cloud Anywhere' allows Business & Decision to leverage our investment in our Quality Management System to provision compliant Private or Community Cloud solutions with the minimum of additional qualification activities. These can be installed on client sites, in third party data centres or in the data centres of our software partners.
As well as deploying the solution, these 'Cloud Anywhere' solutions also come complete with Managed Services from Business & Decision - meaning that clients, partners etc no longer need to worry about the management of the Platform. All of this is taken care of remotely by our own staff (with the exception of local power and network connections of course) and the solutions can also be engineered to automatically failover to a remote Disaster Recovery site.
In the last couple of years we have seen people asking "How long will it be before everything is in the Cloud?", but the reality is that this will never be the case in Life Sciences. There will always be Life Sciences companies who need or want some infrastructure on their own sites (because of network latency issues or data integrity issues) and the reality is that we are moving towards a mixed-model Cloud environment.
We will see a mixture of non-clouded Infrastructure, Platforms and Software, and various Cloud models, including On-Premise & Off-Premise and Public & Private Clouds.
The coming of age of safe, secure multi-tenanted Software-as-a-Service and the availability of solutions such as 'Cloud Anywhere' means that Life Sciences companies now have the ability to mix'n'match their Cloud environments to meet their specific business needs - and address their regulatory compliance requirements.
It may not seem like it now, but in the next few years we will see these solutions move from leading-edge to mainstream and we will wonder what all the fuss about Cloud was for.
Friday, March 30, 2012
Computer System Validation Policy on Software-as-a-Service (SaaS)
In a recent LinkedIn Group discussion (Computerized Systems Validation Group: "Validation of Cloud"), the topic of Software-as-a-Service (SaaS) was widely debated, as was the need to identify appropriate controls in Computer System Validation (CSV) policies.
The reality is that relatively few compliant, validated SaaS solutions are out there, and relatively few Life Sciences companies have CSV policies that address this.
However, there are a few CSV policies that I've worked on that address this, and although client confidentiality means that I can't share the documents, I did volunteer to publish some content on what could be included in a CSV policy to address SaaS.
Based on the assumption that any CSV policy leveraging a risk-based approach needs to provide a flexible framework which is instantiated on a project-specific basis in the Validation (Master) Plan, I've provided some notes below (in italics) which may be useful in providing policy guidance. These would need to be incorporated in a CSV Policy using appropriate language (some Regulated Companies' CSV Policies are more prescriptive than others and the language should reflect this).
"When the use of Software-as-a-Service (SaaS) is considered,
additional risks should be identified and accounted for in the risk assessment and in the development of the Validation Plan computer system validation
approach. These are in addition to the issues that need to be considered with
any third party service provider (e.g. general hosting and managed services).
These include:
- How much control the Regulated Company has over the configuration of the application, to meet their specific regulatory or business needs (by definition, SaaS applications provide the Regulated Company (Consumer) with little or no control over the application configuration):
  - How does the Provider communicate application changes to the Regulated Company, where the Regulated Company has no direct control of the application?
  - What if Provider-controlled changes mean that the application no longer complies with regulatory requirements?
- The ability/willingness (or otherwise) of the Provider to support compliance audits
- As part of the validation process, whether or not the Regulated Company can effectively test or otherwise verify that their regulatory requirements have been fulfilled:
  - Does the Provider provide a separate Test/QA/Validation instance?
  - Whether it is practical to test in the Production instance prior to Production use (can such test records be clearly differentiated from production records, by time or unique identification?)
  - Can the functioning of the SaaS application be verified against User Requirements as part of the vendor/package selection process? (prior to contract - applicable to higher risk applications)
  - Can the functioning of the SaaS application be verified against User Requirements once in production use? (after the contract - may be acceptable for lower risk applications)
- Whether or not the Provider applies application changes directly to the Production instance, or whether they are tested in a separate Test/QA instance
- Security and data integrity risks associated with the use of a multi-tenanted SaaS application (i.e. one that is also used by other users of the system), including:
  - Whether or not different companies' data is contained in the same database, or the same database tables
  - The security controls that are implemented within the SaaS application and/or database, to ensure that companies cannot read/write/delete other companies' data
- Where appropriate, whether or not copies of only the Regulated Company's data can be provided to regulatory authorities, in accordance with regulatory requirements (e.g. 21 CFR Part 11)
- Where appropriate, whether or not the Regulated Company's data can be archived
- Whether it is likely that the SaaS application will be de-clouded (brought in-house or moved to another Provider) and, if so:
  - Can the Regulated Company's data be extracted from the SaaS application?
  - Can the Regulated Company's data be deleted in the original SaaS application?
If these issues cannot be adequately addressed (and risks mitigated), alternative options may be considered. These may include:
- Acquiring similar software from an acceptable SaaS Provider
- Provisioning the same software as a Private Cloud, single tenancy application (if allowed by the Provider)
- Managing a similar application (under the direct control of the Regulated Company), deployed on a Platform-as-a-Service (PaaS)"
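To make such a policy actionable on individual projects, the risk scenarios above can be captured as a structured checklist that each Validation Plan instantiates and works through. A minimal sketch, with abbreviated scenario wording and hypothetical fields:

```python
# Hypothetical structure for recording SaaS risk scenarios in a Validation Plan.
# The scenarios paraphrase the policy notes above; the fields are illustrative.
from dataclasses import dataclass

@dataclass
class RiskScenario:
    scenario: str           # what must be verified or could go wrong
    mitigation: str = ""    # agreed control, populated during assessment
    addressed: bool = False # set once the control has been verified

saas_checklist = [
    RiskScenario("Provider communicates application changes to the Regulated Company"),
    RiskScenario("Separate Test/QA/Validation instance is available"),
    RiskScenario("Multi-tenant data is segregated at database/table level"),
    RiskScenario("Regulated Company's data can be extracted on de-clouding"),
]

open_items = [r.scenario for r in saas_checklist if not r.addressed]
print(f"{len(open_items)} SaaS risk scenarios still open")
```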
Tuesday, September 27, 2011
Software as a Service - Questions Answered
As we expected, last week's webcast on Software as a Service (Compliant Cloud Computing - Applications and SaaS) garnered a good deal of interest with some great questions and some interesting votes.
Cloud computing is, not surprisingly, the big topic of interest in the IT industry and much of business in general. Cloud will change the IT and business models in many companies and Life Sciences is no different in that respect.
We have covered this extensively during the last few months, drawing heavily on the draft NIST Definition of Cloud Computing, which is starting to be the de facto standard for talking about the Cloud - regardless of Cloud Service Providers constantly inventing their own terminology and services!
Unfortunately we ran out of time before we could answer all of your questions. We did manage to get around to answering the following questions (see the webcast recording for the answers):
- Would you agree that we may have to really escrow applications with third parties in order to be able to retrieve data throughout data retention periods?
- How is security managed with a SaaS provider? Do they have to have Admin access, which allows them access to our data?
- How do you recommend the Change Management (control) of the SaaS software be managed?
- How can we use Cloud but still have real control over our applications?
- What should we do if procurement and IT have already outsourced to a SaaS provider, but we haven't done an audit?
As promised, we have answered below the remaining questions that we didn't get time to address.
If you missed any of the previous webcasts in the series, they were:
- Qualifying the Cloud: Fact or Fiction?
- Leveraging Infrastructure as a Service
- Leveraging Platform as a Service
There are of course specific issues that we need to address in Life Sciences and our work as part of the Stevens Institute of Technology Cloud Computing Consortium is helping to define good governance models for Cloud Computing. These can be leveraged by Regulated Companies in the Life Sciences industry, but it is still important to address the questions and issues covered in our Cloud webcasts.
As we described in our last session, Software as a Service isn't for everyone and although it is the model that many would like to adopt, there are very few SaaS solutions that allow Regulated Companies to maintain compliance of their GxP applications 'out-of-the-box'. This is starting to change, but for now we're putting our money (literally - investment in our qualified data center) into Platform as a Service, which we believe offers the best solution for companies looking to leverage the advantages of Cloud Computing with the necessary control over their GxP applications.
But on to those SaaS questions we didn't get around to last week:
Q. Are you aware of any compliant ERP solutions available as SaaS?
A. We're not. We work with a number of major ERP vendors who are developing Cloud solutions, but their applications aren't yet truly multi-tenanted (see the SaaS webcast for the issues). Other Providers do offer true multi-tenanted ERP solutions but they are not aimed specifically at Life Sciences. We're currently working with Regulated Company clients and their SaaS Cloud Service Providers to address a number of issues around infrastructure qualification, training of staff, testing of software releases etc. Things are getting better for a number of Providers, but we're not aware of anyone who yet meets the regulatory needs of Life Sciences as a standard part of the service.
The issue is that this would add costs and this isn't the model that most SaaS vendors are looking for. It's an increasingly competitive market and it's cost sensitive. This is why we believe that niche Life Sciences vendors (e.g. LIMS, EDMS vendors) will get there first, when they combine their existing knowledge of Life Sciences with true multi-tenanted versions of their applications (and, of course, deliver the Essential Characteristics of Cloud Computing - see webcasts).
Q. You clearly don't think that SaaS is yet applicable for high risk applications? What about low risk applications?
A. Risk severity of the application is one dimension of the risk calculation. The other is risk likelihood, which is where you are so dependent on your Cloud Services Provider. If you select a good Provider with good general controls (a well designed SaaS application, good physical and logical security, mature support and maintenance processes) then it should be possible to balance the risks and look at SaaS, certainly for lower risk applications.
It still doesn't mean that as a Regulated Company you won't have additional costs to add to the costs of the service. You need to align processes and provide on-going oversight and you should expect that this will add to the cost and slow down the provisioning. However, it should be possible to move lower risk applications into the Cloud as SaaS, assuming that you go in with your eyes open and realistic expectations of what is required and what is available.
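To make that balancing act concrete, here is a minimal sketch of a severity x likelihood matrix. It is purely illustrative - the scores and thresholds are hypothetical placeholders, not a validated risk model:

```python
# Hypothetical risk matrix: severity comes from the application's GxP impact,
# likelihood is reduced by the strength of the Provider's general controls.
SEVERITY = {"low": 1, "medium": 2, "high": 3}
LIKELIHOOD = {"strong_controls": 1, "average_controls": 2, "weak_controls": 3}

def saas_decision(app_severity, provider_controls):
    score = SEVERITY[app_severity] * LIKELIHOOD[provider_controls]
    if score <= 2:
        return "SaaS acceptable with standard oversight"
    if score <= 4:
        return "SaaS possible with additional risk controls"
    return "keep on PaaS/On-Premise for now"

print(saas_decision("low", "strong_controls"))    # SaaS acceptable...
print(saas_decision("high", "average_controls"))  # keep on PaaS/On-Premise...
```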
Q. What strategy should we adopt to the Cloud, as a small-medium Life Sciences company?
A. This is something we're helping companies with and although every organization is different, our approach is generally:
- Brief everyone on the advantages of Cloud, what the regulatory expectations are and what to expect. 'Everyone' means IT, Procurement, Finance, the business (Process Owners) and of course Quality.
- Use your system inventory to identify potential applications for Clouding (you do have one, don't you?). Look at which services and applications are suitable for Clouding (using the IaaS, PaaS and SaaS, Private/Public/Community models) and decide how far you want to go. For some organizations IaaS/PaaS is enough to start with, but for other organizations there will be a desire to move to SaaS. Don't forget to think about new services and applications that may be coming along in foreseeable timescales. (A minimal sketch of this triage follows this list.)
- If you are looking at SaaS, start with lower risk applications, get your toe in the water and gradually move higher risk applications into the Cloud as your experience (and confidence) grows - this could take years and remember that experience with one SaaS Provider does not automatically transfer to another Provider.
- Look to leverage one or two Providers for IaaS and PaaS - the economies of scale are useful, but it's good to share the work/risk.
- Carefully assess all Providers (our webcasts will show you what to look for) and don't be tempted to cut audits short. It is time well worth investing and provides significant ROI.
- Only sign contracts when important compliance issues have been addressed, or are included as part of the contractual requirements. That way there won't be any cost surprises later on.
- Remember to consider un-Clouding. We've talked about this in our webcasts but one day you may want to switch Provider or move some services or applications out of the Cloud.
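As a minimal sketch of the inventory triage step above, assuming a hypothetical system inventory with risk and SaaS-availability attributes (the names and fields are illustrative only):

```python
# Hypothetical system inventory triage: shortlist lower-risk applications
# as initial SaaS candidates, route higher-risk ones to IaaS/PaaS first.
inventory = [
    {"name": "Training records", "gxp_risk": "low",    "saas_available": True},
    {"name": "Document mgmt",    "gxp_risk": "medium", "saas_available": True},
    {"name": "MES",              "gxp_risk": "high",   "saas_available": False},
]

saas_first = [s["name"] for s in inventory
              if s["gxp_risk"] == "low" and s["saas_available"]]
iaas_paas = [s["name"] for s in inventory if s["name"] not in saas_first]

print("SaaS candidates:", saas_first)
print("Consider IaaS/PaaS first:", iaas_paas)
```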
E-mail us at life.sciences@businessdecision.com
Thursday, May 19, 2011
Cloud Computing: Infrastructure as a Service Webcast
Yesterday saw the webcast of the first in a series of Cloud Computing webcasts - this one on "Infrastructure as a Service". The next ones look at "Platform as a Service" (on July 20th) and "Software as a Service" (on September 21st) - don't worry if the dates have passed by the time you come across this blog entry, because all of the webcasts are recorded and available via our Past Events page.
There was a good turnout and some good questions asked. Unfortunately we didn't have time to cover all the questions before our hour ran out. We've therefore covered the questions and answers in our Life Sciences blog below:
The first questions we did quickly look at were about Change Control and Configuration Management:
Q. (Change Control) pre-approvals tend to be the sticking points for changes - how have you overcome this?
Q. Is there a live configuration management database used?
A. These first questions related to how the Essential Characteristics of Cloud Computing (i.e. Flexible On-Demand Service and Rapid Elasticity) can be met in a regulated and qualified environment.
Business & Decision does use pre-approved change controls for some types of like-for-like change and we discussed our change control processes in more detail in our webcast "Maintaining the Qualified State: A Day in the Life of a Data Center" webcast on January 12th 2011.
In the same webcast we also discussed the use of configuration management records. Depending on the platform and component our configuration management records are either paper based or electronic and in many cases we use a secure spreadsheet for recording platform or component specific configuration item details. Updates to such electronic records are 'approved' by the sign-off of the controlling change control record and this means that a separate paper document doesn't need updating, printing and signing. This supports the 'rapid elasticity' required in a Cloud model.
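As an illustration of what such an electronic record might look like, here is a minimal sketch of a configuration item entry tied to its approving change control, so the change record's sign-off also covers the configuration update. The field names and values are hypothetical:

```python
# Hypothetical electronic configuration management record: each update carries
# a reference to the approved change control, so no separate paper sign-off
# is needed when the record is updated.
import csv
import io

fieldnames = ["config_item", "attribute", "value", "change_control_id", "updated"]
records = [
    {"config_item": "vm-app-01", "attribute": "memory_gb", "value": "16",
     "change_control_id": "CC-2011-042", "updated": "2011-05-18"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(records)
print(buf.getvalue())  # one controlled 'spreadsheet' row per approved change
```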
Q. If PaaS is provided by 3rd party, would vendor audit be sufficient?
A. Although the next webcast in the series will discuss Platform as a Service (PaaS) in more detail, we did have time to briefly answer this question on-line. Generally an audit of any Cloud provider (IaaS, PaaS or SaaS) would be the minimum that would be required. This is essential to ensure that you understand:
- What service is being provisioned
- Who provisions which parts of the service (you or the third party)
- Who manages the services on a day-to-day basis
- Where they provision the service (where your data and applications will reside)
- How you, as the Regulated Company, can provide effective control (and demonstrate accountability to your regulators)
- How they provision and manage the relevant services (and do they actually do what their Policies and Procedures say that they do)
- What are the specific risk scenarios and the risk likelihood, and what risk controls need to be established beyond the provider's standard services
Whether any additional actions would be required would depend on the outcome of the initial audit. In some cases infrequent surveillance audits are all that would be required. In other cases additional metrics may need to be established in the SLA, and in some cases it might be determined that some services will need to stay On-Premise. If people download the slides from our website (let us know if you need to know how) you'll be able to see the flow chart for this process.
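We can't reproduce the flow chart here, but the decision logic it describes can be roughly sketched as follows. This is a hypothetical simplification - the real process weighs many more factors:

```python
# Rough sketch of the post-audit decision flow described above (hypothetical).
def post_audit_actions(audit_outcome):
    actions = ["infrequent surveillance audits"]  # the minimum in all cases
    if audit_outcome == "minor_gaps":
        actions.append("establish additional compliance metrics in the SLA")
    elif audit_outcome == "major_gaps":
        actions.append("keep the affected services On-Premise")
    return actions

print(post_audit_actions("minor_gaps"))
```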
Q. In the case of IaaS, provisioning is tightly coupled with PaaS and thus requires provisioning of PaaS as well. How can your on-demand provisioning be achieved in a minute?
A. Our experience is also that in the real world, at least contractually, the provisioning of Infrastructure as a Service is coupled to Platform as a Service i.e. we provide the complete Platform, including the infrastructure components (this also fits much more realistically with the GAMP definition of Infrastructure, as discussed yesterday). However in many cases the change is at the level of "processing, storage, networks, and other fundamental computing resources" (to use the NIST definition of IaaS), so it really is IaaS within a broader PaaS model.
Certain infrastructure components can be technically provisioned in a minute or so - additional storage, network connections etc - as this is usually just a change to a configuration parameter. You obviously need to add on the time to raise the change control and update the configuration management record, but for small changes that don't need client approval (because the change is based upon a client request), and because these processes and systems use electronic records, it can still be done in minutes rather than hours.
For physical infrastructure items (e.g. additional memory or CPUs) we can make the physical change and reboot the server in a matter of minutes. Where we need to prepare a new environment (e.g. switch the client's application to a different virtual machine with the right infrastructure) this may need additional time to prepare, but the downtime for the client can also be a matter of moments as we reallocate IP addresses etc.
Even where we have had to provision new virtual machines (which really is PaaS) this is done in a matter of hours, as we not only leverage standard specifications/designs and builds, but also standard qualification documentation to ensure that the qualification process doesn't slow things down.
While it's true that most PaaS changes require more than a few minutes, it's usually a matter of hours rather than days.
Q. How are security exposures addressed within an IaaS scenario? (If the IaaS Provider is not responsible for the OS?)
A. Where it is simply existing capacity that is being provisioned (or de-provisioned), such as storage or memory or CPUs, then there should be no fundamental change in the risk scenario previously assessed, e.g. when you allocate additional disc capacity at a physical level it 'inherits' the security settings and permissions to which it is allocated. Issues with the database security are handled at the level of PaaS and any allocation of new database tables would of course need a reassessment of risk. The same is of course true regarding memory and CPUs.
At the IaaS level it is network capacity and specifically routings that need more thought. Allocating bandwidth to a specific client or application within a T1 pipe is again a matter of configuration and doesn't affect the associated security settings. Making changes to routings is of course more 'risky' and would require us to look again at the original risk assessments.
Most security risks do occur in the PaaS and SaaS models which we'll be looking at in more detail in the future webcasts in this series.
Thanks again to everyone for joining us on the webcast yesterday - if you missed it the recording will stay online as long as the infrastructure is provisioned by BrightTalk! We hope you'll join us again soon.
Friday, November 19, 2010
Qualifying the Cloud: Fact or Fiction?
There was a great deal of interest in last Wednesday’s webcast “Qualifying the Cloud: Fact or Fiction?”. Cloud Computing is certainly an issue with a number of people and your responses during the session clearly indicate that there are some regulatory concerns.
Despite adding 15 minutes to the originally scheduled session there were still more questions than we could fully answer in the time allowed and as promised we have provided written answers to your questions below.
Q. In your audit and/or customer experience, have you found that an SLA or service level agreement indicating demonstrative control over the infrastructure in the cloud is sufficient to meet GxP regulatory compliance, or are auditors still looking for IQ/OQ (installation/operational qualification) checklists against a specific list of requirements?
Different auditors look for different things, and let's start by saying that it's pretty rare for regulatory inspectors to spend any time in data centers unless there is due cause. Nowadays this is usually because of issues with an uncontrolled system that are encountered during a broader inspection.
When I am auditing on behalf of Life Sciences clients I will always look for evidence that IQ/OQ (or combined IOQ) is performed properly. By this I mean not just that the as-built/installed infrastructure matches the configuration management records, but that the as-built/installed infrastructure complies with the design specifications and client requirements.
I once audited a major managed services and hosting provider and their processes for building and installing infrastructure platforms were very good and highly automated – which is good for the rapid elasticity required in Cloud Computing. They literally selected the options off a pick list – how much memory, how many CPUs, what disk capacity etc – and the system was built and installed in their data center accordingly.
However, there was no independent review of the specifications against the client requirements and no independent review of the as built/installed server platform against the specification. Configuration management records were generated directly from the as built/installed server and never compared against the specification.
As Neill described in the webcast, if someone had accidentally selected the wrong build option from the pick list (e.g. 20GB of storage instead of 40GB) no-one would have noticed until the Service Level Agreement requirements were unfulfilled. That’s why I will always check that there is some element of design review and build/install verification.
However, I’ll usually review the specification, design, build and verification procedures as part of the initial audit to check that these reviews are part of the defined process. I’ll also spot check some of the IOQ records to check that the verification has been done. During subsequent surveillance audits I’ll also check the IOQ records as part of whatever sampling approach I’m taking (sometimes I’ll follow the end-to-end specification, design, build/installation and verification for a particular platform or sometimes I’ll focus on the IOQ process). I'm not looking to verify the build/installation of the infrastructure myself, but I am looking for evidence that there is a process to do this and that someone has done it.
IOQ needn’t be a particularly onerous process – the use of checklists and standard templates can help accelerate it and, as long as people are appropriately trained, I’m usually prepared to accept a signature to say that the review activity was done, i.e. a design specification signed by the reviewer.
As we've found in our own data center, if it's an integral part of the process (especially a semi-automated process) it doesn't have a significant impact on timescales and doesn't detract from the 'rapid elasticity' which is an essential characteristic of Cloud Computing. While issues of capacity are less of a problem in an extensible Cloud, the process of IOQ does help catch other types of error (patches not being applied, two or three steps in an automated install having failed etc).
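The mis-selected pick-list option described above (20GB of storage instead of 40GB) is exactly the kind of error a small automated design-review/verification check can catch. A minimal sketch, with hypothetical specification and as-built records:

```python
# Minimal IOQ-style verification sketch (hypothetical records): compare the
# as-built platform against the approved design specification and flag drift.
design_spec = {"memory_gb": 8, "cpus": 4, "storage_gb": 40}
as_built    = {"memory_gb": 8, "cpus": 4, "storage_gb": 20}  # wrong pick-list option

deviations = {k: (design_spec[k], as_built.get(k))
              for k in design_spec if as_built.get(k) != design_spec[k]}

if deviations:
    for item, (expected, actual) in deviations.items():
        print(f"DEVIATION: {item}: specified {expected}, built {actual}")
else:
    print("As-built matches design specification")
```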
Q. Your early descriptions were good, but how would you explain the concept of the cloud to a traditional Quality person with only a very basic knowledge of Network Architecture?
I don’t think I would!
Trying to explain what the Cloud is to a non-IT specialist in the Quality Unit is always going to be difficult if you take the approach of saying that the Cloud is undefined and the Users don’t need to know what’s going on.
The way to explain it is to say that although the Users in the Regulated Company don’t need to know what the Cloud is, the Regulated Company’s IT Department and their IT Quality Group do know what is going on in the Cloud, and that they have checked that it is appropriately controlled.
You then need to demonstrate to your Quality Unit that you do know what’s going on in the Cloud. If it’s a Private Cloud you do this by showing them diagrams and specifications, qualification documents and so on. If it’s a Public Cloud (or an externally hosted Private Cloud) you do this by showing that you have audited the Cloud Provider to check that they have the diagrams and specifications, qualification documents and so on.
It’s all about perception. It’s okay for the Users not to know what’s going on in the Cloud, but someone clearly has to be in control. This needs to be the appropriate subject matter experts (either your own IT people or the Cloud Service Providers) and your own IT Quality Unit.
If you’re a small company without the resources or technical knowledge to assess your Cloud Providers you can rely on independent consultants for this support, but you have to select the right consultants and demonstrate due diligence in their selection.
Q. In the event of a regulatory audit, when you are using cloud resources (non-private), how does the Cloud Service Provider’s responsibility factor in?
Basically, you need your Cloud Service Providers to be on the hook with you and this means clearly defining what support they will provide both in terms of day to day service level requirements and in the event of a regulatory inspection.
Again, let’s emphasize that regulatory authorities rarely look in the data center without due cause and although we are prepared for them to come to our data center in Wayne, we’re not aware of any regulators actually having visited a true third party hosting facility. (However, with the concerns the industry is demonstrating around this issue we think that it’s only a matter of time before they visit someone’s third party data center, somewhere).
The worst case scenario is when, during a regulatory inspection, an Inspector asks the question “Is the System Validated?” and you have to say “We don’t know…” That’s when further questions will be asked, the answers to which will eventually lead to your Cloud Service Provider. A failure to have properly assessed your Provider will clearly demonstrate to the regulatory authorities a lack of control.
We know of a LOT of Life Sciences Regulated Companies who have outsourced based solely on cost, with the process driven by IT management and the accountants. They usually accept the Provider’s standard service levels, and any involvement from quality/regulatory is often late and sometimes ignored. The result is that ‘compliance’ then becomes an added activity with added costs, the promised cost savings disappear, and there is often no right to audit, or obligation to support regulatory inspections, included in the Service Level Agreement. Our recommendations are:
- Always conduct a full audit well before signing a contract (at least two days on-site, at least a month before the contract is due for signing).
- Agree in the contract how and when any quality/compliance/control ‘gaps’ from the audit (and any surveillance audits) will be addressed.
- Identify the penalties for not addressing any quality/compliance/control ‘gaps’ in the contract (this might include reducing service charges to cover the cost of the Regulated Company’s additional quality/compliance/control activities or even cancellation of the contract – which we know one pharmaceutical company actually did).
- Include the right for surveillance audits in the contract.
- Include the need to support any regulatory inspections in the contract (this may never happen, so it can be a justifiable additional cost).
Finally we’d just like to highlight a comment made by one of the listeners “Audit and assessment of the provider should be seen as the Insurance Certificate!” This is an excellent point and really emphasizes the key issue about Cloud Computing – you need to dig below the surface, get behind all of the hype and really understand the what, who, where and how.
There’s no reason why Cloud Computing shouldn’t be used for regulatory purposes as long as Regulated Companies exercise their responsibilities and work with Service Providers who are willing to be open about what they are doing. As far as the Users are concerned, the Cloud is still a Cloud (on-demand, rapid elasticity etc), but the Regulated Company’s IT department and IT Quality group need to be in the Cloud with the Service Providers, understanding what’s going on and making sure that things are controlled.
Thank you again to everyone for their interest. The recording is still available online for anyone who didn’t catch the entire session and you can still register for the final webcast in the series via the Business & Decision Life Sciences website.