Over the last three days we've been taking part in and presenting at the third Global Outsourcing Conference, jointly organized by Xavier University and the US FDA.
Although not the best attended of conferences this year, it proved to be one of the best in terms of the content presented and the quality of the invited speakers, including a couple of keynote addresses from senior members of the US FDA. This resulted in some very interesting and beneficial discussions amongst the attendees, all of whom took home thought-provoking material and ideas for implementing positive change: better securing the supply chain, assuring product and patient safety, and optimizing the performance of their extended enterprises.
The conference looked at a wide range of outsourcing and supply chain issues, from the pragmatic management of outsourcing to supply chain management best practices, with a mixture of practical experience from the pharmaceutical industry and research from a number of leading universities working in the field (presentations are currently available on the Xavier GOC website).
Of significant interest were the FDA presentations looking at the implications of the recent FDA Safety and Innovation Act (FDASIA, due to be signed into law next month) and the changes this will bring to GMP and GDP regulations.
There was significant interest in the topic of serialization and ePedigree, which was covered in a number of sessions. The signs are that companies are now realizing that rolling these solutions out will be both necessary and more difficult than originally envisaged on the basis of simpler pilot studies.
Supplier selection, assessment and management were also key topics with the focus on developing partnerships and relationships as the best way of meeting forthcoming regulatory expectations for the management of suppliers.
Business & Decision presented a deep-dive session on the future challenges faced by ERP System and Process Owners, looking at the need to integrate with serialization systems, master data management systems and supply chain partners' systems. Acknowledging that many ERP systems were never designed to handle this level of integration, the session looked at how middleware such as Business Process Management (BPM) tools and service-oriented architecture (SOA) can be used to better integrate the supply chain.
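As a purely illustrative sketch (not something presented in the session), the integration pattern described above usually comes down to mapping ERP records onto a canonical message that the middleware or SOA layer routes to serialization and partner systems. Everything below - the record layout, field names and event schema - is hypothetical.

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class SerializationEvent:
    """Canonical message handed to the middleware/SOA layer (hypothetical schema)."""
    event_type: str
    gtin: str
    lot_number: str
    serial_numbers: list
    ship_to_partner: str


def erp_shipment_to_event(erp_record: dict) -> SerializationEvent:
    """Map a (hypothetical) ERP shipment record onto the canonical event.

    Keeping this mapping in the integration layer means the ERP itself does not
    need to change every time the serialization or partner systems do.
    """
    return SerializationEvent(
        event_type="commissioning_shipment",
        gtin=erp_record["material_gtin"],
        lot_number=erp_record["batch"],
        serial_numbers=erp_record["serials"],
        ship_to_partner=erp_record["customer_id"],
    )


if __name__ == "__main__":
    shipment = {
        "material_gtin": "00312345678906",
        "batch": "LOT1234",
        "serials": ["SN0001", "SN0002"],
        "customer_id": "WHOLESALER-42",
    }
    event = erp_shipment_to_event(shipment)
    # In practice this would be published to a message bus or BPM engine
    # rather than printed.
    print(json.dumps(asdict(event), indent=2))
```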
Outsourcing clearly isn't going away and, although some companies are looking to in-source some strategic products and services once again, the issues associated with outsourcing cannot be ignored. Although examples from India and China were much in evidence, it was also acknowledged that outsourcing risks do not solely exist in so-called 'emerging economies'.
These issues exist not only with products (APIs, excipients and other starting materials) but also with services such as IT services, and it is clear that the US FDA expects companies to better manage their suppliers and supply chains.
For pharmaceutical companies looking to get involved in the debate, there is the opportunity to follow the discussion on-line in the LinkedIn "Xavier Pharmaceutical Community".
In summary, the conference provided pharmaceutical companies with a comprehensive list of the topics they will need to address in the next one to three years, which now need to be developed into a road map leading to on-going compliance, improved product and patient safety and more efficient and cost-effective supply chain operations.
Tuesday, September 27, 2011
Software as a Service - Questions Answered
As we expected, last week's webcast on Software as a Service (Compliant Cloud Computing - Applications and SaaS) garnered a good deal of interest with some great questions and some interesting votes.
Cloud computing is, not surprisingly, the big topic of interest in the IT industry and much of business in general. Cloud will change the IT and business models in many companies and Life Sciences is no different in that respect.
We have covered this extensively during the last few months, drawing heavily on the draft NIST Definition of Cloud Computing, which is starting to become the de facto standard for talking about the Cloud - regardless of Cloud Service Providers constantly inventing their own terminology and services!
If you missed any of the previous webcasts they were
- Qualifying the Cloud: Fact or Fiction?
- Leveraging Infrastructure as a Service
- Leveraging Platform as a Service
There are of course specific issues that we need to address in Life Sciences and our work as part of the Stevens Institute of Technology Cloud Computing Consortium is helping to define good governance models for Cloud Computing. These can be leveraged by Regulated Companies in the Life Sciences industry, but it is still important to address the questions and issues covered in our Cloud webcasts.
As we described in our last session, Software as a Service isn't for everyone and although it is the model that many would like to adopt, there are very few SaaS solutions that allow Regulated Companies to maintain compliance of their GxP applications 'out-of-the-box'. This is starting to change, but for now we're putting our money (literally - investment on our qualified data center) into Platform as a Service, which be believe offers the best solution for companies looking to leverage the advantage of Cloud Computing with the necessary control over their GxP applications.
But on to those SaaS questions we didn't get around to last week:
Q. Are you aware of any compliant ERP solutions available as SaaS?
A. We're not. We work with a number of major ERP vendors who are developing Cloud solutions, but their applications aren't yet truly multi-tenanted (see SaaS webcast for issues). Other Providers do offer true multi-tenanted ERP solutions but they are not aimed specifically for Life Sciences. We're currently working with Regulated Company clients and their SaaS Cloud Service Providers to address a number of issues around infrastructure qualification, training of staff, testing of software releases etc, . Things are getting better for a number of Providers, but we're not aware of anyone who yet meets the regulatory needs of Life Sciences as a standard part of the service.
The issue is that this would add costs and this isn't the model that most SaaS vendors are looking for. It's an increasingly competitive market and it's cost sensitive. This is why we believe that niche Life Sciences vendors (e.g. LIMS, EDMS vendors) will get their first, when they combine their existing knowledge of Life Sciences with true multi-tenanted versions of their applications (and of course, deliver the Essential Characteristics of Cloud Computing - see webcasts)
Q. You clearly don't think that SaaS is yet applicable for high risk applications? What about low risk applications?
E-mail us at life.sciences@businessdecision.com
Unfortunately we ran out of time before we could answer all of your questions. We did manage to answer the following questions during the session (see the webcast recording for the answers):
- Would you agree that we may have to really escrow applications with third parties in order to be able to retrieve data throughout data retention periods?
- How is security managed with a SaaS provider? Do they have to have Admin access, which allows them access to our data?
- How do you recommend the Change Management (control) of the SaaS software be managed?
- How can we use Cloud but still have real control over our applications?
- What should we do if procurement and IT have already outsourced to a SaaS provider, but we haven't done an audit?
As promised, we have answered below the remaining questions that we didn't have time to address.
If you missed any of the previous webcasts, they were:
- Qualifying the Cloud: Fact or Fiction?
- Leveraging Infrastructure as a Service
- Leveraging Platform as a Service
There are of course specific issues that we need to address in Life Sciences and our work as part of the Stevens Institute of Technology Cloud Computing Consortium is helping to define good governance models for Cloud Computing. These can be leveraged by Regulated Companies in the Life Sciences industry, but it is still important to address the questions and issues covered in our Cloud webcasts.
As we described in our last session, Software as a Service isn't for everyone and, although it is the model that many would like to adopt, there are very few SaaS solutions that allow Regulated Companies to maintain compliance of their GxP applications 'out-of-the-box'. This is starting to change, but for now we're putting our money (literally - investment in our qualified data center) into Platform as a Service, which we believe offers the best solution for companies looking to leverage the advantages of Cloud Computing with the necessary control over their GxP applications.
But on to those SaaS questions we didn't get around to last week:
Q. Are you aware of any compliant ERP solutions available as SaaS?
A. We're not. We work with a number of major ERP vendors who are developing Cloud solutions, but their applications aren't yet truly multi-tenanted (see the SaaS webcast for the issues). Other Providers do offer true multi-tenanted ERP solutions, but they are not aimed specifically at Life Sciences. We're currently working with Regulated Company clients and their SaaS Cloud Service Providers to address a number of issues around infrastructure qualification, training of staff, testing of software releases, etc. Things are getting better for a number of Providers, but we're not aware of anyone who yet meets the regulatory needs of Life Sciences as a standard part of the service.
The issue is that this would add costs, and this isn't the model that most SaaS vendors are looking for. It's an increasingly competitive market and it's cost sensitive. This is why we believe that niche Life Sciences vendors (e.g. LIMS and EDMS vendors) will get there first, when they combine their existing knowledge of Life Sciences with true multi-tenanted versions of their applications (and, of course, deliver the Essential Characteristics of Cloud Computing - see the webcasts).
Q. You clearly don't think that SaaS is yet suitable for high-risk applications. What about low-risk applications?
A. Risk severity of the application is one dimension of the risk calculation. The other is risk likelihood, which is where you are heavily dependent on your Cloud Services Provider. If you select a good Provider with good general controls (a well-designed SaaS application, good physical and logical security, mature support and maintenance processes) then it should be possible to balance the risks and look at SaaS, certainly for lower risk applications.
It still doesn't mean that, as a Regulated Company, you won't have additional costs on top of the cost of the service. You need to align processes and provide on-going oversight, and you should expect that this will add to the cost and slow down the provisioning. However, it should be possible to move lower risk applications into the Cloud as SaaS, assuming that you go in with your eyes open and realistic expectations of what is required and what is available.
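As a rough sketch of the severity-versus-likelihood balancing described above (the scores and thresholds are invented for the example, not a recommended scale), the two dimensions can be combined into a simple matrix to indicate whether a SaaS option is worth pursuing:

```python
# Hypothetical risk matrix: combines application risk severity with the
# likelihood contribution of depending on a given SaaS provider.
SEVERITY = {"low": 1, "medium": 2, "high": 3}
LIKELIHOOD = {"good_provider_controls": 1, "average_controls": 2, "weak_controls": 3}


def saas_risk_class(severity: str, provider_controls: str) -> str:
    """Return an indicative risk class for moving an application to SaaS."""
    score = SEVERITY[severity] * LIKELIHOOD[provider_controls]
    if score <= 2:
        return "candidate for SaaS"
    if score <= 4:
        return "SaaS possible with additional oversight"
    return "keep in-house / PaaS for now"


print(saas_risk_class("low", "good_provider_controls"))  # candidate for SaaS
print(saas_risk_class("high", "average_controls"))       # keep in-house / PaaS for now
```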
Q. What strategy should we adopt towards the Cloud as a small-to-medium Life Sciences company?
A. This is something we're helping companies with and, although every organization is different, our approach is generally:
- Brief everyone on the advantages of Cloud, what the regulatory expectations are and what to expect. 'Everyone' means IT, Procurement, Finance, the business (Process Owners) and of course Quality.
- Use your system inventory to identify potential applications for Clouding (you do have one, don't you?). Look at which services and applications are suitable for Clouding (using the IaaS, PaaS and SaaS and Private/Public/Community models) and decide how far you want to go - a simple sketch of this kind of triage follows this list. For some organizations IaaS/PaaS is enough to start with, but for others there will be a desire to move to SaaS. Don't forget to think about new services and applications that may be coming along in foreseeable timescales.
- If you are looking at SaaS, start with lower risk applications, get your toe in the water and gradually move higher risk applications into the Cloud as your experience (and confidence) grows - this could take years and remember that experience with one SaaS Provider does not automatically transfer to another Provider.
- Look to leverage one or two Providers for IaaS and PaaS - the economies of scale are useful, but it's good to share the work/risk.
- Carefully assess all Providers (our webcasts will show you what to look for) and don't be tempted to cut audits short. It is time well worth investing and provides significant ROI.
- Only sign contracts when important compliance issues have been addressed, or are included as part of the contractual requirements. That way there won't be any cost surprises later on.
- Remember to consider un-Clouding. We've talked about this in our webcasts, but one day you may want to switch Provider or move some services or applications out of the Cloud.
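Here is the kind of simple inventory triage referred to in the second bullet above. It is only a sketch: the system names, attributes and decision rules are invented for the example and would need to reflect your own risk and data classifications.

```python
# Illustrative triage of a system inventory for "Clouding" candidates.
inventory = [
    {"name": "Document Management", "gxp_impact": "high", "multi_tenant_saas_available": False},
    {"name": "Training Records",    "gxp_impact": "low",  "multi_tenant_saas_available": True},
    {"name": "LIMS",                "gxp_impact": "high", "multi_tenant_saas_available": True},
]


def recommend_model(system: dict) -> str:
    """Suggest a starting cloud model for one system (invented rules)."""
    if system["gxp_impact"] == "low" and system["multi_tenant_saas_available"]:
        return "SaaS (start here)"
    if system["gxp_impact"] == "high":
        return "IaaS/PaaS first, revisit SaaS as experience grows"
    return "IaaS/PaaS"


for system in inventory:
    print(f'{system["name"]}: {recommend_model(system)}')
```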
E-mail us at life.sciences@businessdecision.com
Tuesday, September 20, 2011
GAMP® Conference: Cost-Effective Compliance – Practical Solutions for Computerised Systems
A very interesting and useful conference held here in Brussels over the past two days, with a focus on achieving IS compliance in a cost-effective and pragmatic way. It's good to see ISPE / GAMP® moving past the basics and getting into some more advanced explorations of how to apply risk-based approaches to projects and also to the operational phase of the system life cycle.
There was understandably a lot of discussion and highlighting of the new Annex 11 (Computerised Systems), with many of the presenters tying their topics back to the new guidance document, which has now been in effect for just two and a half months.
One of the most interesting sessions was when Audny Stenbråten, a Pharmaceutical Inspector of the Norwegian Regulator (Statens Legemiddelverk) provided a perspective of Annex 11 from the point of view of the regulator. It was good to see an open approach to the use of pragmatic risk-based solutions, but as was highlighted throughout the conference, risk-based approaches require a well-documented rationale.
Chris Reid of Integrity Solutions presented a very good session on Managing Suppliers and Service Providers and Tim Goossens of MSD outlined how his company is currently approaching Annex 11.
Siôn Wyn, of Conformity, provided an update on 21 CFR Part 11, which was really ‘no change’. The FDA are continuing with their add-on Part 11 inspections for the foreseeable future, with no planned end date and no defined plans on how to address updates or any changes to Part 11.
On the second day, after yours truly presented some case studies on practical risk management in the Business & Decision Life Sciences CRO and our qualified data center, Jürgen Schmitz of Novartis Vaccines and Diagnostics presented an interesting session on how IT is embedded into their major projects.
Mick Symonds of Atos Origin presented on Business Continuity in what I thought was an informative and highly entertaining presentation, but which was non-industry specific and was just a little too commercial for my liking.
Yves Samson (Kereon AG) and Chris Reid led some useful workshops looking at the broader impacts of IT Change Control and the scope and scalability of Periodic Evaluations. These were good, interactive sessions and I’m sure that everyone benefitted from the interaction and discussion.
In the final afternoon René Van Opstal (Van Opstal Consulting) gave an interesting presentation on aligning project management and validation, and Rob Stephenson (Rob Stephenson Consultancy) presented a case study on Decommissioning which, although it had previously been presented at a GAMP UK meeting, was well worth airing to a wider audience.
All in all it was a good couple of days with some useful sessions, living up to its billing as suitable for intermediate to advanced attendees. On the basis of this session I’d certainly recommend similar sessions to those responsible for IS Compliance in either a QA or IT role and I’m looking forward to the next GAMP UK meeting, and to presenting at the ISPE UK AGM meeting and also the ISPE Global AGM meeting later in the year.
Friday, November 19, 2010
Qualifying the Cloud: Fact or Fiction?
There was a great deal of interest in last Wednesday’s webcast “Qualifying the Cloud: Fact or Fiction?”. Cloud Computing is certainly an issue with a number of people and your responses during the session clearly indicate that there are some regulatory concerns.
Despite adding 15 minutes to the originally scheduled session, there were still more questions than we could fully answer in the time allowed. As promised, we have provided written answers to your questions below.
Q. In your audit and/or customer experience, have you found that a service level agreement (SLA) indicating demonstrable control over the infrastructure in the cloud is sufficient to meet GxP regulatory compliance, or are auditors still looking for IQ/OQ (installation/operational qualification) checklists against a specific list of requirements?
Different auditors look for different things, but let’s start by saying that it’s pretty rare for regulatory inspectors to spend any time in data centers unless there is due cause. Nowadays this is usually because of issues with an uncontrolled system that are encountered during a broader inspection.
When I am auditing on behalf of Life Sciences clients I will always look for evidence that IQ/OQ (or combined IOQ) is performed properly. By this I mean not just that the as-built/installed infrastructure matches the configuration management records, but that the as-built/installed infrastructure complies with the design specifications and client requirements.
I once audited a major managed services and hosting provider whose processes for building and installing infrastructure platforms were very good and highly automated – which is good for the rapid elasticity required in Cloud Computing. They literally selected the options off a pick list – how much memory, how many CPUs, what disk capacity, etc – and the system was built and installed accordingly in their data center.
However, there was no independent review of the specifications against the client requirements and no independent review of the as built/installed server platform against the specification. Configuration management records were generated directly from the as built/installed server and never compared against the specification.
As Neill described in the webcast, if someone had accidentally selected the wrong build option from the pick list (e.g. 20GB of storage instead of 40GB) no-one would have noticed until the Service Level Agreement requirements were unfulfilled. That’s why I will always check that there is some element of design review and build/install verification.
In practice, I’ll usually review the specification, design, build and verification procedures as part of the initial audit to check that these reviews are part of the defined process. I’ll also spot check some of the IOQ records to check that the verification has been done. During subsequent surveillance audits I’ll also check the IOQ records as part of whatever sampling approach I’m taking (sometimes I’ll follow the end-to-end specification, design, build/installation and verification for a particular platform, and sometimes I’ll focus on the IOQ process). I'm not looking to verify the build/installation of the infrastructure myself, but I am looking for evidence that there is a process to do this and that someone has followed it.
IOQ needn’t be a particularly onerous process – the use of checklists and standard templates can help accelerate it, and as long as people are appropriately trained I’m usually prepared to accept a signature as evidence that the review activity was done, i.e. a design specification signed by the reviewer.
As we've found in our own data center, if it's an integral part of the process (especially a semi-automated process) it doesn't have a significant impact on timescales and doesn't detract from the 'rapid elasticity' which is an essential characteristic of Cloud Computing. While issues of capacity are less of a problem in an extensible Cloud, the process of IOQ does help catch other types of error (patches not being applied, two or three steps in an automated install having failed, etc).
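To illustrate the kind of build/install verification discussed above (the attributes and values below are invented for the example), even a short comparison of the as-built configuration record against the approved design specification will catch the '20GB instead of 40GB' type of error:

```python
# Hypothetical design specification vs. as-built configuration record.
design_spec = {"cpu_count": 4, "memory_gb": 16, "storage_gb": 40, "os_patch_level": "2011-08"}
as_built    = {"cpu_count": 4, "memory_gb": 16, "storage_gb": 20, "os_patch_level": "2011-08"}


def ioq_verify(spec: dict, built: dict) -> list:
    """Return a list of deviations between the specification and the as-built record."""
    deviations = []
    for attribute, expected in spec.items():
        actual = built.get(attribute)
        if actual != expected:
            deviations.append(f"{attribute}: expected {expected}, found {actual}")
    return deviations


issues = ioq_verify(design_spec, as_built)
if issues:
    print("IOQ deviations to review:")
    for issue in issues:
        print(" -", issue)
else:
    print("As-built configuration matches the design specification.")
```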
Q. Your early descriptions were good, but how would you explain the concept of the Cloud to a traditional Quality person with only a very basic knowledge of network architecture?
I don’t think I would!
Trying to explain to a non-IT specialist in the Quality Unit what the Cloud is will always be difficult if you take the approach of saying that the Cloud is undefined and the Users don’t need to know what’s going on.
The way to explain it is to say that although the Users in the Regulated Company don’t need to know what the Cloud is, the Regulated Company's IT Department and their IT Quality Group do know what is going on in the Cloud, and that they have checked that it is appropriately controlled.
You then need to demonstrate to your Quality Unit that you do know what’s going on in the Cloud. If it’s a Private Cloud you do this by showing them diagrams and specifications, qualification documents and so on. If it’s a Public Cloud (or an externally hosted Private Cloud) you do this by showing that you have audited the Cloud Provider to check that they have the diagrams and specifications, qualification documents and so on.
It’s all about perception. It’s okay for the Users not to know what’s going on in the Cloud, but someone clearly has to be in control. This needs to be the appropriate subject matter experts (either your own IT people or the Cloud Service Providers) and your own IT Quality Unit.
If you’re a small company without the resources or technical knowledge to assess your Cloud Providers you can rely on independent consultants for this support, but you have to select the right consultants and demonstrate due diligence in their selection.
Q. In the event of a regulatory audit, when you are using cloud resources (non-private), how does the Cloud Service Provider's responsibility factor in?
Basically, you need your Cloud Service Providers to be on the hook with you and this means clearly defining what support they will provide both in terms of day to day service level requirements and in the event of a regulatory inspection.
Again, let’s emphasize that regulatory authorities rarely look in the data center without due cause and although we are prepared for them to come to our data center in Wayne, we’re not aware of any regulators actually having visited a true third party hosting facility. (However, with the concerns the industry is demonstrating around this issue we think that it’s only a matter of time before they visit someone’s third party data center, somewhere).
The worst-case scenario is when, during a regulatory inspection, an Inspector asks the question “Is the system validated?” and you have to say “We don’t know…”. That’s when further questions will be asked, the answers to which will eventually lead to your Cloud Service Provider. A failure to have properly assessed your Provider will clearly demonstrate to the regulatory authorities a lack of control.
We know of a LOT of Life Sciences Regulated Companies who have outsourced based solely on cost, with the process driven by IT management and the accountants. They usually accept the Provider's standard service levels, and any involvement from quality/regulatory is often late and sometimes ignored. The result is that ‘compliance’ then becomes an added activity with added costs, the promised cost savings disappear and there is often no right to adequately audit, or obligation to support regulatory inspections, included in the Service Level Agreement. Our advice in these situations is:
- Always conduct a full audit well before signing a contract (at least two days on-site, at least a month before the contract is due for signing).
- Agree in the contract how and when any quality/compliance/control ‘gaps’ from the audit (and any surveillance audits) will be addressed.
- Identify the penalties for not addressing any quality/compliance/control ‘gaps’ in the contract (this might include reducing service charges to cover the cost of the Regulated Company's additional quality/compliance/control activities, or even cancellation of the contract – which we know one pharmaceutical company actually did).
- Include the right for surveillance audits in the contract.
- Include the need to support any regulatory inspections in the contract (this may never happen, so it can justifiably be charged as an additional cost if and when it does).
Finally, we’d just like to highlight a comment made by one of the listeners: “Audit and assessment of the provider should be seen as the Insurance Certificate!” This is an excellent point and really emphasizes the key issue about Cloud Computing – you need to dig below the surface, get behind all of the hype and really understand the what, who, where and how.
There’s no reason why Cloud Computing shouldn’t be used for regulated purposes as long as Regulated Companies exercise their responsibilities and work with Service Providers who are willing to be open about what they are doing. As far as the Users are concerned, the Cloud is still a Cloud (on-demand, rapid elasticity, etc), but the Regulated Company's IT department and IT Quality group need to be in the Cloud with the Service Providers, understanding what’s going on and making sure that things are controlled.
Thank you again to everyone for their interest. The recording is still available online for anyone who didn’t catch the entire session and you can still register for the final webcast in the series via the Business & Decision Life Sciences website.
Tuesday, November 9, 2010
Supplier Involvement - Don't Sign the Contract!
GAMP 5 tells us that Regulated Companies should be leveraging Supplier Involvement in order to efficiently take a risk-based approach to validation - but surely that's no more than common sense? Anyone contracting services from a Supplier should be looking to get as much out of their Suppliers as possible.
We constantly hear stories from clients, complaining about how poor some of their Suppliers are and in some cases the complaints are justified - software full of bugs, known problems not being acknowledged (or fixed), failure to provide evidence of compliance during audits, switching less skilled resources for the consultants you expected and so on.
The software and IT industry is no better or worse than any other - there are good Suppliers and less good Suppliers, but in a market such as Life Sciences the use of a less good Supplier can significantly increase the cost of compliance and in some rare circumstances place the safety of patients at risk.
Two years after the publication of GAMP 5 and five years after the publication of the GAMP "Testing of GxP Systems" Good Practice Guide (which leveraged the draft ASTM E2500), the Life Sciences industry is still:
- Struggling to understand how to get the best out of Suppliers,
- Complaining about compliance issues associated with outsourcing.
This is especially true when it comes to defining quality and compliance requirements, so it's no wonder that Life Sciences companies struggle to leverage their Suppliers when they've failed to define what it is that they really expect.
In many cases quality and compliance people are involved in the selection of suppliers too late in the process to add any real value. In some circumstances there is no viable option other than going with a 'less good' supplier (for instance, when a new Supplier has a really novel application or service that adds real competitive advantage), but in most cases it is possible to identify any gaps and agree how they should be rectified prior to signing a contract.
However, once a contract is signed it's too late to define quality and compliance requirements without Suppliers claiming that these are 'extras' which are outside the contract. While I've heard Regulated Companies make statements like "as a supplier to the Life Sciences industry you must have known that we'd need copies of your test results" (or whatever it is) you can't rely upon those unstated expectations in a court of law.
The result is that achieving the required quality and compliance standards often costs more than anticipated, either because the Supplier charges extra or the Regulated Company picks up the cost of the additional quality oversight. Very few Life Sciences companies have actually achieved the promised cost savings with respect to the outsourcing of IT services, usually because the people driving the contract (purchasing, finance and IT) don't really understand what is required with respect to quality and compliance.
When Business & Decision are engaged in a supplier selection process we tell clients "don't sign the contract until you're happy - that's the best leverage you'll ever have over a Supplier" and it's advice worth repeating here.
At its best, the IT sector is a mature and responsible industry with standards and best practices that can be leveraged to assure that clients' requirements are met. It's just a pity that the Life Sciences industry - which prides itself on being in control of most things it does - can't find a way to effectively leverage good practices like GAMP and standards like ISO 9001 and ISO 20000 to select and leverage the best IT Suppliers.
Wednesday, April 21, 2010
Answers to Webcast Questions - Compliant Business Intelligence and Analytics in Life Sciences
Thank you to everyone who attended the webcast "Compliant Business Intelligence and Analytics" and who submitted questions. The recording is now on-line and subscribers can download the slides from the Business & Decision Life Sciences website via the Client Hub.
Listed below are the questions that we didn't have time for in the live webcast, along with the answers we promised to provide.
Q. How could BI be beneficial in an IT industry "IT Project"?
A. IT projects and processes are another subset of business processes and Business Intelligence and Analytics can certainly be applied there. The use of Key Performance Indicators in IT Projects and Processes was covered extensively in our webcast "Measuring IS Compliance Key Performance Indicators". This includes the use of Business Intelligence applications for supporting project and process improvement, both in terms of efficiency and cost effectiveness and also in terms of regulatory compliance.
Q. How would you qualify a BI solution provider (if one ever needed to be hired for a project)?
A. No differently from qualifying any other vendor. We would focus on the maturity of the solution provider in terms of:
- Track record in Business Intelligence (do they know the specific technology/application, can they help develop a BI strategy and architect a BI solution?).
- Track record in Life Sciences (and in the particular business domain [e.g. clinical trials versus sales and marketing] and the particular sector [e.g. pharmaceuticals, medical devices, biomedical, etc.]).
Assuming that a BI solution had already been selected we would also look to the BI vendors to make recommendations with respect to which solution provider they would recommend.
When combined, these factors would reduce a list of potential suppliers to a manageable number.
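As a simple illustration (the criteria, weights and scores below are invented for the example, not a recommended scheme), the track-record factors above can be combined in a basic weighted-scoring sheet to reduce a long list of providers to a shortlist:

```python
# Hypothetical weighted scoring of BI solution providers.
criteria_weights = {
    "bi_track_record": 0.4,
    "life_sciences_track_record": 0.4,
    "vendor_recommendation": 0.2,
}

providers = {
    "Provider A": {"bi_track_record": 4, "life_sciences_track_record": 5, "vendor_recommendation": 3},
    "Provider B": {"bi_track_record": 5, "life_sciences_track_record": 2, "vendor_recommendation": 4},
}


def weighted_score(scores: dict) -> float:
    """Combine the criterion scores using the agreed weights."""
    return sum(criteria_weights[criterion] * scores[criterion] for criterion in criteria_weights)


# Rank providers by score, highest first.
shortlist = sorted(providers, key=lambda name: weighted_score(providers[name]), reverse=True)
for name in shortlist:
    print(f"{name}: {weighted_score(providers[name]):.1f}")
```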
Supplier selection seems to be a question that has been asked a few times in various webcasts and is something we'll look at covering in more detail in a future webcast.
Thanks again for joining us for the webcast, and if there are any follow-up questions you can submit them via the Life Sciences website.
Thursday, February 11, 2010
Risk Likelihood of New Software
Here's a question submitted to validation@businessdecision.com - which we thought deserved a wider airing.
Q. To perform a Risk Assessment you need experience about the software performance. In the case of new software without previous history, how can you handle it?
A. We are really talking about the risk likelihood dimension of risk assessment here.
GAMP suggests that when determining the risk likelihood you look at the ‘novelty’ of the supplier and the software (we sometimes use the opposite term – maturity – but we’re talking about the same thing).
If you have no personal experience with the software you can conduct market research – are there any reviews on the internet, any discussions on discussion boards or is there a software user group the Regulated Company could join? All of this will help to determine whether or not the software is ‘novel’ in the Life Sciences industry, whether it has been used by other Regulated Companies and whether there are any specific, known problems that will be the source of an unacceptable risk (or a risk that cannot be mitigated).
If it is a new product from a mature supplier then you can only assess risk based on the defect/support history of the supplier's previous products and an assessment of their quality management system. If it is a completely new supplier to the market then you should conduct an appropriate supplier assessment and would generally assume a high risk likelihood, at least until a history is established through surveillance audits and use of the software.
All of these pieces of information should feed into your initial high level risk assessment and be considered as part of your validation planning. When working with ‘novel’ suppliers or software it is usual for the Regulated Company to provide more oversight and independent verification.
At the level of a detailed functional risk assessment the most usual approach is to be guided by software categories – custom software (GAMP Category 5) is generally seen as having a higher risk likelihood than configurable software (GAMP Category 4), although this is not always the case (some configuration can be very complex). Our recent webcast on "Scaling Risk Assessment in Support of Risk Based Validation" has some more ideas on risk likelihood determination which you might find useful.
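As a very simple illustration of the above (the mapping is invented for the example; GAMP does not prescribe fixed or numeric values), supplier novelty and software category can be combined into a starting point for risk likelihood, to be refined as audit findings and usage history accumulate:

```python
# Illustrative only: a starting-point mapping of GAMP software category and
# supplier novelty to a default risk likelihood. Adjust with real supplier
# history, audit findings and the complexity of any configuration.
CATEGORY_LIKELIHOOD = {3: "low", 4: "medium", 5: "high"}  # GAMP categories 3/4/5


def initial_likelihood(gamp_category: int, supplier_is_novel: bool) -> str:
    """Return a default risk likelihood for planning purposes."""
    likelihood = CATEGORY_LIKELIHOOD.get(gamp_category, "high")
    if supplier_is_novel and likelihood != "high":
        # New supplier with no track record: assume one level higher until
        # surveillance audits and usage history say otherwise.
        likelihood = {"low": "medium", "medium": "high"}[likelihood]
    return likelihood


print(initial_likelihood(4, supplier_is_novel=True))   # high
print(initial_likelihood(5, supplier_is_novel=False))  # high
print(initial_likelihood(3, supplier_is_novel=False))  # low
```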