Thursday, November 22, 2012

FDA Draft Guidance “Electronic Source Data in Clinical Investigations”

The revised FDA draft guidance “Electronic Source Data in Clinical Investigations” provides guidance to clinical trial sponsors, CROs, data managers, clinical investigators and others involved in the capture, review and archiving of electronic source data in FDA-regulated clinical investigations.

The original January 2011 draft guidance has been updated to clarify a number of points raised with the FDA by industry commentators, and the new draft guidance has been published to collect additional public comments.

It's good to see industry and regulators working to develop guidance on the use of electronic Case Report Forms (eCRFs), recognising that capturing clinical trial data electronically at source significantly reduces the number of transcription errors requiring resolution, does away with unnecessary duplication of data and provides more timely access for data reviewers.

While much of the guidance contained in the draft would be seen as common sense in much of the industry, it does start to provide a consensus on important issues such as associating authorised data originators with data elements, the scope of 21 CFR Part 11 with respect to the use of such records, and interfaces between medical devices or Electronic Health Records and the eCRF.

No doubt a number of the recommendations contained in the draft guidance document will be of concern to software vendors whose systems do not currently meet the technical recommendations provided. We will therefore surely see a variety of comments from “non-compliant” vendors trying to water down the recommendations until such time as their systems can meet what is already accepted good practice.

One key issue that would appear to be missing is the use of default values on eCRFs, which we know has been a concern in a number of systems and clinical trials, i.e. where the investigator has skipped over a field, leaving the data element at the default value. This is something we have provided feedback on and we would encourage everybody in the industry to review the new draft guidance and provide comments.

You can view a copy of the new draft guidance at http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM328691.pdf and comment through the usual FDA process at https://www.federalregister.gov/articles/2012/11/20/2012-28198/draft-guidance-for-industry-on-electronic-source-data-in-clinical-investigations-availability

Thursday, November 15, 2012

Penetrating the Cloud

IVT's Conference on Cloud and Virtualization (Dublin, 13-14th November 2012) was everything I'd hoped it would be. After two years of conference sessions simply peering into the Cloud to understand what it is, or sticking a head into the Cloud to see what the risks are, it was good to spend two days looking through the Cloud to see how these risks can be managed and to review some case studies.

It served to endorse our opinion that while generalist Cloud providers are either not interested in the needs of the Life Sciences industry, or are still struggling to understand Life Sciences requirements, what some people have called the 'Pharma Cloud' (or 'Life Sciences Cloud', and what we define as Compliant Cloud Computing) is here. As we report in one of our latest Perspectives opinion pieces, while specialist providers are relatively few, Infrastructure, Platform and Software as a Service can now be provisioned in a manner that meets the expectations of most regulators.

While it would have been good for an organization such as ISPE to provide such clarity, well done to IVT for organizing events in the US and Europe and giving people a chance to unpack such issues. To be fair to ISPE, many GAMP sessions have looked at Cloud at country specific meetings and conferences, but the topic really does need a couple of days to get your head around.

What also emerged was the ability to select the right Cloud model, including On-Premise options, and discussions with a number of delegates confirmed the attractiveness of the Compliant Cloud Anywhere solution (IaaS, installed On-Premise but owned and operated by a specialist Cloud Services provider).

At the end of the IVT event delegates (many of whom are from QA or IT Quality) went home with a much better understanding of what Cloud and Virtualization are and what the risks are. Perhaps more importantly, what also emerged were some good examples of how to mitigate the risks and the outline of a strategy to move further into the Cloud without risking regulatory compliance.

As we'll explore in our webcast "State of the Art in Compliant Cloud Computing", relatively few Life Sciences companies have a real Cloud Strategy that also addresses regulatory compliance, and this is quickly becoming a necessity for organizations looking to take advantage of the business benefits that Cloud and Virtualization offer.

As clarity emerges we expect to see things move significantly further into the Cloud in the next 12 months - "watch this space" as they say!

Wednesday, November 7, 2012

Applying Anti-Virus and Automatic Updates

Another question from LinkedIn, which has been popping up quite a few times on-line lately. We therefore thought we'd share the question and answer with a wider audience.

Q. What is the impact of Microsoft patch upgrades on validated computer systems? How should we consider them?

A. The GAMP IT Infrastructure Good Practice Guide and GAMP 5 have good appendices on patch management, which includes security patches.

These patches (and the updating process) are pretty mature by now and are generally considered to be of low risk likelihood. The impact on validated systems should therefore be low risk.

In most cases, for low-medium risk enterprise systems / IT platforms, organizations rely on automatic updates to protect their systems, because the risk of contracting some malware or leaving a security vulnerability open is greater than that of applying an 'untested' patch - the security patches are of course tested by Microsoft and should be fine with most validated systems / applications.

However, for some systems controlling processes that directly impact product quality, another strategy is often applied: place such systems on a segregated (almost isolated), highly protected network domain, do not allow automatic updating of patches, and instead update manually.

Placing them on such a protected network limits business flexibility but significantly reduces the likelihood of most malware propagating to such systems, or of malware being able to access such systems to exploit security vulnerabilities. If such systems (e.g. SCADA) are using Microsoft Windows it may well be an older version, and these can be particularly vulnerable to malware, especially if connected to the Internet via anything less than a robust and multi-layered set of security controls. (For licensing reasons, I once uninstalled the malware protection from a machine that was being decommissioned, running Windows XP - which is still relatively common in some parts of the corporate world - and even sitting behind a reasonably secure firewall it was exploited in less than two minutes...)

In these cases anti-malware should be installed and Windows updates applied, but applied manually after assessing the patch, i.e. reading the knowledge base articles. Applying a patch which has not been tested by the regulated company with the specific control software may pose a greater risk to the system, and hence to product safety. In these cases the regulated company will test patches in a test environment and patch relatively infrequently by hand, or only to fix known issues.

Key to all of this is a risk-based patching strategy, which should be defined in e.g. a Security Policy and appropriate SOPs. Key considerations are (a simple illustration follows the list):
  • Understanding the risk vulnerability of different platforms e.g. Windows XP vs Windows 7 vs Windows Server etc
  • Understanding the risk vulnerability of different network segments
  • Understanding the risk likelihood of automatically applying updates i.e. the extent of the interaction between the operating system and validated applications
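To make those considerations concrete, here is a minimal Python sketch of the kind of decision logic such a policy might capture. The platform names, categories and strategies are illustrative assumptions, not values taken from the GAMP guides.

from dataclasses import dataclass

@dataclass
class System:
    name: str
    platform: str            # e.g. "Windows XP", "Windows Server 2008" (examples only)
    segregated: bool         # True if on a segregated, highly protected network domain
    direct_gxp_impact: bool  # True if the system directly impacts product quality

LEGACY_PLATFORMS = {"Windows XP", "Windows Server 2003"}  # illustrative list only

def patch_strategy(system: System) -> str:
    """Suggest a patching approach based on the considerations listed above."""
    if system.direct_gxp_impact or system.segregated:
        # Quality-critical or segregated systems: assess and test patches first,
        # then apply manually (no automatic updates).
        return "manual patching after assessment and testing"
    if system.platform in LEGACY_PLATFORMS:
        # Older, more vulnerable platforms still on the corporate network.
        return "automatic updates plus additional compensating controls"
    return "automatic updates (standard enterprise policy)"

if __name__ == "__main__":
    for s in (System("ERP application server", "Windows Server 2008", False, False),
              System("Packaging line SCADA node", "Windows XP", True, True)):
        print(f"{s.name}: {patch_strategy(s)}")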

Friday, November 2, 2012

Collaborating on Collaboration - Validating Open Source Software


I'm delighted to hear that "Open Source Software in Life Science Research: Practical Solutions to Common Challenges in the Pharmaceutical Industry and Beyond" has just been published (available from Woodhead Publishing and also on Amazon).
 

The book has an interesting history, having started life as a discussion on LinkedIn on the use of open source software in the Life Sciences industry. After a very illuminating discussion it was suggested that we write a book on the subject, and so Lee Harland of Pfizer and Mark Forster of Syngenta U.K. agreed to take on the editing of the book, which would look at how open source software is used in our industry.

The result is a comprehensive look at the current state of the market, with chapters covering the use of various open source tools and packages for predictive toxicology, mass spectrometry, image processing, data analytics, high throughput genomic data analysis, web-based collaboration and many, many more applications. As well as addressing general issues, the book looks at specific tools and applications and is a useful reference for anyone looking for a guide to the kind of software that is out there (many of these applications are quite well 'hidden' on the Internet!)

Without doubt, open source software is widely used in pharmaceutical research and development and is transforming the way the industry works in many ways. Open source software has many advantages - it's free to acquire and rather than wait for a software vendor to understand and respond to market requirements, the developer community just goes ahead and extends functionality in the direction that research and development needs.

My contribution to the book ("Validation and Regulatory Compliance of Free/Open Source Software") sounds a slightly more cautious note, highlighting when open source software may require validation, the challenges of validating open source software and sharing some ideas on how to go about it - including collaboratively!

This can be challenging, time consuming and does of course have costs associated with it, which is why we see less open source software in GMP areas. However, I have no doubt that the trend to collaboratively develop niche applications will continue to expand, especially with the prevalence of mature software development tools and Platform-as-a-Service development environments.

The process of collaborating to write a book is quite interesting - just like collaborating to develop software, you're not quite sure what you're going to get, sometimes contributors head off at a tangent, and you're never quite sure when everything is going to be delivered. Well done to Lee and Mark for sticking with the process. I think that the end result is well worth the wait.

Thursday, November 1, 2012

Retrospective Validation of Enterprise Systems

Yesterday was the fourth and final stop on our virtual book tour, looking at important topics from "Validating Enterprise Systems: A Practical Guide" (published by the Parenteral Drug Association and available via their bookstore)

In yesterday's session (recording and slides available here) we looked at the topic of retrospectively validating enterprise systems, including why it is important to retrospectively validate systems that have not previously been validated and the various reasons which lead to retrospective validation.

As we discussed yesterday, some of these reasons are understandable and acceptable to regulatory authorities while other reasons (such as ignorance or cost avoidance) are not.

We got through most of your questions yesterday but there were a couple of questions we didn't quite get around to answering so here they are.

Q. How is retrospective validation different to normal validation?

A. Many of the actual activities are identical but when retrospectively validating the system it is possible to leverage existing documentation (if any) and actual experience with the system.
Where good documentation exists and has been kept up-to-date the task of retrospective validation can be relatively quick. However, when little or no documentation exists it can take almost as long to retrospectively validate a system as it did to implement it.

If the system has been maintained and supported using well documented processes such as help desk, incident management and problem management it will also be possible to leverage this information and use it to inform detailed risk assessments. With actual empirical data it is possible to make a more accurate assessment of risk likelihood and probability of detection.

Where a system was well implemented and has been well maintained this additional information can be justifiably used to limit the extent of the verification activities required as part of the retrospective validation.

Where this empirical data highlights problems or issues with the system it can also be used to determine which areas of the system require greatest focus as part of the risk-based validation effort.

This can mean that in some cases retrospective validation can be more successful than prospective validation in terms of appropriately addressing real risks in a cost-effective manner. However, as we stated in the webcast yesterday, because retrospective validation is not conducted as part of the implementation activities it is generally significantly more expensive than prospective validation which is well integrated with the original implementation effort. For this reason retrospective validation should be avoided wherever possible.

Q. How common is retrospective validation? Do many companies have this problem?

A. There is a good deal of variation in the industry. At one end of the scale there are well-managed companies who are well aware of their regulatory responsibilities and who prospectively validate all appropriate enterprise systems at the time of implementation. At the other end of the scale there are companies who are either ignorant of the need to validate certain enterprise systems, or who choose to ignore the requirement in order to save money.

To some extent this depends on the maturity of the market and of the organisation. The most mature organisations have learned that risk-based prospective validation adds relatively little to the cost of implementation and provides good return on investment in terms of more reliable and robust solutions.

Less mature organisations still do not understand the cost benefit of risk-based validation or are not aware of the need to validate certain enterprise systems. While to some extent the US and Europe are more mature markets in this regard, this is not universally the case. There are still new start-ups and companies where profit margins are slim who still do not prospectively validate their systems.

In markets which have historically been seen as less mature (e.g. Asia, Africa, Latin and South America) there is a growing realisation of both the need for and the attractiveness of validation with respect to implementing reliable and robust enterprise systems which help to streamline business operations. While retrospective validation is currently quite common in these markets (as they move to meet the growing expectations of changing local regulations and look to export products to markets where the need for validation is already well established) this will change over time - and quite rapidly if other indicators of change are reflected.

This means that while retrospective validation will continue to be required in many markets for a number of years to come (and hence the need for a chapter on the subject in "Validating Enterprise Systems: A Practical Guide") we predict that this will be the exception within the next 5-10 years, with retrospective validation becoming rarer in all markets.


Thanks to all of you who have supported the virtual book tour. We do hope you will join us for our forthcoming series of IS compliance and computer system validation webcasts over the next few months (details and registration available here).

Tuesday, October 23, 2012

Using non-ERES Compliant Business Intelligence Tools in Support of GxP Decision Making

Another interesting question here - this time posted in the LinkedIn 21 CFR Part 11 discussion group on the use of business intelligence tools to support GxP decision making. However, the business intelligence tools are not 21 CFR Part 11 compliant!

Q. We are implementing a 'system' which comprises replicating software from Oracle and a destination replicated Oracle database. The purpose of this 'system' will be to run Oracle and Cognos reports from this replicated database instead of the high-transaction source database. This system will be accessed only by the DBA.

The replicating software is pretty much command line based, which is to say it does not have a GUI. From the command prompt user interface of this software, we can define (enter commands for) the source and target database; define the number of processes to run; and set filters for what data are to be replicated and the frequency of replication.

We did an assessment and found that part 11 is applicable. The problem is that we cannot satisfy all the part 11 requirements.

Although we deal with e-records (we store the data from the source system, including the setup/configuration of the replication process), how do we justify that e-signatures, audit trail requirements or password aging/restrictions are not applicable and not supported by the replicating software?


A. We've just finished doing exactly this for another project where we set up a reporting database to reduce the load on the transactional system. Some of the reports from the data warehouse were GxP significant.
 
We also had similar issues consolidating medical device traceability data for the device history record from signed records in SAP to the SAP Business Warehouse (BW) where we lost the secure relationship between the signed records in the transactional database and the copy in BW.
 
It's all about clearly identifying what records and data are used for what purposes, documenting the rationale and design, developing or updating appropriate SOPs and training your users and DBAs accordingly.
 
The first thing to do is to clearly differentiate between the records that you will ultimately rely on for regulatory purposes versus the data that you use for reporting. We maintained (and clearly documented) that the signed records in the transactional system would always be the ultimate source of data for regulatory decision-making (i.e. product recall, CAPA investigations etc). Where applicable these were signed and could be verified and were still Part 11 compliant.
 
Our rationale was that the data warehouse (and associated GxP reports) were acting as a grand 'indexing system' which allowed us to find the key regulatory records much faster (which has to be good for patient safety). We would not use these reports for making regulatory critical decisions, but we did use them to more quickly find the key records in the transactional database which we could then rely upon for regulatory decision-making. Under that rationale the records and signatures which were subject to Part 11 remained in the transactional database.
 
We made sure that SOPs for key processes such as product recall, CAPA investigation etc were updated to reflect the appropriate use of the reports and the signed transactional records. We were able to demonstrate that using the data warehouse reports to locate records was two to three times faster than running queries and filters in the transactional system. In our testing we were also able to demonstrate that we always found the correct record (more than 99.9% of the time) unless something had subsequently been changed in the transactional database (less than 0.1% of the time), and even where this was the case the discrepancy was obvious and would not lead to erroneous decision-making.
 
However, that's not to say that the GxP significant reports were not important. We still validated the software which replicated the databases, we qualified the underlying IT infrastructure and we validated the GxP significant reports.
 
We essentially had three categories of report:
  • Non-GxP reports, which were not validated (this was the majority of reports)
  • GxP significant reports, which were not based upon signed copies of Part 11 records, but which were validated.
  • GxP significant reports, which were based upon signed copies of Part 11 records held in the transactional system. These were validated and were clearly annotated to the effect that they should not be relied upon for regulatory decision-making, and they also gave a reference to the signed record in the transactional database.
 
On both of these projects we had much better tools for replicating the data. Since you're using Oracle databases we would recommend that you create (and maintain under change control/configuration management) PL/SQL programs to replicate the databases. This will significantly reduce the likelihood of human error, allow you to replicate the databases on a scheduled basis and make it much easier to validate the replication process.
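To illustrate the shape of such a controlled, scheduled replication step, here is a minimal Python sketch using the standard DB-API cursor interface; in practice this would be written as an Oracle PL/SQL program as recommended above, and the table layout, the last_modified column and the connection objects are assumptions for illustration only.

def replicate_table(src_conn, dst_conn, table, columns, last_sync_ts):
    """Copy rows changed since the last successful run from source to target.

    src_conn/dst_conn are assumed to be DB-API connections (e.g. from an
    Oracle driver); 'last_modified' is an assumed change-tracking column.
    """
    col_list = ", ".join(columns)
    placeholders = ", ".join(["?"] * len(columns))  # paramstyle is driver-specific

    src_cur = src_conn.cursor()
    dst_cur = dst_conn.cursor()

    # Select only the rows modified since the last successful replication run.
    src_cur.execute(
        f"SELECT {col_list} FROM {table} WHERE last_modified > ?",
        (last_sync_ts,),
    )
    rows = src_cur.fetchall()

    if rows:
        dst_cur.executemany(
            f"INSERT INTO {table} ({col_list}) VALUES ({placeholders})",
            rows,
        )
    dst_conn.commit()

    # Return the row count so each scheduled run can be logged and reviewed.
    return len(rows)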
 
For more background (discussion paper and webcast) on the use of validated Business Intelligence tools for GxP decision making, see our website at http://www.businessdecision-lifesciences.com/2297-operational-analytics-in-life-sciences.htm.




Monday, October 22, 2012

Validating Clouded Enterprise Systems - Your Questions Answered

Thank you once again to those of you who attended the latest stop on our virtual book tour, which looked at the validation of enterprise systems in the Cloud, in relation to chapter 17 of "Validating Enterprise Systems: A Practical Guide".

Unfortunately we had a few technical gremlins last Wednesday (both David Hawley and myself independently lost Internet access at our end just before the webcast was due to start) and so the event was postponed until Friday. Our apologies again for that, but we nevertheless received quite a number of registration questions which were answered during the event (you can find a recording of the webcast and copies of the slides here).

We did manage to get through the questions that were asked live during the webcast but we received one by e-mail just after the event which we thought we would answer here in the blog.

Q. "What elements should go into a Master VP for Clouded application / platforms?

A. It depends on the context in which the phrase Master Validation Plan is being used. In some organisations a Master Validation Plan is used to define the approach to validating computerised systems on an individual site, in an individual business unit or, as will be the case here, for applications in the Cloud.

In other organisations a Master Validation Plan is used to define the common validation approach which is applied to an enterprise system which is being rolled out in multiple phases to multiple sites (each phase of the roll-out would typically have a separate Validation Plan defining what is different about the specific phase in the roll-out)

Logically, if we are implementing a Clouded enterprise application it could (and often would) be made available to all locations at virtually the same time. This is because there is limited configuration flexibility with a Software-as-a-Service solution and different sites have limited opportunities for significant functional differentiation. In this context it is unlikely that the second use of a Master Validation Plan would be particularly useful, so we'll answer the question in the first context.

Where a Master Validation Plan is being used to define the approach to validating Clouded enterprise systems it needs to define the minimum requirements for validating clouded applications and provide a framework which:
  • Recognises the various cloud computing models (i.e. Infrastructure-as-a-Service, Platform-as-a-Service and Software-as-a-Service; Private Cloud, Community Cloud, Public Cloud and Hybrid Cloud; On-Premise and Off-Premise)
  • Categorises platforms and applications by relative risk and identifies which cloud models are acceptable for each category of platform/application, which models are unacceptable and which ones may be acceptable with further risk controls being put in place
  • Identifies opportunities for leveraging provider (supplier) activities in support of the regulated company's validation (per GAMP 5/ASTM E2500)
  • Stresses the importance of rigorous provider (supplier) assessments, including thorough pre-contract and surveillance audits
  • Highlights the need to include additional risk scenarios as part of a defined risk management process (this should include risks which are specific to the Essential Characteristics of Cloud Computing as well as general risks with the outsourcing of IT services)
  • Lists additional risk scenarios which may need to be considered, depending upon the Cloud Computing model being looked at (these are discussed in our various webcasts)
  • Identifies alternative approaches to validating clouded enterprise systems. This would most usefully identify how the use of Cloud computing often prevents traditional approaches to computer systems validation from being followed and identifies alternative approaches to verifying that the Software-as-a-Service application fulfils the regulated company's requirements

With respect to the last point our webcast "Compliant Cloud Computing - Applications and Software as a Service" discusses issues with the validation of Software-as-a-Service applications using traditional approaches and outlines alternative verification techniques that can be used.

Whether it is in a Master Validation Plan or some form of Cloud strategy document, it is important for all regulated companies to start to think about how they will validate Clouded applications. This is clearly a topic that is not going to go away and is something that all life sciences companies will need to address.

You may also be interested to know that on 15th November 2012 we're going to be looking more closely at the current state of the Cloud computing market, specifically with respect to meeting the needs of regulated companies in the life sciences industry. We'll be talking about where the market has matured and where appropriate providers can be leveraged - and where it hasn't yet matured. Registration is, as ever, free of charge and you can register for the event at the Business & Decision Life Sciences website.

We look forward to hearing from you on the last stage of our virtual book tour when we'll be looking at the retrospective validation of enterprise systems, which we know is a topic of great interest to many of our clients in Asia, Eastern Europe, the Middle East and Africa and in Latin and South America.

Wednesday, October 17, 2012

Part 11 and "Disappearing" Signature Manifestations

An interesting question appeared on-line today which we thought deserved airing with a wider audience via the blog.

Q. When implementing an electronic document management system, is it acceptable to make the author/approver names and dates disappear? Is this still in compliance with 21 CFR Part 11.50, Signature Manifestations?

A. Let's remind ourselves of the relevant rule:

§ 11.50 Signature manifestations.
(a) Signed electronic records shall contain information associated with the signing that clearly indicates all of the following:
(1) The printed name of the signer;
(2) The date and time when the signature was executed; and
(3) The meaning (such as review, approval, responsibility, or authorship) associated with the signature.
(b) The items identified in paragraphs (a)(1), (a)(2), and (a)(3) of this section shall be subject to the same controls as for electronic records and shall be included as part of any human readable form of the electronic record (such as electronic display or printout).


The first question to ask ourselves is one of scope. Not all of the documents stored in the EDMS will fall within the scope of 21 CFR Part 11. In fact, this is a notoriously difficult area in which to interpret the predicate rules. Some rules clearly state that documents need to be signed; in other areas this must be inferred from the use of words like "authorised" or "approved".

The first thing to do, therefore, is to clearly decide which categories of document fall within the scope of 21 CFR Part 11, or (to be on the safe side) to decide that the approval of all documents will meet the technical requirements of Part 11.

Looking at the specific question of signature manifestations, § 11.50(b) clearly states that the name of the person signing, the date and time of the signature and the meaning of the signature must be included in any printout or electronic display.

Making the names and dates "disappear" in some way clearly contravenes the requirements of 11.50(b) if these components are not readable in either the on-screen display or the hardcopy printout of the document. If this were to be implemented we would consider the solution to be non-compliant with Part 11, at least with respect to the documents that fall within the scope of Part 11.

Monday, October 8, 2012

PLCs and GAMP Categorization

Here's another interesting question and answer that we thought we'd share on-line with a wider audience:

Q. We are developing a standard template to help in identifying and classifying PLCs into GAMP 5 categories. Could you please guide us in developing the right tool for this?

A. The 'questions' to include are relatively simple and relate to the GAMP definitions of the categories, specifically in the context of PLCs - for example:
- Is the PLC (or parts of the software) used 'as is' with no modification or with simple changes to parameters e.g. run time, setpoint etc (typically with a machine or piece of equipment)? [Cat 3]
- Is the PLC (or parts of the software) reconfigured using standard graphical programming languages e.g. ladder logic, function blocks? [Cat 4]
- Is the PLC (or parts of the software) programmed e.g. are you writing code to achieve a function or operation that is not a standard feature of the PLC? [Cat 5]

You should also remember that it is quite likely that PLCs will contain a combination of Cat 3, 4 and 5 software and that your validation approach should reflect this.
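As a simple illustration (and not an official GAMP tool), the three screening questions above could be encoded along these lines, returning the set of categories present in a given PLC:

def plc_gamp_categories(uses_unmodified_or_parameterised: bool,
                        uses_standard_config_languages: bool,
                        contains_custom_code: bool) -> set:
    """Map the answers to the three screening questions to GAMP categories.

    A real PLC often contains a mixture, so a set of categories is returned.
    """
    categories = set()
    if uses_unmodified_or_parameterised:
        categories.add(3)  # used 'as is' or with simple parameter changes
    if uses_standard_config_languages:
        categories.add(4)  # configured with ladder logic, function blocks, etc.
    if contains_custom_code:
        categories.add(5)  # bespoke code for non-standard functions
    return categories

# Example: configured function blocks plus some bespoke code -> Cat 4 and Cat 5
print(plc_gamp_categories(False, True, True))  # {4, 5}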

However, the on-going problem you will have is two-fold:

1. PLC systems have a great deal of variation in the way they are parametrized/configured/coded, and sometimes the lines are blurred, e.g. some 'configuration' can be as complex and error prone (risk likelihood) as traditional coding. You can't possibly expect to come up with questions and guidance on every type of PLC out there.

2. You need to educate people on what all of these terms mean, in the context of the PLC you are looking at. This takes time and experience.

The only time I have seen this done successfully is when a company standardises on just one or two manufacturers' PLCs, and can then provide a checklist / guidance based on the manufacturer's specific software development techniques.

When developing the validation approach you will also have to take into account the work that may already have been done by the supplier if the PLC is part of an embedded system.

If you are implementing a lot of PLC systems it will be worth developing an SOP or guidance document on the topic. You will however need to develop some subject matter expertise to be able to guide and support your engineers, because this is a specialist area and needs experience to do cost effectively.

Friday, October 5, 2012

Reflecting on 21 years of GAMP

Last Tuesday saw the 21st anniversary celebrations of the GAMP Forum, now of course part of ISPE. It was a great opportunity to reflect on the history of GAMP and to catch up with some of the original founder members.

Although much of the talk was backward looking, rehashing events that led to the formation and subsequent growth of the GAMP Forum, Randy Perez (current chairman of ISPE) did reflect upon the role that GAMP has played and continues to play within ISPE, i.e. the GAMP Community of Practice has the best-selling publications, the best-attended conferences etc.

No one stopped at the time to really comment on why that might be. Having thought about it, it occurs to me that this simply reflects the increasing importance of computerised systems not only in the pharmaceutical industry but also in everyday life. The pace of technology change is unending and it is perfectly understandable why GAMP came into being and why it continues to look into topical issues such as cloud computing, mobile platforms etc.

As long as this technology change continues apace GAMP will always have a role to play in applying well founded good practices to new technologies and new applications. There is however a significant challenge that ISPE/GAMP faces with the pace of technology change.

Looking back over recent years it appears that new technologies appear and are adopted by leading edge regulated companies faster than organisations such as ISPE/GAMP are currently able to respond. This is perfectly natural, because the strength of organisations like ISPE and GAMP is that they are consensus driven and volunteer led. The time taken to achieve consensus and the limited time available from volunteers means that it takes months or years to discuss new technologies, understand the implications, identify risks and how they can be mitigated, and then publish consensual good practice.

However, we're all aware that other technologies such as blogging, websites and social networking allow interaction between industry professionals in much shorter timescales. In many cases we are starting to see individuals and commercial organisations provide pragmatic and acceptable guidance well ahead of organisations such as GAMP/ISPE. The other part of the challenge is that although ISPE exists to serve the needs of the pharmaceutical community, the move towards greater outsourcing means that it is very often suppliers who are the subject matter experts with new technologies and consultants who have a broader experience in how new challenges are being tackled across the industry.

The challenge for ISPE and the GAMP Community of Practice is to get the balance right between achieving consensual good practices which regulatory agencies can buy in to and providing guidance in a timely manner. This will require more widespread use of some traditional channels such as ISPE Pharmaceutical Engineering and the greater use of Internet channels such as webcasts, web publishing and social networking. This will also mean continuing towards a model where suppliers and consultants provide valuable input but users from regulated companies are the final arbiters of what is acceptable with respect to good practice.

In some cases this will mean identifying a smaller number of thought-leading subject matter experts and asking them to focus on providing pragmatic guidance in shorter timescales. This is certainly the way commercial organisations such as IVT and Concept Heidelberg are working when organizing conferences and commissioning articles, and although ISPE/GAMP is a not-for-profit organisation it's important to realise that the lines between not-for-profit and commercial are indistinct in these areas. Another part of the challenge will be to identify appropriate subject matter experts in new technologies who may not be working in the pharmaceutical industry and who may not be part of the existing ISPE/GAMP community.

These challenges can however be overcome and ISPE is certainly moving towards this model, led, as has so often been the case, by the GAMP Community of Practice.

Over the last 21 years GAMP has done an excellent job in providing practical guidance to the industry during what has certainly been the greatest period of technological change the industry has seen. The fact that this has been led by volunteers (of which both I and Business & Decision Life Sciences are proud to be a part) is perhaps one of the most amazing parts of the GAMP story. The fact that this extended community has developed good practices behind which most regulated companies and regulatory agencies now stand is a significant achievement and certainly one to be celebrated.

Happy 21st birthday GAMP - and here's to many more!

Thursday, October 4, 2012

Practical Risk Management - Webcast Follow Up

Thanks once again to those of you who joined us for yesterday's webcast - the second stop on our "virtual book tour" which looked at practical risk management. We had a good number of questions asked as part of the registration process which we handled in yesterday's webcast (you can watch a recording of the webcast and download the slides here) but unfortunately we didn't have time to answer all of your questions that were asked during the session.

As usual, we've taken the time to answer the outstanding questions here on the Life Sciences blog.

Q. Can you say more about regulators who are worried about misused risk assessments?

A. During the webcast we mentioned that a number of inspectors from European and US regulatory agencies had commented that they have concerns about the quality of risk assessments and the resulting validation. These comments have been made during informal discussions and in one case at a conference.

Their concern is that the resulting validation is either not broad enough in terms of scope or not rigorous enough in terms of depth and that this has been uncovered during inspection of what they believe to be relatively critical systems. In a couple of cases inspectors have commented that they believe that this is a case of the companies involved using risk assessment as an excuse to reduce the level of effort and resources applied in validating such systems.

We know from their comments that in a number of cases this has led to inspection observations and enforcement actions, and it appears that a number of regulatory inspectors are, in their words, "wise to the trick". As we said in the webcast yesterday, it is important that the scope and rigour of any validation is appropriate to the system and that the risk assessment is used to determine which areas and functions in the system require greater focus. The objective of risk-based validation is not simply to reduce the level of effort and expenditure but to ensure that efforts and resources are applied most appropriately.

Q. How much time and effort can be saved by using the right risk assessment approach?

A. Our experience is that by using a relative risk assessment process rather than a quantitative risk assessment process it is possible to reduce the time and effort spent on assessing risks by between 50% and 75%. We have also studied the outputs of both types of risk assessment process on very similar systems and it is encouraging to note that in many cases both processes have provided very similar outputs in terms of the distribution of high, medium and low risk priorities, both in the relative number of items in each risk priority grouping and in the functions allocated to each group.

This means that for enterprise systems with lower risk it is possible to reduce the time spent assessing risks by half or three quarters and still come up with results which are sufficiently accurate to support appropriate risk-based validation. This is why it is so important that regulated companies have a variety of risk management processes and tools available to them so they can use the most appropriate and cost-effective approach.

Q. When would you use a quantitative risk assessment approach? For what type of systems?

A. You would typically use a quantitative risk assessment approach where it is necessary to distinguish low, medium and high risk impact amongst a variety of requirements or functions that are all or are mostly of high GxP significance. In this case a quantitative (numeric) approach allows you to take a more granular view and again focus your verification activities on the requirements or functions which are of the highest risk impact.

Typically these will be systems which are safety critical, and while this approach could be very useful for manufacturing systems, for enterprise systems we see it being used for the most critical systems such as adverse event systems (AES), LIMS systems used for product release, MES etc. Even with these systems quantitative risk assessment can be used on a selective basis for those modules which the initial risk assessment determines to be most critical.
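Purely as an illustration of the quantitative idea, a common FMEA-style scheme (not necessarily the scheme discussed in the webcast) scores each function numerically and ranks by the resulting risk priority number; the functions and scores below are invented examples.

def risk_priority(severity: int, likelihood: int, detectability: int) -> int:
    """severity and likelihood: 1 (low) to 5 (high);
    detectability: 1 (almost certain to be detected) to 5 (unlikely to be detected)."""
    return severity * likelihood * detectability

# Example scores (entirely illustrative) for functions of an enterprise system.
functions = {
    "Adverse event case intake": (5, 3, 4),
    "Batch release reporting":   (5, 2, 3),
    "Ad-hoc trend reports":      (2, 3, 2),
}

# Rank the functions so verification effort can be focused on the highest scores.
for name, scores in sorted(functions.items(),
                           key=lambda item: risk_priority(*item[1]),
                           reverse=True):
    print(f"{name}: RPN = {risk_priority(*scores)}")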

Q. Who should conduct the risk assessment of an EDMS system supporting the whole enterprise?

A. Risk assessments cannot be conducted alone. This was a key point brought out in this week's GAMP UK meeting, where we ran a risk assessment exercise and it was clearly valuable to have a variety of opinions and experience feeding into the process. You need people who understand the requirements, the business processes and the resulting risks to give their expertise with respect to risk impact.

You also need technical subject matter experts from the engineering or IT group who are much more likely to understand the risk likelihood. Both groups can contribute to thinking about risk detectability, either in terms of detecting risks within the system or as part of the normal business process checks.

It is therefore very important to invite the right people with the right breadth and depth of knowledge to any risk assessment exercise and to allow sufficient time for the relevant risk scenarios to be identified and assessed.


Thank you as ever for your interesting questions - we hope you find the answers above useful. Remember that you can join us on 17th October when we will be looking at the very thorny issue of validating enterprise systems in the Cloud as Software-as-a-Service (registration is free and is open here).

Wednesday, September 26, 2012

Global Outsourcing Conference Report

Over the last three days we've been taking part in and presenting at the third Global Outsourcing Conference, jointly organized by Xavier University and the US FDA.

Although not the best attended of conferences this year, it proved to be one of the best in terms of content presented and the quality of the invited speakers, including a couple of key note addresses from senior members of US FDA. This resulted in some very interesting and beneficial discussions amongst the attendees, all of whom have taken home some thought provoking material and ideas for implementing positive change in terms of better securing the supply chain, assuring product and patient safety and in optimizing the performance of their extended enterprises.

The conference looked at a wide range of outsourcing and supply chain issues, covering the pragmatic management of outsourcing and supply chain management best practices, with a mixture of practical best practices from the pharmaceutical industry and research and experience from a number of leading universities working in the field (presentations are currently available on the Xavier GOC website).

Of significant interest were the FDA presentations looking at the implications of the recent FDA Safety and Innovation Act (FDASIA) and the changes that this will bring to GMP and GDP regulations.

There was significant interest in the topic of serialization and ePedigree, which was covered in a number of sessions, and signs are that companies are now realizing that rolling these solutions out will be both necessary and more difficult than originally envisaged when compared to simpler pilot studies.

Supplier selection, assessment and management were also key topics with the focus on developing partnerships and relationships as the best way of meeting forthcoming regulatory expectations for the management of suppliers.

Business & Decision presented a deep dive session on the future challenges faced by ERP System and Process Owners, looking at the need to integrate with serialization systems, master data management systems, and supply chain partners systems. Acknowledging that many ERP systems were never designed to handle such a level of integration, the session looked at how middleware solutions such as Business Process Management solutions and SOA can be used to better integrate the supply chain.

Outsourcing clearly isn't going away and although some companies are looking to in-source some strategic products and services once again, the issues associated with outsourcing cannot be ignored. Although examples from India and China were much in evidence, it was also acknowledged that outsourcing risks do not solely exist in so-called 'emerging economies'.

These issues exist not only with products (APIs, excipients and other starting materials), but also with services such as IT services, and it is clear that the US FDA expects companies to better manage their suppliers and supply chain.

For pharmaceutical companies looking to get involved in the debate there is the opportunity to follow the discussion on-line in the LinkedIn "Xavier Pharmaceutical Community".

In summary, the conference provided pharmaceutical companies with a comprehensive list of the topics they will need to address in the next 1 - 3 years, which now need to be developed into a road map leading to on-going compliance, improved product and patient safety and more efficient and cost-effective supply chain operations.






Thursday, September 20, 2012

Validating Enterprise Systems - Questions Answered

Thanks to those of you who attended the first session on our 'virtual book tour' - in which we looked at some of the basics of validating corporate applications, based on the book "Validating Enterprise Systems: A Practical Guide".

A number of people submitted questions before the webcast which we answered during the session and we also had time to answer a couple of questions submitted at the end of the session. You can see a recording of the webcast and download a copy of the slides at the Business & Decision Life Sciences website.

There were however a couple of questions that we didn't get around to and as usual we've addressed them here:

Q. What challenges do you see for Enterprise Systems in the next five years?

A. Over the past five years we've seen the functional footprint of systems such as ERP, CRM, LIMS, PLM etc being extended beyond their traditional boundaries (as discussed in the webcast). However, even where business processes are extending beyond traditional 'silos' in the business, the extended business process is still very much within the enterprise.

We are already seeing Life Sciences companies struggle to extend their business processes beyond their own boundaries and this is becoming even more important as companies collaborate and outsource more. (I'm talking about this very issue at the Xavier / FDA Global Outsourcing Conference next week).

Enterprise systems will therefore need to be capable of orchestrating business processes and consolidating data from across the extended enterprise - meaning across business partner, supplier and even customer processes and systems. While the tools and technical standards exist to start to do this today, very few companies are actually taking this step and it is an area which promises true business advantage.

Q. How do you see the use of Service Oriented Architecture changing the game with respect to validation?

A. This is really linked to the previous question. SOA - and Business Process Orchestration - is one of the enabling technologies that allow end-to-end business processes to be defined, monitored and optimized both within the traditional enterprise boundary and across the extended enterprise. These changes will bring compliance and validation challenges.

Key regulatory business processes - such as adverse events management and product recall - will need to be coordinated across the extended enterprise. This will require validation of the end-to-end business process as well as the computer systems validation of the individual applications and Middleware / SOA components.

Like the technology, the techniques for this already exist, but relatively few organizations know how to achieve such compliance efficiently in complex organizations and with such integrated architectures. This will be a challenge to business process owners, to IT and especially to Quality, IT Compliance and CSV professionals, who sometimes tend to lag behind in applying well established principles to new technologies. It's therefore important that we keep up not only with the changes that have taken place over the past five years, but with the changes that are still to come.


In the next stage of the 'virtual book tour' we'll be looking at how the approach to risk management has changed in the last five years - we hope you'll be able to join us for that session, which is on 3rd October (registration is available here)

Wednesday, August 29, 2012

Parameterised IQ Protocols

Another question from the 21CFRPart11 forum - not strictly relating to ERES, but interesting all the same:

Q. I am wondering about a project and how the FDA could see it as a validated way to execute qualification protocols.
 

There is the idea: we validated our document management system, where we validated the electronic signature, and documents can be developed as PDF forms, where some fields are able to be written to... and in this case, we could develop our qualification protocols as PDF forms, with the mandatory fields for protocols able to be written, and filled with the qualification info.

Is this a situation which FDA could see as a correct way to develop and execute protocols?


A. There should be no problem at all with the approach, as long as the final protocols (i.e. with the parameters entered) are still subject to peer review prior to (and usually after) execution.

We do this with such documents and we have also used a similar approach using HP QualityCenter (developing the basic protocol as a test case in QualityCenter and entering the parameters for each instance of the protocol that is being run).

The peer review process is of course also much simpler, because the reviewer can focus on the correct parameters having been specified rather than the (unchanging) body of the document.

Scalable Database Controls for Part 11 Compliance

A good question posted on-line in the 21CFRPart 11 Yahoo Group, going to the heart of security controls around Electronic Records

Q. Consider an electronic record system utilizing a typical database where a User ID and Password combination is used as an electronic signature to sign the records. Assume the front-end application has sufficient controls to prevent manipulation of the signature by the users.

Without getting too technical, what types of controls and solutions have you folks seen in terms of ensuring compliance with the requirements of 11.70 in regards to access to the database on the back end, typically by a DBA? 

How do you ensure that the linkage from record to signature is secure enough in the database to prevent manipulation by the DBA via ordinary means (i.e. with standard functionality and tools)?

Is there a risk justification for allowing the DBA some ability to manipulate the signature linkage, or should it be prevented by all but extraordinary means (i.e. using non-sanctioned tools, hacking, etc.)?


A. Without getting too technical there are a variety of controls that can be used. Which is implemented depends on risk and should be decided based upon the outcome of a documented risk assessment.

The first step is to use digital rather than electronic signatures. At its simplest, this means calculating some sort of 'hash' (a fancy checksum) over the contents of both the record and the signature components. If either the record or the signature components are changed, any attempt to recalculate the hash will come up with a different answer and you will then know that something has changed.
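As a minimal sketch of that principle (and not of any particular product's implementation), a keyed hash can be computed over the combined record and signature content and re-verified later; the field names and key handling below are illustrative assumptions.

import hashlib
import hmac
import json

SECRET_KEY = b"application-held-secret"  # illustrative; manage securely in practice

def seal(record: dict, signature: dict) -> str:
    """Compute a keyed hash binding the record content to its signature."""
    payload = json.dumps({"record": record, "signature": signature},
                         sort_keys=True).encode("utf-8")
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify(record: dict, signature: dict, stored_seal: str) -> bool:
    """Recompute the hash; any change to record or signature breaks the match."""
    return hmac.compare_digest(seal(record, signature), stored_seal)

record = {"batch": "B1234", "result": "PASS"}             # illustrative field names
signature = {"signer": "J. Smith", "meaning": "approval",
             "signed_at": "2012-08-22T10:15:00Z"}

stored = seal(record, signature)
print(verify(record, signature, stored))   # True - nothing has changed
record["result"] = "FAIL"                  # tamper with the record content
print(verify(record, signature, stored))   # False - the change is detected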

It won't however tell you what was changed - for that you need to rely on audit trails, which can be set up either at the application level (looking at changes to the defined record) or at the database level, to identify any inadvertent or accidental changes.

There is still the chance that the DBA can turn off the audit trails, so with some databases (e.g. Oracle) you have a couple of other options. The first is to write the audit trails into a separate database schema to which the DBA does not have access. This can use something like Audit Vault technology, which means the audit trails are written to a separate database server. The second is to make the database tables 'virtual private' database tables, which means the DBA can't even see them, let alone access them.

There may be circumstances when the DBA needs to access the database to make changes at the data level e.g. data corruption, or an accidental deletion by a user. It is permissible to make corrections under these circumstances, but you need a strict change control procedure to record what is being fixed, why and when (i.e. to preserve the audit trail). In such circumstances you would typically issue the DBA with a 'Super DBA' user account and password to make the changes, and then change the password afterwards (obviously someone needs to be trusted at some point).

One important principle is the idea of 'motivation' - why would a DBA want to make such changes? Organisational controls should be in place (including training and sanctions) to prevent e.g. a DBA and a system user from working in collusion to falsify records. Clearly system users should not also be DBAs.

It is therefore possible to make your Part 11 records and signatures very, very secure, but the more secure they are the more complex and expensive the solution. That's where the risk assessment is important - to identify what is appropriate for any given set of records.

Friday, August 24, 2012

Measuring the Value of Validation?

An interesting question today on measuring the value of validation and assigning resources accordingly - something we see very few Life Sciences organizations doing well.

Q. How do others measure the value that validation creates within an organisation?
Does anyone have any experience of assigning value to validation activities?

I'm interested in how others allocate resources and how, within the validation planning process, limits are or can be defined in terms of the cost of man-hours against the economic benefits created by validation.
 
A. We have a process as part of our 'Lean IS Compliance' programme to put KPIs in place and measure cost and compliance levels (http://www.businessdecision-lifesciences.com/1170-lean-is-compliance.htm if anyone is interested). The challenge with measuring the value of validation is that it's difficult to compare projects with and without validation.
 
Most projects are different and many companies only see the cost of validation and not the value.
However, it can be done when you are, for example, rolling out an ERP system within a wider organisation and some business units need the system validated (APIs) and other business units do not (bulk chemicals).
 
Even in these cases most organisations only measure the cost and see validation as a 'negative', but if you are clever (which is what Lean IS Compliance is all about) you can also measure the value in terms of hard metrics (time and cost to fix software defects that make it into production) as well as soft metrics (user satisfaction when the system does - or does not - work right first time).
 
However, these are general benefits which accrue to any software quality process.
 
Although the principle of risk-based validation is to assign greater resources to systems and functions of greater risk, most companies again see this as an opportunity to reduce the level of resources assigned to lower risk systems/functions, and the focus is again not on benefits.
 
It is possible to look at the implementation of systems of comparable size/complexity, where one system is high risk (and has more validation resources/focus) and another system has low/no risk (and few/no validation resources). Our work clearly shows that the higher risk systems do indeed go into production with fewer software issues and that this does have operational benefit (hard and soft benefits).
 
However, few companies track and recognise this and cannot correlate the investment in validation (quality) with the operational benefits. This is often an accounting issue, because the costs are usually capital expenditure and the benefits are associated with (lower) operational expenditure.
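
As a rough, entirely hypothetical illustration of the kind of correlation we mean - comparing the capital cost of validation against the operational cost of fixing defects that reach production - consider the sketch below. The figures are invented purely to show the shape of the calculation.

```python
# A rough sketch of correlating validation investment (capital cost) with the
# operational cost of fixing defects that reach production. All figures are
# hypothetical and exist only to show the shape of the calculation.

validated = {"validation_cost": 80_000, "production_defects": 3}
unvalidated = {"validation_cost": 0, "production_defects": 25}
AVG_COST_TO_FIX_IN_PRODUCTION = 10_000  # hypothetical 'hard metric'

def total_cost(deployment):
    """Capital (validation) cost plus operational cost of production fixes."""
    return (deployment["validation_cost"]
            + deployment["production_defects"] * AVG_COST_TO_FIX_IN_PRODUCTION)

print("Validated deployment:  ", total_cost(validated))
print("Unvalidated deployment:", total_cost(unvalidated))
print("Net benefit attributable to validation:",
      total_cost(unvalidated) - total_cost(validated))
```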
 
To really pull together a programme like Lean IS Compliance, you need:
  • Quality/Regulatory to truly accept a risk based approach (and that enough is enough)
  • IT to understand the value of software quality activities (to which formal validation adds a veneer of documentation)
  • Accountants to be willing to look at ROI over longer timescales than is often the case.

Thursday, August 23, 2012

Risk-Based Approach to Clinical Electronic Records/Signatures

Q. I'm looking at complying with regulations in order to validate some lab software collecting data from an ICP. Anyone have any ideas on attacking this? The software claims to be "part 11 capable" but it's pretty soft. E-sigs are 'loose' and audit trails are also 'loose'. For something like this, do you feel that attaching a risk assessment to each part of the regs, to determine what level of testing to perform, is the right approach?

A. The reality is that with the 2003 Scope and Application guidance introducing a risk-based approach to e-records, the ability to use digital, electronic or no signatures (the latter being conditional on predicate rule requirements), and with the risk-based approach in Annex 11, taking anything other than a risk-based approach makes no sense.

You should therefore conduct a risk assessment against each of the applicable parts of Part 11, and also for each applicable record and signature. The latter is necessary because the risk impact associated with different records/signatures may well be different (in terms of risk to the subject and subsequent risk to the wider patient population), and also because different technical controls may be applied to different areas of the system.

This will allow you to assess the risk impact, likelihood and detectability for each record/signature and to determine whether the in-built controls are appropriate to the risk. If they are not, you can find alternative solutions (e.g. print out the records and sign them, or validate a process to copy data/records to an external, secure system and sign them there) or introduce additional procedural controls. If no alternative controls are acceptable then you may well need to be looking at an alternative piece of software.
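
Neither Part 11 nor Annex 11 prescribes a particular scoring scheme, but as a purely illustrative sketch the per-record assessment might be structured along the following lines (the record types, scores and threshold are all invented for the example):

```python
# An illustrative (not prescriptive) way of structuring the per-record
# assessment: score impact, likelihood and detectability for each record or
# signature type and compare against a threshold agreed in your procedures.
# All record names, scores and the threshold are hypothetical.

records = {
    "analysis result (signed)": {"impact": 3, "likelihood": 2, "detectability": 2},
    "instrument audit trail":   {"impact": 2, "likelihood": 1, "detectability": 1},
    "user access log":          {"impact": 1, "likelihood": 1, "detectability": 1},
}

ACTION_THRESHOLD = 8  # agree the scales and threshold in your risk procedure

for name, scores in records.items():
    priority = scores["impact"] * scores["likelihood"] * scores["detectability"]
    action = ("additional controls / testing" if priority >= ACTION_THRESHOLD
              else "in-built controls acceptable")
    print(f"{name}: risk priority {priority} -> {action}")
```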

Wednesday, August 22, 2012

Verifying Computer Clocks in Production

Another good question asked online, that we share here...

Q. In a mechanically driven process, when Time is a critical parameter, the timer would be qualified for accuracy and reliability. Probably challenged against another calibrated clock for various durations that would align with the process requirements/settings. No doubt Preventive Maintenance and Calibration schedules would be developed and approved during OQ.

For automated processes that have Time as a critical parameter, where Time is measured by the PLC's internal clock, what strategies are best to provide documented evidence that the internal clock is accurate and reliable (especially over time)?


A. It's a good question - I can remember a certain DCS (back in the 1980s) that had two versions of its real time clock chip, one packaged in ceramic and the other in plastic. The plastic clock chips could lose more than 15 minutes in an hour (honestly - I had to write the code to work around the issue).

On the basis of risk, and following ASTM E2500 / GAMP principles of leveraging the work of your supplier, if you believe that things have improved in the last 20 years you can assume that the system is acceptably accurate and include a check of elapsed time as part of your OQ or PQ.

Confirming this across a number of intervals and process runs over time should allow you to confirm that the clock is sufficiently accurate across a range of intervals with negligible drift over time (you will need to define what 'acceptable' is in your protocol and use a time reference to measure real time).
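
As a minimal, hypothetical sketch of such an elapsed-time check (the readings and the acceptance limit are invented; your protocol would define the actual acceptance criterion and the time reference used):

```python
# A minimal sketch of an elapsed-time check of the kind described above.
# The PLC readings, reference readings and acceptance limit are hypothetical;
# the acceptance criterion should be defined in the OQ/PQ protocol.

# Paired elapsed-time readings in seconds, taken over several intervals:
# (plc_elapsed, reference_elapsed)
readings = [
    (600.03, 600.0),     # 10 minute interval
    (3600.2, 3600.0),    # 1 hour interval
    (86404.0, 86400.0),  # 24 hour interval
]

MAX_DRIFT_PPM = 100  # hypothetical acceptance limit: 100 parts per million

for plc, ref in readings:
    drift_ppm = abs(plc - ref) / ref * 1_000_000
    verdict = "PASS" if drift_ppm <= MAX_DRIFT_PPM else "FAIL"
    print(f"interval {ref:>8.0f}s: drift {drift_ppm:6.1f} ppm -> {verdict}")
```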

Taking a risk based approach, I would only expect to specifically test the clock if I had reason to suspect that there might be an issue.

When are e-Signatures Required?


We answer a lot of questions submitted to us by e-mail or in various on-line groups. We thought that some of these deserve a wider audience, so from now on we're going to publish the best questions (and answers) here on the Life Sciences Blog.

Q. I had a question regarding 21 CFR Part 11 electronic records. If a system has electronic records but no electronic signatures, but does contain an audit trail, does that mean each record created within the system has to be printed in hard copy and signed, so as to associate that record with a signature?

A. The fact that the system has an audit trail which links the user name and action to that record is sufficient to meet the 21 CFR Part 11 electronic records criteria. If the record requires no signature (either explicit or implied from a predicate rule perspective) then there is no need to print and sign anything (neither copies of the record nor the audit trail).

The (secure) recording of what, who (optionally) and why (inferred from the transaction) will be sufficient to demonstrate compliance with the predicate rule.

A good example would be a training record. The predicate rule does not require training records to be signed, but you would still want to record what (training was undertaken), who (who was trained) and why (the training topic or syllabus). Even though an audit trail would be maintained for the training record, there would be no need to print and sign it because the predicate rule requires no signature.
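
As a purely hypothetical illustration of that principle, such a training record might look something like the sketch below: the audit trail captures who did what and when, but no signature is applied because none is required by the predicate rule. The field names and values are invented.

```python
# A hypothetical training record illustrating the what/who/why principle:
# the audit trail captures who did what and when, but no signature is
# applied because the predicate rule does not require one.
from datetime import datetime, timezone

training_record = {
    "what": "GMP refresher training completed",       # the training undertaken
    "who": "j.smith",                                  # who was trained
    "why": "Annual GMP refresher - syllabus GMP-101",  # topic/syllabus
    "audit_trail": [
        {
            "action": "record created",
            "user": "j.smith",
            "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        }
    ],
    "electronic_signature": None,  # not required by the predicate rule
}
print(training_record)
```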

Tuesday, August 21, 2012

Cloud Computing Comes of Age in Life Sciences

For a while now we have been saying the Cloud was coming of age in the Life Sciences industry.

Business & Decision, along with a small number of other Providers, has been providing Infrastructure-as-a-Service and Platform-as-a-Service for some time.

We also said that as far as Software-as-a-Service was concerned, we would see Life Sciences specialist vendors (e.g. LIMS, Quality Management, Learning Management etc) providing compliant Software-as-a-Service solutions - simply because they understand our industry both at the functional level and also at the regulatory level.

We are working with a number of such vendors to deploy their software on our Platform-as-a-Service solutions, leveraging virtualization to provision solutions that are inherently flexible, scalable and - perhaps just as importantly - compliant.

At the same time, we have just started to engineer our first compliant 'Cloud Anywhere' solutions - which allow us to deploy pre-engineered and pre-qualified Platforms (hardware, power, HVAC, storage, virtualization, operating systems, database servers and application servers) anywhere in the world. This was an idea first developed with Oracle for their Exadata and Exalogic machines (for which Business & Decision developed standard Qualification Packs).

Based upon a wider and more affordable technology base, ‘Cloud Anywhere’ allows Business & Decision to leverage our investment in our Quality Management System to provision compliant Private or Community Cloud solutions with the minimum of additional qualification activities. These can be installed on client sites, in third party data centres or in the data centres of our software partners.

As well as the deployed Platform itself, these 'Cloud Anywhere' solutions come complete with Managed Services from Business & Decision - meaning that clients, partners etc. no longer need to worry about the management of the Platform. All of this is taken care of remotely by our own staff (with the exception of local power and network connections, of course) and the solutions can also be engineered to automatically fail over to a remote Disaster Recovery site.

In the last couple of years we have seen people asking "How long will it be before everything is in the Cloud?", but the reality is that this will never be the case in Life Sciences. There will always be Life Sciences companies who need or want some infrastructure on their own sites (because of network latency or data integrity concerns) and the reality is that we are moving towards a mixed-model Cloud environment.
We will see a mixture of non-clouded Infrastructure, Platforms and Software, and various Cloud models, including On-Premise & Off-Premise and Public & Private Clouds.

The coming of age of safe, secure multi-tenanted Software-as-a-Service and the availability of solutions such as 'Cloud Anywhere' means that Life Sciences companies now have the ability to mix'n'match their Cloud environments to meet their specific business needs - and address their regulatory compliance requirements.

It may not seem like it now, but in the next few years we will see these solutions move from leading-edge to mainstream and we will wonder what all the fuss about Cloud was for.

Thursday, April 5, 2012

Social Networking in Health Sciences: Is Medical Devices Missing a Trick?

In yesterday's webcast "Social Networking in Health Sciences: Is Medical Devices Missing a Trick?", we shared a good volume of information with our viewers, but unfortunately we didn't have time to address everyone's questions.

We've therefore reproduced the questions below, along with our answers.

Q. How precisely can advertising be targeted on a platform such as Facebook?

Because platforms such as Facebook know a great deal about their users - based upon information provided by the users themselves (e.g. hobbies, likes, job, education, home location etc.) - it is possible for Facebook to 'target' advertising very precisely. When advertising on Facebook it is possible to narrow down the target audience by a number of criteria, such as location, language, education level and work role, age and gender, and even likes, interests and connection status to other users.

Note however that Facebook's own advertising guidelines prohibit the advertising of pharmaceutical products and on-line pharmacies (but not medical devices!).

This can be useful when attempting to restrict advertising to appropriate target audiences and geographies, but it certainly isn't foolproof.

Q. Is it legal to set up patient social networks?

It depends upon the geography and the applicable regulations, but in many cases it is acceptable to set up patient network groups on-line. However, when a medical device company effectively pays for or owns the media, careful control over the content must be maintained (as discussed in the webcast). Also as discussed in the webcast, it is much more difficult for medical device companies to control earned media which is posted in a group, forum or page which they have set up.
Another allowable option in some geographies may be for medical device companies to financially support patient groups, which then set up their own social networks.
In all cases it is good practice to provide transparency, even where this is not legally required.

Q. How effective is social media for marketing purposes?

There is no doubt that using social networks can be effective, but this has to be part of a carefully planned multi-channel marketing strategy, coordinated through the use of a relationship management platform or system.
Our own experience (as a consultancy rather than as a CRO) is that earned social media can be effective in terms of initiating relationships that can then lead to business. As a business we generally strive to achieve a balance between delivering value in our on-line interactions and promoting our business. Medical device companies are best advised to build similar 'trusted advisor' status through the use of earned media, but it’s important not to underestimate the amount of time required to develop and maintain such trust.

Thanks to those of you who attended yesterday's event and we hope that this answers your questions. The webcast recording will be on-line for those of you who missed it, and remember that you can always get in touch via e-mail or by using the 'Ask An Expert' form on the Business & Decision Life Sciences website.

Friday, March 30, 2012

Computer System Validation Policy on Software-as-a-Service (SaaS)


In a recent LinkedIn Group discussion (Computerized Systems Validation Group: "Validation of Cloud"), the topic of Software-as-a-Service (SaaS) was widely discussed, along with the need to identify appropriate controls in Computer System Validation (CSV) policies.

The reality is that relatively few compliant, validated SaaS solutions are out there, and relatively few Life Sciences companies have CSV policies that address this. 

However, there are a few CSV policies that I’ve worked on that address this and although client confidentiality means that I can’t share the documents, I did volunteer to publish some content on what could be included in a CSV policy to address SaaS.

Based on the assumption that any CSV policy leveraging a risk-based approach needs to provide a flexible framework which is instantiated on a project-specific basis in the Validation (Master) Plan, I've provided some notes below (in italics) which may be useful in providing policy guidance. These would need to be incorporated in a CSV Policy using appropriate language (some Regulated Companies' CSV Policies are more prescriptive than others and the language should reflect this).

"When the use of Software-as-a-Service (SaaS) is considered, additional risks should be identified and accounted for in the risk assessment and in the development of the Validation Plan computer system validation approach. These are in addition to the issues that need to be considered with any third party service provider (e.g. general hosting and managed services). These include:
  • How much control the Regulated Company has over the configuration of the application, to meet their specific regulatory or business needs (by definition, SaaS applications provide the Regulated Company (Consumer) with little or no control over the application configuration)
o   How does the Provider communicate application changes to the Regulated Company, where the Regulated Company has no direct control of the application?
o   What if Provider controlled changes mean that the application no longer complies with regulatory requirements?
  • The ability/willingness (or otherwise) of the Provider to support compliance audits
  • As part of the validation process, whether or not the Regulated Company can effectively test or otherwise verify that their regulatory requirements have been fulfilled
o   Does the Provider provide a separate Test/QA/Validation Instance?
o   Whether it is practical to test in the Production instance prior to Production use (can such test records be clearly differentiated from production records, by time or unique identification)
o   Can the functioning of the SaaS application be verified against User Requirements as part of the vendor/package selection process? (prior to contract - applicable to higher risk applications)
o   Can the functioning of the SaaS application be verified against User Requirements once in production use? (after the contract - may be acceptable for lower risk applications)
  • Whether or not the Provider applies applications changes directly to the Production instance, or whether they are tested in a separate Test/QA Instance
  • Security and data integrity risks associated with the use of a multi-tenanted SaaS application (i.e. one that is also used by other companies), including
o   Whether or not different companies' data is contained in the same database, or the same database tables
o   The security controls that are implemented within the SaaS application and/or database, to ensure that companies cannot read/write/delete other companies' data
  • Where appropriate, whether or not copies of only the Regulated Company's data can be provided to regulatory authorities, in accordance with regulatory requirements (e.g. 21CFR Part 11)
  • Where appropriate, whether or not the Regulated Company's data can be archived
  • Whether it is likely that the SaaS application will be de-clouded (brought in-house or moved to another Provider)
o   Can the Regulated Company's data be extracted from the SaaS application?
o   Can the Regulated Company's data be deleted in the original SaaS application?

If these issues cannot be adequately addressed (and risks mitigated), alternative options may be considered. These may include:
  • Acquiring similar software from an acceptable SaaS Provider,
  • Provisioning the same software as a Private Cloud, single tenancy application (if allowed by the Provider)
  • Managing a similar application (under the direct control of the Regulated Company), deployed on a Platform-as-a-Service (PaaS)"
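
As a purely illustrative aid (not part of any regulatory expectation), the policy questions above could be abbreviated into a simple project-level checklist, with unresolved items feeding the risk assessment. The sketch below uses invented, shortened wording and a minimal structure.

```python
# A purely illustrative way of carrying the policy questions above into a
# project-level SaaS assessment: an abbreviated checklist completed in the
# Validation Plan, with unresolved items feeding the risk assessment.

saas_checklist = [
    "Degree of Regulated Company control over application configuration",
    "How the Provider communicates application changes",
    "Provider willingness to support compliance audits",
    "Availability of a separate Test/QA/Validation instance",
    "Segregation of multi-tenant data (database/table level controls)",
    "Ability to provide copies of the Regulated Company's data to regulators",
    "Ability to archive the Regulated Company's data",
    "Exit strategy: data extraction and deletion if de-clouded",
]

def open_items(responses):
    """Return the checklist items still unresolved (to feed the risk assessment)."""
    return [item for item in saas_checklist if not responses.get(item, False)]

# Hypothetical responses gathered during supplier assessment
responses = {item: True for item in saas_checklist}
responses["Availability of a separate Test/QA/Validation instance"] = False
print(open_items(responses))
```
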
Hopefully these ideas will help people to develop their approach to SaaS, but CSV Policies should also address the use of PaaS and IaaS within the broader context of outsourcing.

Wednesday, March 14, 2012

Successful and Compliant ERP Projects

Unfortunately we ran out of time in yesterday's webcast “Secrets to Success - Plan and Implement Compliant ERP Projects”.

That was partly my fault because I was late dialing in. (Apparently, Microsoft Exchange/Outlook still doesn't automatically recognize that the US and Europe change to daylight savings on different weekends - doesn't anybody validate this software?). My apologies for that, and for the fact that we ran out of time to answer all of your questions as fully as we would have liked.

During the webcast we discussed how small to medium-sized life sciences companies can plan for the successful implementation of their ERP systems. We looked at how to align project planning and validation planning activities, we reviewed the typical project activities that are the responsibility of the regulated company and we looked at the importance of assigning the right people to the project. Below are the questions we didn't have time to answer fully and our answers - we hope that you find them useful.

Q. How does implementing ERP in pharmaceuticals vary from other industries?

A. The main differences are that in many cases the system requirements represent mandatory regulatory requirements which have to be fulfilled. There is no option to defer these to a later release and so many of the software vendor's ‘accelerated’ implementations using out-of-the-box software configurations cannot be used. There is also the fact that requirements, design specifications and testing all need to be formally documented and there may also be the issue of electronic records and electronic signatures to consider.

Although it is possible to implement an ERP system in a small to medium business within 8-12 weeks, the above factors make this virtually impossible in the life sciences industry. The fastest that we have ever implemented an ERP system in life sciences has been 16 weeks, and for a small to medium business 6 to 8 months is more typical.

Q. Does the increased focus on formal project management have any benefits?

A. The focus on formal project management controls means that time and cost overruns are usually better controlled. The focus on the formal definition and documentation of requirements also means that systems are much more likely to meet the real requirements of the real users. While the time and cost of implementing in life sciences is greater than some other industries, the fact that the system more completely fulfills the user requirements generally provides a better return on investment.

As an industry we must do a better job of demonstrating return on investment in order to justify the increased time and cost when compared to other industries. Where case studies are available they clearly show that a formally defined project management process, documented requirements specifications and tests, and the need to demonstrably confirm that user requirements have been fulfilled deliver an ERP system that is fit for purpose, better meets the needs of users and provides a better return on investment over the life of the system.

Q. How realistic are regulated companies in their expectations when looking to implement ERP or CRM?

A. Clients can certainly be very demanding and their expectations can be difficult to manage, especially when those expectations are informed by software vendors and system integrators who don’t really understand the life sciences industry.

As a company constantly engaged in implementing ERP and CRM systems, but also competing to win such projects, we often see small to medium life sciences companies with unrealistic expectations with respect to the real project budget, how long it will take to implement the system and the level of commitment their people will need to devote to the project. This is natural where the procurement process doesn't really understand the need for regulatory compliance, undervalues the benefits of formal validation and focuses mainly on comparing costs and implementation timescales.

On the validation side of business we have worked with a number of system integrators who are inexperienced in the life sciences industry and as a result we’ve had to help a lot of regulated companies bridge the gap between their initial expectations and what is really required for a successful and compliant project.

The reality is that it takes a minimum amount of time and effort to successfully implement a compliant ERP or CRM system. Small to medium life sciences companies would be better served by starting projects with realistic expectations and thereby avoiding having to go back to stakeholders to ask for additional funding and to explain why the project is “late”.

Q. Where do most ERP implementations fail?

A. Failure is a relative term. Most projects go live and deliver acceptable return on investment but are often seen as challenging projects or having failed because of initial unrealistic expectations with respect to the level of effort required of the regulated company. As discussed during the webcast, it is important that regulated companies really understand the activities that they will be responsible for and the deliverables that they will have to produce.

These need to be resourced appropriately; funding needs to be available and realistic timescales need to be set. If realistic timescales were put in front of stakeholders at the beginning of a project far fewer projects would be considered to have ‘failed’. Key to this is involving experienced resources in the concept phase of the system life cycle and during the early stages of the project planning.

Such resources need to have experience of implementing and validating ERP (or CRM) systems in the life sciences industry and the experience and knowledge that they bring to the table is invaluable.

As ever, if anybody has any follow-up questions from the webcast they can comment on the blog or get in touch through our usual e-mail address life.sciences@businessdecision.com. If you missed the webcast and would still like to view it, the recording is available here.