Tuesday, October 23, 2012

Using non-ERES Compliant Business Intelligence Tools in Support of GxP Decision Making

Another interesting question here - this time posted in the LinkedIn 21 CFR Part 11 discussion group - on the use of business intelligence tools to support GxP decision making. However, the business intelligence tools are not 21 CFR Part 11 compliant!

Q. We are implementing a 'system' which comprises replication software from Oracle and a destination replicated Oracle database. The purpose of this 'system' is to run Oracle and Cognos reports from the replicated database instead of the highly transactional source database. The system will be accessed only by the DBA.

The replication software is command-line based, which is to say it does not have a GUI. From this software's command-prompt interface we can define (enter commands for) the source and target databases, define the number of processes to run, set filters for which data are to be replicated, and set the frequency of replication.

We did an assessment and found that Part 11 is applicable. The problem is that we cannot satisfy all of the Part 11 requirements.

Although we deal with e-records (we store the data from the source system, including the setup/configuration of the replication process), how do we justify that e-signatures, audit trails and password aging/restrictions are not applicable, given that they are not supported by the replication software?


A. We've just finished doing exactly this for another project where we set up a reporting database to reduce the load on the transactional system. Some of the reports from the data warehouse were GxP significant.
 
We also had similar issues consolidating medical device traceability data for the device history record from signed records in SAP to the SAP Business Warehouse (BW) where we lost the secure relationship between the signed records in the transactional database and the copy in BW.
 
It's all about clearly identifying what records and data are used for what purposes, documenting the rationale and design, developing or updating appropriate SOPs and training your users and DBAs accordingly.
 
The first thing to do is to clearly differentiate between the records that you will ultimately rely on for regulatory purposes and the data that you use for reporting. We maintained (and clearly documented) that the signed records in the transactional system would always be the ultimate source of data for regulatory decision-making (e.g. product recall, CAPA investigations, etc.). Where applicable these were signed, could be verified, and were still Part 11 compliant.
 
Our rationale was that the data warehouse (and the associated GxP reports) acted as a grand 'indexing system' which allowed us to find the key regulatory records much faster (which has to be good for patient safety). We did not use these reports for making regulatory critical decisions, but we did use them to find the key records in the transactional database more quickly, and those records could then be relied upon for regulatory decision-making. Under that rationale the records and signatures which were subject to Part 11 remained in the transactional database.
 
We made sure that SOPs for key processes such as product recall and CAPA investigation were updated to reflect the appropriate use of the reports and the signed transactional records. We were able to demonstrate that using the data warehouse reports was two to three times faster than running queries and filters in the transactional system. In our testing we were also able to demonstrate that we found the correct record more than 99.9% of the time; the only discrepancies (less than 0.1% of cases) arose where something had subsequently been changed in the transactional database, and even then the discrepancy was obvious and would not lead to erroneous decision-making.
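For anyone wanting to run a similar check, the sketch below shows one way such a reconciliation test could be automated: compare a per-row checksum between the transactional source and the reporting replica and report the match rate. This is a minimal, hypothetical illustration only - the connection details, table and column names are invented, and it assumes the cx_Oracle Python driver.

# Hypothetical reconciliation check between a transactional source and a
# reporting replica. Table, column and connection details are invented.
import cx_Oracle

# ORA_HASH gives a cheap per-row checksum over the fields we care about
KEY_QUERY = """
    SELECT record_id,
           ORA_HASH(status || '|' || TO_CHAR(last_updated, 'YYYYMMDDHH24MISS'))
      FROM device_history
"""

def fetch_hashes(dsn: str, user: str, password: str) -> dict:
    """Return {record_id: checksum} for every row of the table."""
    with cx_Oracle.connect(user=user, password=password, dsn=dsn) as conn:
        cursor = conn.cursor()
        cursor.execute(KEY_QUERY)
        return dict(cursor.fetchall())

source = fetch_hashes("txn-db", "recon_user", "***")
replica = fetch_hashes("rpt-db", "recon_user", "***")

missing = set(source) - set(replica)                  # not yet replicated
changed = {k for k in source.keys() & replica.keys()  # changed at source
           if source[k] != replica[k]}                # since replication
match_rate = 1 - (len(missing) + len(changed)) / len(source)
print(f"Match rate: {match_rate:.4%} ({len(missing)} missing, {len(changed)} changed)")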
 
However, that's not to say that the GxP significant reports were not important. We still validated the software which replicated the databases, we qualified the underlying IT infrastructure and we validated the GxP significant reports.
 
We essentially had three categories of report:
  • Non-GxP reports, which were not validated (this was the majority of reports)
  • GxP significant reports, which were not based upon signed copies of Part 11 records, but which were validated
  • GxP significant reports, which were based upon signed copies of Part 11 records held in the transactional system. These were validated and were clearly annotated to the effect that they should not be relied upon for regulatory decision-making, and they also gave a reference to the signed record in the transactional database
 
On both of these projects we had much better tools for replicating the data. Since you're using Oracle databases we would recommend that you create (and maintain, under change control/configuration management) PL/SQL programs to replicate the databases. This will significantly reduce the likelihood of human error, allow you to replicate the databases on a scheduled basis and make it much easier to validate the replication process.
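By way of illustration only, the sketch below shows the shape such a parameterised replication step might take. It is a minimal, hypothetical example (the table names, filter and connections are invented, and it is scripted in Python with the cx_Oracle driver rather than as the PL/SQL program recommended above); a production job would typically MERGE into the target, or refresh a materialised view, rather than blindly insert.

# Hypothetical, parameterised replication step. In practice this logic
# would live in a PL/SQL program held under change control; everything
# here (tables, filter, connections) is invented for illustration.
import cx_Oracle

REPLICATION_SETS = [
    # (source table, target table, filter predicate)
    ("device_history", "rpt_device_history", "last_updated >= SYSDATE - 1"),
]

def replicate(src_conn, tgt_conn, source, target, predicate):
    """Copy rows matching the predicate from source to target in batches."""
    src, tgt = src_conn.cursor(), tgt_conn.cursor()
    src.execute(f"SELECT * FROM {source} WHERE {predicate}")
    binds = ", ".join(f":{i + 1}" for i in range(len(src.description)))
    while rows := src.fetchmany(1000):
        # NB: naive INSERT; a real job would MERGE to stay idempotent
        tgt.executemany(f"INSERT INTO {target} VALUES ({binds})", rows)
    tgt_conn.commit()

with cx_Oracle.connect(user="repl", password="***", dsn="txn-db") as src_conn, \
     cx_Oracle.connect(user="repl", password="***", dsn="rpt-db") as tgt_conn:
    for source, target, predicate in REPLICATION_SETS:
        replicate(src_conn, tgt_conn, source, target, predicate)

Keeping the table list and filters as controlled data (rather than ad hoc commands typed at a prompt) is what makes the process repeatable, schedulable and straightforward to validate.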
 
For more background (discussion paper and webcast) on the use of validated Business Intelligence tools for GxP decision making, see our website at http://www.businessdecision-lifesciences.com/2297-operational-analytics-in-life-sciences.htm.




Monday, October 22, 2012

Validating Clouded Enterprise Systems - Your Questions Answered

Thank you once again to those of you who attended the latest stop on our virtual book tour, which looked at the validation of enterprise systems in the Cloud. This is in relation to chapter 17 of "Validating Enterprise Systems: A Practical Guide".

Unfortunately we had a few technical gremlins last Wednesday (both David Hawley and myself independently lost Internet access at our end just before the webcast was due to start) and so the event was postponed until Friday. Our apologies again for that, but we nevertheless received quite a number of registration questions which were answered during the event (you can find a recording of the webcast and copies of the slides here).

We did manage to get through the questions that were asked live during the webcast but we received one by e-mail just after the event which we thought we would answer here in the blog.

Q. "What elements should go into a Master VP for Clouded application / platforms?

A. It depends on the context in which the phrase Master Validation Plan is being used. In some organisations a Master Validation Plan is used to define the approach to validating computerised systems on an individual site, in an individual business unit or, as will be the case here, for applications in the Cloud.

In other organisations a Master Validation Plan is used to define the common validation approach which is applied to an enterprise system which is being rolled out in multiple phases to multiple sites (each phase of the roll-out would typically have a separate Validation Plan defining what is different about that specific phase of the roll-out).

Logically, if we are implementing a Clouded enterprise application it could (and often would) be made available to all locations at virtually the same time. This is because there is limited configuration flexibility with a Software-as-a-Service solution and different sites have limited opportunities for significant functional differentiation. In this context it is unlikely that the second use of a Master Validation Plan would be particularly useful, so we'll answer the question in the first context.

Where a Master Validation Plan is being used to define the approach to validating Clouded enterprise systems, it needs to define the minimum requirements for validating Clouded applications and provide a framework which:
  • Recognises the various cloud computing models (i.e. Infrastructure-as-a-Service, Platform-as-a-Service and Software-as-a-Service; Private Cloud, Community Cloud, Public Cloud and Hybrid Cloud; On-Premise and Off-Premise)
  • Categorises platforms and applications by relative risk and identifies which cloud models are acceptable for each category of platform/application, which models are unacceptable and which ones may be acceptable with further risk controls being put in place
  • Identifies opportunities for leveraging provider (supplier) activities in support of the regulated company's validation (per GAMP 5/ASTM E2500)
  • Stresses the importance of rigorous provider (supplier) assessments, including thorough pre-contract and surveillance audits
  • Highlights the need to include additional risk scenarios as part of a defined risk management process (this should include risks which are specific to the Essential Characteristics of Cloud Computing as well as general risks with the outsourcing of IT services)
  • Lists additional risk scenarios which may need to be considered, depending upon the Cloud Computing model being looked at (these are discussed in our various webcasts)
  • Identifies alternative approaches to validating Clouded enterprise systems. This would most usefully identify how the use of Cloud computing often prevents traditional approaches to computer systems validation from being followed, and identify alternative approaches to verifying that the Software-as-a-Service application fulfils the regulated company's requirements

With respect to the last point our webcast "Compliant Cloud Computing - Applications and Software as a Service" discusses issues with the validation of Software-as-a-Service applications using traditional approaches and outlines alternative verification techniques that can be used.

Whether it is in a Master Validation Plan or some form of Cloud strategy document, it is important for all regulated companies to start to think about how they will validate Clouded applications. This is clearly a topic that is not going to go away and is something that all life sciences companies will need to address.

You may also be interested to know that on 15th November 2012 we're going to be looking more closely at the current state of the Cloud computing market, specifically with respect to meeting the needs of regulated companies in the life sciences industry. We'll be talking about where the market has matured and where appropriate providers can be leveraged - and where it hasn't yet matured. Registration is, as ever, free of charge and you can register for the event at the Business & Decision Life Sciences website.

We look forward to hearing from you on the last stage of our virtual book tour when we'll be looking at the retrospective validation of enterprise systems, which we know is a topic of great interest to many of our clients in Asia, Eastern Europe, the Middle East and Africa and in Latin and South America.

Wednesday, October 17, 2012

Part 11 and "Disappearing" Signature Manifestations

An interesting question appeared on-line today which we thought deserved airing with a wider audience via the blog.

Q. When implementing an electronic document management system, is it acceptable to make the author/approver names and dates disappear? Is this still in compliance with 21 CFR Part 11.50, Signature Manifestations?

A. Let's remind ourselves of the relevant rule:

§ 11.50 Signature manifestations.
(a) Signed electronic records shall contain information associated with the signing that clearly indicates all of the following:
(1) The printed name of the signer;
(2) The date and time when the signature was executed; and
(3) The meaning (such as review, approval, responsibility, or authorship) associated with the signature.
(b) The items identified in paragraphs (a)(1), (a)(2), and (a)(3) of this section shall be subject to the same controls as for electronic records and shall be included as part of any human readable form of the electronic record (such as electronic display or printout).


The first question to ask ourselves is one of scope. Not all of the documents stored in the EDMS will fall within the scope of 21 CFR Part 11. In fact, this is a notoriously difficult area in which to interpret the predicate rules: some rules clearly state that documents need to be signed, while in other areas this must be inferred from the use of words like "authorised" or "approved".

The first thing to do is therefore to clearly decide which categories of document fall within the scope of 21 CFR Part 11 or (to be on the safe side) to decide that the approval of all documents will meet the technical requirements of Part 11.

Looking at the specific question of signature manifestations, § 11.50(b) clearly states that the printed name of the signer, the date and time of signing and the meaning of the signature must be included in any printout or electronic display.

Making the names and dates "disappear" in some way clearly contravenes the requirements of § 11.50(b) if these components are not readable in either the on-screen display or the hardcopy printout of the document. If this were to be implemented we would consider the solution to be non-compliant with Part 11, at least with respect to the documents that fall within the scope of Part 11.

Monday, October 8, 2012

PLCs and GAMP Categorization

Here's another interesting question and answer that we thought we'd share on-line with a wider audience:

Q. We are developing a standard template to help in identifying and classifying PLCs into GAMP 5 categories. Could you please guide us in developing the right tool for this?

A. The 'questions' to include are relatively simple and relate to the GAMP definitions of the categories, specifically in the context of PLCs - for example:
- Is the PLC (or parts of the software) used 'as is' with no modification or with simple changes to parameters e.g. run time, setpoint etc (typically with a machine or piece of equipment)? [Cat 3]
- Is the PLC (or parts of the software) reconfigured using standard graphical programming languages e.g. ladder logic, function blocks? [Cat 4]
- Is the PLC (or parts of the software) programmed e.g. are you writing code to achieve a function or operation that is not a standard feature of the PLC? [Cat 5]

You should also remember that it is quite likely that PLCs will contain a combination of Cat 3, 4 and 5 software and that your validation approach should reflect this.
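A minimal sketch of how those questions might be encoded in an assessment template is shown below (in Python for brevity; the flag names and the example software elements are assumptions for illustration). It also reflects the point that a single PLC usually has to be assessed element by element:

# Hypothetical helper for a PLC assessment template: encodes the three
# categorisation questions above. Flag names and examples are invented.
def gamp_category(custom_code: bool, reconfigured: bool) -> int:
    """Return the suggested GAMP software category.

    custom_code  -- bespoke code written for non-standard functions (Cat 5)
    reconfigured -- standard graphical languages used, e.g. ladder logic (Cat 4)
    neither      -- used 'as is' or with simple parameter changes only (Cat 3)
    """
    if custom_code:
        return 5
    if reconfigured:
        return 4
    return 3

# A single PLC will often contain a mixture, so assess each element:
elements = {
    "base firmware": dict(custom_code=False, reconfigured=False),
    "ladder logic program": dict(custom_code=False, reconfigured=True),
    "bespoke function block": dict(custom_code=True, reconfigured=False),
}
for name, answers in elements.items():
    print(f"{name}: GAMP Category {gamp_category(**answers)}")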

However, the ongoing problem you will have is two-fold:

1. PLC systems have a great deal of variation in the way they are parametrized/configured/coded, and sometimes the lines are blurred, e.g. some 'configuration' can be as complex and error prone (in terms of risk likelihood) as traditional coding. You can't possibly expect to come up with questions and guidance on every type of PLC out there.

2. You need to educate people on what all of these terms mean, in the context of the PLC you are looking at. This takes time and experience.

The only time I have seen this done successfully is when a company standardises on just one or two manufacturers' PLCs, so that it can provide a checklist/guidance based on each manufacturer's specific software development techniques.

When developing the validation approach you will also have to take into account the work that may already have been done by the supplier if the PLC is part of an embedded system.

If you are implementing a lot of PLC systems it will be worth developing an SOP or guidance document on the topic. You will however need to develop some subject matter expertise to be able to guide and support your engineers, because this is a specialist area and needs experience to do cost-effectively.

Friday, October 5, 2012

Reflecting on 21 years of GAMP

Last Tuesday saw the 21st anniversary celebrations of the GAMP Forum, now of course part of ISPE. It was a great opportunity to reflect on the history of GAMP and to catch up with some of the original founder members.

Although much of the talk was backward looking, rehashing events that led to the formation and subsequent growth of the GAMP Forum, Randy Perez (current chairman of ISPE) did reflect upon the role that GAMP has played and continues to play within ISPE, e.g. the GAMP Community of Practice has the best-selling publications, the best-attended conferences, etc.

No one stopped at the time to really comment on why that might be. Having thought about it, it occurs to me that this simply reflects the increasing importance of computerised systems, not only in the pharmaceutical industry but also in everyday life. The pace of technology change is unending and it is perfectly understandable why GAMP came into being and why it continues to look into topical issues such as cloud computing, mobile platforms etc.

As long as this technology change continues apace GAMP will always have a role to play in applying well founded good practices to new technologies and new applications. There is however a significant challenge that ISPE/GAMP faces with the pace of technology change.

Looking back over recent years it appears that new technologies appear and are adopted by leading edge regulated companies faster than organisations such as ISPE/GAMP are currently able to respond. This is perfectly natural, because the strength of organisations like ISPE and GAMP is that they are consensus driven and volunteer led. The time taken to achieve consensus and the limited time available from volunteers mean that it takes months or years to discuss new technologies, understand the implications, identify risks and how they can be mitigated, and then publish consensual good practice.

However, we're all aware that other technologies such as blogging, websites and social networking allow interaction between industry professionals in much shorter timescales. In many cases we are starting to see individuals and commercial organisations provide pragmatic and acceptable guidance well ahead of organisations such as GAMP/ISPE. The other part of the challenge is that although ISPE exists to serve the needs of the pharmaceutical community, the move towards greater outsourcing means that it is very often suppliers who are the subject matter experts with new technologies and consultants who have a broader experience in how new challenges are being tackled across the industry.

The challenge for ISPE and the GAMP Community of Practice is to get the balance right between achieving consensual good practices which regulatory agencies can buy in to and providing guidance in a timely manner. This will require more widespread use of some traditional channels such as ISPE Pharmaceutical Engineering and the greater use of Internet channels such as webcasts, web publishing and social networking. This will also mean continuing towards a model where suppliers and consultants provide valuable input but users from regulated companies are the final arbiters of what is acceptable with respect to good practice.

In some cases this will mean identifying a smaller number of thought-leading subject matter experts and asking them to focus on providing pragmatic guidance in shorter timescales. This is certainly the way commercial organisations such as IVT and Concept Heidelberg are working when organising conferences and commissioning articles, and although ISPE/GAMP is a not-for-profit organisation it's important to realise that the lines between not-for-profit and commercial are indistinct in these areas. Another part of the challenge will be to identify appropriate subject matter experts in new technologies who may not be working in the pharmaceutical industry and who may not be part of the existing ISPE/GAMP community.

These challenges can however be overcome and ISPE is certainly moving towards this model, led, as has so often been the case, by the GAMP Community of Practice.

Over the last 21 years GAMP has done an excellent job in providing practical guidance to the industry during what has certainly been the greatest period of technological change the industry has seen. The fact that this has been led by volunteers (of which both I and Business & Decision Life Sciences are proud to be a part) is perhaps one of the most amazing parts of the GAMP story. The fact that this extended community has developed good practices behind which most regulated companies and regulatory agencies now stand is a significant achievement and certainly one to be celebrated.

Happy 21st birthday GAMP - and here's to many more!

Thursday, October 4, 2012

Practical Risk Management - Webcast Follow Up

Thanks once again to those of you who joined us for yesterday's webcast - the second stop on our "virtual book tour" which looked at practical risk management. We had a good number of questions asked as part of the registration process which we handled in yesterday's webcast (you can watch a recording of the webcast and download the slides here) but unfortunately we didn't have time to answer all of your questions that were asked during the session.

As usual, we have taken the time to answer the outstanding questions here on the Life Sciences blog.

Q. Can you say more about regulators who are worried about misused risk assessments?

A. During the webcast we mentioned that a number of inspectors from European and US regulatory agencies had commented that they have concerns about the quality of risk assessments and the resulting validation. These comments have been made during informal discussions and, in one case, at a conference.

Their concern is that the resulting validation is either not broad enough in terms of scope or not rigorous enough in terms of depth and that this has been uncovered during inspection of what they believe to be relatively critical systems. In a couple of cases inspectors have commented that they believe that this is a case of the companies involved using risk assessment as an excuse to reduce the level of effort and resources applied in validating such systems.

We know from their comments that in a number of cases this has led to inspection observations and enforcement actions, and it appears that a number of regulatory inspectors are, in their words, "wise to the trick". As we said in the webcast yesterday, it is important that the scope and rigour of any validation is appropriate to the system and that the risk assessment is used to determine which areas and functions of the system require greater focus. The objective of risk-based validation is not simply to reduce the level of effort and expenditure but to ensure that efforts and resources are applied most appropriately.

Q. How much time and effort can be saved by using the right risk assessment approach?

A. Our experience is that by using a relative risk assessment process rather than a quantitative risk assessment process it is possible to reduce the time and effort spent on assessing risks by between 50% and 75%. We have also studied the outputs of both types of risk assessment process on very similar systems and it is encouraging to note that in many cases both processes provided very similar outputs in terms of the distribution of high, medium and low risk priorities - both in the relative number of risks in each priority grouping and in the functions allocated to each group.

This means that for enterprise systems with lower risk it is possible to reduce the time spent assessing risks by half or three quarters and still come up with results which are sufficiently accurate to support appropriate risk-based validation. This is why it is so important that regulated companies have a variety of risk management processes and tools available to them so they can use the most appropriate and cost-effective approach.

Q. When would you use a quantitative risk assessment approach? For what type of systems?

A. You would typically use a quantitative risk assessment approach where it is necessary to distinguish low, medium and high risk impact amongst a variety of requirements or functions that are all, or mostly, of high GxP significance. In this case a quantitative (numeric) approach allows you to take a more granular view and again focus your verification activities on the requirements or functions which are of the highest risk impact.

Typically these will be systems which are safety critical, and while this approach could be very useful for manufacturing systems, for enterprise systems we see it being used for the most critical systems such as adverse event systems (AES), LIMS used for product release, MES, etc. Even with these systems, quantitative risk assessment can be used on a selective basis for those modules which the initial risk assessment determines to be most critical.
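To make the distinction concrete, here is a minimal, hypothetical sketch of numeric scoring of the kind described above (the 1-3 scales, the multiplication rule and the banding thresholds are illustrative assumptions, not something prescribed by GAMP):

# Hypothetical quantitative risk scoring. The scales, the multiplication
# rule and the High/Medium/Low thresholds are illustrative assumptions.
def risk_priority(severity: int, likelihood: int, detection: int) -> int:
    """Return a numeric risk priority from 1 (lowest) to 27 (highest).

    severity   -- impact of failure: 1 (minor) to 3 (e.g. patient safety)
    likelihood -- probability of failure: 1 (rare) to 3 (frequent)
    detection  -- 1 (easily detected) to 3 (likely to go unnoticed)
    """
    return severity * likelihood * detection

def band(score: int) -> str:
    """Map the numeric score back onto a relative High/Medium/Low band."""
    if score >= 18:
        return "High"
    if score >= 6:
        return "Medium"
    return "Low"

# e.g. an adverse-event intake function: severe impact, occasional failure
# mode, hard to detect downstream -> focus verification effort here
print(band(risk_priority(severity=3, likelihood=2, detection=3)))  # High

The extra granularity (a score of 1 to 27 rather than three broad bands) is what lets you rank requirements or functions within a system that is almost entirely of high GxP significance.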

Q. Who should conduct the risk assessment of an EDMS supporting the whole enterprise?

A. Risk assessments cannot be conducted alone. This was a key point brought out in this week's GAMP UK meeting, where we ran a risk assessment exercise and it was clearly valuable to have a variety of opinions and experience feeding into the process. You need people who understand the requirements, the business processes and the resulting risks to give their expertise with respect to risk impact.

You also need technical subject matter experts from the engineering or IT group who are much more likely to understand the risk likelihood. Both groups can contribute to thinking about risk detectability, either in terms of detecting risks within the system or as part of the normal business process checks.

It is therefore very important to invite the right people with the right breadth and depth of knowledge to any risk assessment exercise and to allow sufficient time for the relevant risk scenarios to be identified and assessed.


Thank you as ever for your interesting questions - we hope you find the answers above useful. Remember that you can join us on 17th October when we'll be looking at the very thorny issue of validating enterprise systems in the Cloud as Software-as-a-Service (registration is free and is open here).