Thursday, November 22, 2012

FDA Draft Guidance “Electronic Source Data in Clinical Investigations”

The revised FDA draft guidance “Electronic Source Data in Clinical Investigations” is aimed at clinical trial sponsors, CROs, data managers, clinical investigators and others involved in the capture, review and archiving of electronic source data in FDA-regulated clinical investigations.

The original January 2011 draft guidance has been updated to clarify a number of points raised with the FDA by industry commentators, and the new draft has been published to collect additional public comments.

It's good to see industry and regulators working to develop guidance on the use of electronic Case Report Forms (eCRFs), recognising that capturing clinical trial data electronically at source significantly reduces the number of transcription errors requiring resolution, does away with unnecessary duplication of data and provides more timely access for data reviewers.

While much of the guidance in the draft will be seen as common sense across much of the industry, it does start to provide a consensus on important issues such as associating authorised data originators with data elements, the scope of 21 CFR Part 11 with respect to the use of such records, and interfaces between medical devices or Electronic Health Records and the eCRF.

No doubt a number of the recommendations in the draft guidance will be of concern to software vendors whose systems do not currently meet the technical recommendations provided. We will therefore surely see a variety of comments from “non-compliant” vendors trying to water down the recommendations until such time as their systems can meet what is already accepted good practice.

One key issue that appears to be missing is the use of default values on eCRFs, which we know has been a concern in a number of systems and clinical trials, i.e. where the investigator has skipped over a field, leaving the data element at its default value (see the sketch below). This is something we have provided feedback on, and we would encourage everybody in the industry to review the new draft guidance and provide comments.
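To make the concern concrete, here is a minimal Python sketch of how an eCRF data model might distinguish a value explicitly entered by the investigator from a value simply left at its default, so that skipped fields can be flagged for a data query rather than silently accepted. The field names and structure are our own illustrative assumptions, not taken from the guidance or from any particular eCRF system.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ECRFField:
    name: str
    default: Optional[str] = None
    value: Optional[str] = None
    explicitly_entered: bool = False  # set only when the investigator edits the field

    def record_entry(self, new_value: str) -> None:
        self.value = new_value
        self.explicitly_entered = True

    def needs_query(self) -> bool:
        # A field never touched and still holding its default should raise a data query
        return (not self.explicitly_entered) and self.value == self.default

# Example: a pre-populated field the investigator skipped over
severity = ECRFField(name="adverse_event_severity", default="mild", value="mild")
print(severity.needs_query())  # True -> flag for review rather than accept silently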

You can view a copy of the new draft guidance at http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM328691.pdf and comment through the usual FDA process at https://www.federalregister.gov/articles/2012/11/20/2012-28198/draft-guidance-for-industry-on-electronic-source-data-in-clinical-investigations-availability

Thursday, November 15, 2012

Penetrating the Cloud

IVT's Conference on Cloud and Virtualization (Dublin, 13-14th November 2012) was everything I'd hoped it would be. After two years of conference sessions simply peering into the Cloud to understand what it is, or sticking a head into the Cloud to see what the risks are, it was good to spend two days looking through the Cloud to see how these risks can be managed and to review some case studies.

It served to endorse our opinion that, while generalist Cloud providers are either not interested in the needs of the Life Sciences industry or are still struggling to understand its requirements, what some people have called the 'Pharma Cloud' (or 'Life Sciences Cloud', and what we define as Compliant Cloud Computing) is here. As we report in one of our latest Perspectives opinion pieces, while specialist providers are relatively few, Infrastructure, Platform and Software as a Service can now be provisioned in a manner that meets the expectations of most regulators.

While it would have been good for an organization such as ISPE to provide such clarity, well done to IVT for organizing events in the US and Europe and giving people a chance to unpack these issues. To be fair to ISPE, many GAMP sessions have looked at the Cloud at country-specific meetings and conferences, but the topic really does need a couple of days to get your head around.

What also emerged was the ability to select the right Cloud model, including On-Premise options, and discussions with a number of delegates confirmed the attractiveness of the Compliant Cloud Anywhere solution (IaaS, installed On-Premise but owned and operated by a specialist Cloud Services provider).

At the end of the IVT event delegates (many of whom were from QA or IT Quality) went home with a much better understanding of what Cloud and Virtualization are and what the risks are. Perhaps more importantly, we also saw some good examples of how to mitigate those risks and the outline of a strategy for moving further into the Cloud without risking regulatory compliance.

As we'll explore in our webcast "State of the Art in Compliant Cloud Computing", relatively few Life Sciences companies have a real Cloud Strategy that also addresses regulatory compliance, and this is quickly becoming a necessity for organizations looking to take advantage of the business benefits that Cloud and Virtualization offer.

As clarity emerges we expect to see things move significantly further into the Cloud in the next 12 months - "watch this space" as they say!

Wednesday, November 7, 2012

Applying Anti-Virus and Automatic Updates

Another question from LinkedIn, which has been popping up quite a few times on-line lately. We therefore thought we'd share the question and answer with a wider audience.

Q. What is the impact of Microsoft patch upgrades on validated computer systems, and how should we consider them?

A. The GAMP IT Infrastructure Good Practice Guide and GAMP 5 have good appendices on patch management, which includes security patches.

These patches (and the updating process) are pretty mature by now and are generally considered to have a low risk likelihood. The impact on validated systems should therefore be low.

In most cases, for low- to medium-risk enterprise systems / IT platforms, organizations rely on automatic updates to protect their systems, because the risk of contracting malware or leaving a security vulnerability open is greater than that of applying an 'untested' patch - the security patches are of course tested by Microsoft and should be fine with most validated systems / applications.

However, for some systems controlling processes that directly impact product quality, another strategy is often applied: place such systems on a segregated (almost isolated), highly protected network domain, disallow automatic updating of patches, and update manually instead.

Placing them on such a protected network limits business flexibility but significantly reduces the likelihood of most malware propagating to such systems, or of malware being able to access such systems to exploit security vulnerabilities. If such systems (e.g. SCADA) are running Microsoft Windows it may well be an older version, and these can be particularly vulnerable to malware, especially if connected to the Internet via anything less than a robust and multi-layered set of security controls. (For licensing reasons I once uninstalled the malware protection from a machine that was being decommissioned, running Windows XP - which is still relatively common in some parts of the corporate world - and even sitting behind a reasonably secure firewall it was exploited in less than two minutes...)

In these cases anti-malware should be installed and Windows updates applied, but applied manually after assessing each patch, i.e. reading the knowledge base articles. Applying a patch that has not been tested by the regulated company with the specific control software may pose a greater risk to the system, and hence to product safety. In these cases the regulated company will test patches in a test environment and patch relatively infrequently by hand, or only to fix known issues.

Key to all of this is a risk-based patching strategy, which should be defined in, for example, a Security Policy and appropriate SOPs. Key considerations are (a simple scoring sketch follows the list):
  • Understanding the risk vulnerability of different platforms e.g. Windows XP vs Windows 7 vs Windows Server etc
  • Understanding the risk vulnerability of different network segments
  • Understanding the risk likelihood of automatically applying updates i.e. the extent of the interaction between the operating system and validated applications
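As a purely illustrative sketch in Python (the factor names, scoring scale and threshold are hypothetical assumptions of ours, not taken from GAMP or any regulation), these considerations could be combined into a simple score that points to either automatic updates or manual, pre-assessed patching:

AUTO_UPDATE_THRESHOLD = 4  # hypothetical cut-off agreed in the patching SOP

def patching_approach(platform_vulnerability: int,
                      segment_exposure: int,
                      os_app_coupling: int) -> str:
    """Each factor is scored 1 (low) to 3 (high) during the risk assessment."""
    score = platform_vulnerability + segment_exposure + os_app_coupling
    if score <= AUTO_UPDATE_THRESHOLD:
        return "automatic updates"
    return "manual, pre-assessed patching on a segregated network"

# Example: an older OS tightly coupled to the control software, on a well-segregated segment
print(patching_approach(platform_vulnerability=3, segment_exposure=1, os_app_coupling=3))

In practice the scoring factors and threshold would be defined and justified in the Security Policy or SOP, but the point is that the decision between automatic and manual patching is made systematically rather than system by system.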

Friday, November 2, 2012

Collaborating on Collaboration - Validating Open Source Software


I'm delighted to hear that "Open Source Software in Life Science Research: Practical Solutions to Common Challenges in the Pharmaceutical Industry and Beyond" has just been published (available from Woodhead Publishing and also on Amazon).
 

The book has an interesting history, having started life as a discussion on LinkedIn on the use of open source software in the Life Sciences industry. After a very illuminating discussion it was suggested that we write a book on the subject, and so Lee Harland of Pfizer and Mark Forster of Syngenta U.K. agreed to take on the editing of the book, which would look at how open source software is used in our industry.

The result is a comprehensive look at the current state of the market, with chapters covering the use of various open source tools and packages for predictive toxicology, mass spectrometry, image processing, data analytics, high throughput genomic data analysis, web-based collaboration and many, many more applications. As well as addressing general issues, the book examines specific tools and applications and is a useful reference for anyone looking for a guide to the kind of software that is out there (many of these applications are quite well 'hidden' on the Internet!)

Without doubt, open source software is widely used in pharmaceutical research and development and is transforming the way the industry works in many ways. Open source software has many advantages - it's free to acquire and, rather than waiting for a software vendor to understand and respond to market requirements, the developer community simply goes ahead and extends functionality in the direction that research and development needs.

My contribution to the book ("Validation and Regulatory Compliance of Free/Open Source Software") sounds a slightly more cautious note, highlighting when open source software may require validation, the challenges of validating open source software and sharing some ideas on how to go about it - including collaboratively!

This can be challenging and time-consuming and does of course have costs associated with it, which is why we see less open source software in GMP areas. However, I have no doubt that the trend to collaboratively develop niche applications will continue to expand, especially with the prevalence of mature software development tools and Platform-as-a-Service development environments.

The process of collaborating to write a book is quite interesting - just like collaborating to develop software, you're not quite sure what you're going to get, some contributors may head off at a tangent, and you're never entirely sure when everything will be delivered. Well done to Lee and Mark for sticking with the process. I think the end result is well worth the wait.

Thursday, November 1, 2012

Retrospective Validation of Enterprise Systems

Yesterday was the fourth and final stop on our virtual book tour, looking at important topics from "Validating Enterprise Systems: A Practical Guide" (published by the Parenteral Drug Association and available via their bookstore).

In yesterday's session (recording and slides available here) we looked at the topic of retrospectively validating enterprise systems, including why it is important to validate systems that have not previously been validated and the various reasons that lead to retrospective validation.

As we discussed yesterday, some of these reasons are understandable and acceptable to regulatory authorities while other reasons (such as ignorance or cost avoidance) are not.

We got through most of your questions yesterday, but there were a couple we didn't quite get around to answering, so here they are.

Q. How is retrospective validation different to normal validation?

A. Many of the actual activities are identical, but when retrospectively validating a system it is possible to leverage existing documentation (if any) and actual experience with the system.
Where good documentation exists and has been kept up to date, the task of retrospective validation can be relatively quick. However, when little or no documentation exists it can take almost as long to retrospectively validate a system as it did to implement it.

If the system has been maintained and supported using well-documented processes such as help desk, incident management and problem management, it will also be possible to leverage this information and use it to inform detailed risk assessments. With actual empirical data it is possible to make a more accurate assessment of risk likelihood and probability of detection (illustrated in the sketch below).
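As a hedged illustration in Python (the data structure, categories and thresholds are hypothetical, not a prescribed method), incident records might be summarised along these lines to feed the risk assessment:

from collections import Counter

# Hypothetical incident records drawn from help desk / incident management
incidents = [
    {"function": "batch status report", "detected_by": "system alert"},
    {"function": "batch status report", "detected_by": "user"},
    {"function": "label printing",      "detected_by": "system alert"},
]

def likelihood(function: str) -> str:
    count = sum(1 for i in incidents if i["function"] == function)
    return "high" if count >= 2 else "low"  # illustrative threshold

def detectability(function: str) -> str:
    by = Counter(i["detected_by"] for i in incidents if i["function"] == function)
    # If automated alerts caught most incidents, probability of detection is high
    return "high" if by["system alert"] >= by["user"] else "low"

for fn in ("batch status report", "label printing"):
    print(fn, "-> likelihood:", likelihood(fn), "/ detectability:", detectability(fn))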

Where a system was well implemented and has been well maintained, this additional information can justifiably be used to limit the extent of the verification activities required as part of the retrospective validation.

Where this empirical data highlights problems or issues with the system it can also be used to determine which areas of the system require greatest focus as part of the risk-based validation effort.

This can mean that in some cases retrospective validation is more successful than prospective validation in terms of appropriately addressing real risks in a cost-effective manner. However, as we stated in the webcast yesterday, because retrospective validation is not conducted as part of the implementation activities it is generally significantly more expensive than prospective validation that is well integrated with the original implementation effort. For this reason retrospective validation should be avoided wherever possible.

Q. How common is retrospective validation? Do many companies have this problem?

A. There is a good deal of variation in the industry. At one end of the scale there are well-managed companies who are well aware of their regulatory responsibilities and who prospectively validate all appropriate enterprise systems at the time of implementation. At the other end of the scale there are companies who are either ignorant of the need to validate certain enterprise systems, or who choose to ignore the requirement in order to save money.

To some extent this depends on the maturity of the market and of the organisation. The most mature organisations have learned that risk-based prospective validation adds relatively little to the cost of implementation and provides good return on investment in terms of more reliable and robust solutions.

Less mature organisations still do not understand the cost benefit of risk-based validation or are not aware of the need to validate certain enterprise systems. While to some extent the US and Europe are more mature markets in this regard, this is not universally the case. There are still new start-ups and companies with slim profit margins that do not prospectively validate their systems.

In markets which have historically been seen as less mature (e.g. Asia, Africa and Latin America) there is a growing realisation of both the need for and the attractiveness of validation with respect to implementing reliable and robust enterprise systems that help to streamline business operations. While retrospective validation is currently quite common in these markets (as they move to meet the growing expectations of changing local regulations and look to export products to markets where the need for validation is already well established), this will change over time - and quite rapidly if other indicators of change are anything to go by.

This means that while retrospective validation will continue to be required in many markets for a number of years to come (and hence the need for a chapter on the subject in "Validating Enterprise Systems: A Practical Guide"), we predict that within the next 5-10 years it will become the exception, growing rarer in all markets.


Thanks to all of you who have supported the virtual book tour. We do hope you will join us for our forthcoming series of IS compliance and computer system validation webcasts over the next few months (details and registration available here).