Friday, November 19, 2010

Qualifying the Cloud: Fact or Fiction?

There was a great deal of interest in last Wednesday’s webcast “Qualifying the Cloud: Fact or Fiction?”. Cloud Computing is clearly a live issue for many of you, and your responses during the session indicate that there are some real regulatory concerns.

Despite adding 15 minutes to the originally scheduled session, there were still more questions than we could fully answer in the time allowed. As promised, we have provided written answers to your questions below.

Q. In your audit and/or customer experience, have you found that an SLA (service level agreement) indicating demonstrable control over the infrastructure in the cloud is sufficient to meet GxP regulatory compliance, or are auditors still looking for IQ/OQ (installation/operational qualification) checklists against a specific list of requirements?

Different auditors look for different things, but let’s start by saying that it’s pretty rare for regulatory inspectors to spend any time in data centers unless there is due cause. Nowadays this is usually because of issues with an uncontrolled system that are encountered during a broader inspection.

When I am auditing on behalf of Life Sciences clients I will always look for evidence that IQ/OQ (or a combined IOQ) is performed properly. By this I mean not just that the as-built/installed infrastructure matches the configuration management records, but that the as-built/installed infrastructure complies with the design specifications and client requirements.

I once audited a major managed services and hosting provider whose processes for building and installing infrastructure platforms were very good and highly automated – which is good for the rapid elasticity required in Cloud Computing. They literally selected the options off a pick list – how much memory, how many CPUs, what disk capacity etc – and the system was built and installed accordingly in their data center.
However, there was no independent review of the specifications against the client requirements and no independent review of the as-built/installed server platform against the specification. Configuration management records were generated directly from the as-built/installed server and never compared against the specification.

As Neill described in the webcast, if someone had accidentally selected the wrong build option from the pick list (e.g. 20GB of storage instead of 40GB) no-one would have noticed until the Service Level Agreement requirements were unfulfilled. That’s why I will always check that there is some element of design review and build/install verification.

However, I’ll usually review the specification, design, build and verification procedures as part of the initial audit to check that these reviews are part of the defined process. I’ll also spot check some of the IOQ records to check that the verification has been done. During subsequent surveillance audits I’ll also check the IOQ records as part of whatever sampling approach I’m taking (sometimes I’ll follow the end-to-end specification, design, build/installation and verification for a particular platform or sometimes I’ll focus on the IOQ process). I'm not looking to verify the build/installation of the infrastructure myself, but I am looking for evidence that there is a process to do this and that someone has done it.

IOQ needn’t be a particularly onerous process – the use of checklists and standard templates can help accelerate it and, as long as people are appropriately trained, I’m usually prepared to accept a signature to confirm that the review activity was done, e.g. a design specification signed by the reviewer.
As we've found in our own data center, if it's an integral part of the process (especially a semi-automated process) it doesn't have a significant impact on timescales and doesn't detract from the 'rapid elasticity' which is an essential characteristic of Cloud Computing. While issues of capacity are less of a problem in an extensible Cloud, the process of IOQ does help catch other types of error (patches not being applied, two or three steps in an automated install having failed etc).
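
To make the point concrete, the comparison of the as-built/installed platform against its specification is exactly the kind of check that can be semi-automated and attached to the IOQ record. The sketch below is purely illustrative (the file names and build parameters are assumptions, not any particular provider's tooling) and the independent review and sign-off still sit on top of whatever the check reports:

    import json

    def verify_build(spec_file, asbuilt_file):
        """Compare an as-built server record against its design specification.

        Returns a list of discrepancies for the IOQ record; an empty list
        means the build matches the specification."""
        spec = json.load(open(spec_file))        # e.g. {"memory_gb": 16, "cpus": 4, "storage_gb": 40}
        asbuilt = json.load(open(asbuilt_file))  # captured from the built/installed server

        discrepancies = []
        for parameter, expected in spec.items():
            actual = asbuilt.get(parameter)
            if actual != expected:
                discrepancies.append("%s: specified %r, found %r" % (parameter, expected, actual))
        return discrepancies

    if __name__ == "__main__":
        issues = verify_build("design_spec.json", "as_built.json")
        if issues:
            print("IOQ verification FAILED:")
            for issue in issues:
                print(" - " + issue)
        else:
            print("IOQ verification passed: as-built matches specification")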

Q. Your early descriptions were good but how would you explain the concept of the cloud to a traditional Quality person with only a very basic knowledge of Network Architecture? 

I don’t think I would!

Explaining the Cloud to a non-IT specialist in the Quality Unit is always going to be difficult if you take the approach of saying that the Cloud is undefined and the Users don’t need to know what’s going on.
The way to explain it is to say that although the Users in the Regulated Company don’t need to know what the Cloud is, the Regulated Company's IT Department and their IT Quality Group do know what is going on in the Cloud, and that they have checked that it is appropriately controlled.

You then need to demonstrate to your Quality Unit that you do know what’s going on in the Cloud. If it’s a Private Cloud you do this by showing them diagrams and specifications, qualification documents and so on. If it’s a Public Cloud (or an externally hosted Private Cloud) you do this by showing that you have audited the Cloud Provider to check that they have the diagrams and specifications, qualification documents and so on.
It’s all about perception. It’s okay for the Users not to know what’s going on in the Cloud, but someone clearly has to be in control. This needs to be the appropriate subject matter experts (either your own IT people or the Cloud Service Provider's) and your own IT Quality Unit.

If you’re a small company without the resources or technical knowledge to assess your Cloud Providers you can rely on independent consultants for this support, but you have to select the right consultants and demonstrate due diligence in their selection.

Q. In the event of a regulatory audit, when you are using cloud resources (non-private), how does the Cloud Service Provider's responsibility factor in?

Basically, you need your Cloud Service Providers to be on the hook with you and this means clearly defining what support they will provide both in terms of day to day service level requirements and in the event of a regulatory inspection.

Again, let’s emphasize that regulatory authorities rarely look in the data center without due cause and although we are prepared for them to come to our data center in Wayne, we’re not aware of any regulators actually having visited a true third party hosting facility. (However, with the concerns the industry is demonstrating around this issue we think that it’s only a matter of time before they visit someone’s third party data center, somewhere).

The worst case scenario is when, during a regulatory inspection, an Inspector asks the question “Is the system validated?” and you have to say “We don’t know…” That’s when further questions will be asked, the answers to which will eventually lead to your Cloud Service Provider. A failure to have properly assessed your Provider will clearly demonstrate to the regulatory authorities a lack of control.

We know of a LOT of Life Sciences Regulated Companies who have outsourced based solely on cost, with the process driven by IT management and the accountants. They usually accept the Provider's standard service levels, and any involvement from quality/regulatory is often late and sometimes ignored. The result is that ‘compliance’ then becomes an added activity with added costs, the promised cost savings disappear and there is often no right to adequately audit, or to obtain support for regulatory inspections, included in the Service Level Agreement. Our advice is simple:

  • Always conduct a full audit well before signing a contract (at least two days on-site, at least a month before the contract is due for signing).
  • Agree in the contract how and when any quality/compliance/control ‘gaps’ from the audit (and any surveillance audits) will be addressed.
  • Identify in the contract the penalties for not addressing any quality/compliance/control ‘gaps’ (this might include reducing service charges to cover the cost of the Regulated Company's additional quality/compliance/control activities, or even cancellation of the contract – which we know one pharmaceutical company actually did).
  • Include the right for surveillance audits in the contract.
  • Include in the contract the need to support any regulatory inspections (this may never be needed, so it can reasonably be treated as an additional cost if and when it happens).
If the Cloud Service Provider won’t agree to these things we would recommend looking elsewhere or building your own Private Cloud in-house. Regulated Companies should remember that they are accountable for the control of systems they are using and that although they are the customer, the most power you’ll have in the relationship is immediately before the contract is signed.


Finally we’d just like to highlight a comment made by one of the listeners “Audit and assessment of the provider should be seen as the Insurance Certificate!” This is an excellent point and really emphasizes the key issue about Cloud Computing – you need to dig below the surface, get behind all of the hype and really understand the what, who, where and how.

There’s no reason why Cloud Computing shouldn’t be used for regulated purposes as long as Regulated Companies exercise their responsibilities and work with Service Providers who are willing to be open about what they are doing. As far as the Users are concerned, the Cloud is still a Cloud (on-demand, rapid elasticity etc), but the Regulated Company's IT department and IT Quality group need to be in the Cloud with the Service Providers, understanding what’s going on and making sure that things are controlled.

Thank you again to everyone for their interest. The recording is still available online for anyone who didn’t catch the entire session and you can still register for the final webcast in the series via the Business & Decision Life Sciences website.

Thursday, November 18, 2010

FDA to join PIC/S

On 8th and 9th November the Pharmaceutical Inspection Co-operation Scheme (PIC/S) met in Kuala Lumpur, Malaysia. The main news to come out of the meeting was that the US FDA will become a full member of PIC/S as of 1st January 2011. Some commentators may be saying “about time too”, but what does it actually mean for the industry?

The FDA has, rightly or wrongly, been thought of (at least in the USA) as having the highest standards in the world for pharmaceutical products and medical devices. Yet the FDA applied for membership of PIC/S as far back as 2005. So why has it taken until now to join?

It was rumoured that under the previous US Administration there was a reluctance to allow overseas Agencies to review the FDA's own internal quality management system, which is required for new members to join PIC/S.

Another reason, if recent press reports are anything to go by, is that the PIC/S organization looked at the FDA regulations and concluded that they were insufficiently rigorous for admission to PIC/S. Given that the FDA has now been admitted, it is an open question how far the FDA will go to comply with the full intentions of the scheme. Will this mean a tightening of regulations or just reciprocal recognition of inspections?

At the same time as the FDA joins PIC/S, the Ukrainian regulatory agency (SIQCM) is being admitted too. Does this mean that inspections of pharmaceutical plants in Ukraine by the SIQCM will be recognized by the FDA as being of the same rigor as its own inspections? This is as much a political as a regulatory question and it remains to be seen how far the FDA is prepared to go to comply with the PIC/S principles, and whether we can expect any change to regulatory law or guidance in the USA as a result.

[Posted on behalf of David Hawley]

Tuesday, November 9, 2010

Supplier Involvement - Don't Sign the Contract!

GAMP 5 tells us that Regulated Companies should be leveraging Supplier Involvement in order to efficiently take a risk-based approach to validation - but surely that's no more than common sense? Anyone contracting services from a Supplier should be looking to get as much out of their Suppliers as possible.

We constantly hear stories from clients complaining about how poor some of their Suppliers are, and in some cases the complaints are justified - software full of bugs, known problems not being acknowledged (or fixed), failure to provide evidence of compliance during audits, substituting less skilled resources for the consultants you expected and so on.

The software and IT industry is no better or worse than any other - there are good Suppliers and less good Suppliers, but in a market such as Life Sciences the use of a less good Supplier can significantly increase the cost of compliance and in some rare circumstances place the safety of patients at risk.

Two years after the publication of GAMP 5 and five years after the publication of the GAMP "Testing of GxP Systems" Good Practice Guide (which leveraged the draft ASTM E2500) the Life Sciences industry is still:
  • Struggling to understand how to get the best out of Suppliers,
  • Complaining about compliance issues associated with outsourcing.
GAMP has Special Interest Groups looking at both of these issues, but it's not exactly rocket science. The real problem is that too many companies don't really understand what they're buying and focus solely on cost as a differentiator. At a time when IT industry standards provide excellent frameworks for defining requirements and service levels there are still a lot of people in the industry who go out to tender without a good definition of what they want.

This is especially true when it comes to defining quality and compliance requirements, so it's no wonder that Life Sciences companies struggle to leverage their Suppliers when they've failed to define what it is that they really expect.

In many cases quality and compliance people are involved in the selection of suppliers too late in the process to add any real value. In some circumstances there is no viable option other than going with a 'less good' supplier (for instance, when a new Supplier has a really novel application or service that adds real competitive advantage), but in most cases it is possible to identify any gaps and agree how they should be rectified prior to signing a contract.

However, once a contract is signed it's too late to define quality and compliance requirements without Suppliers claiming that these are 'extras' which are outside the contract. While I've heard Regulated Companies make statements like "as a supplier to the Life Sciences industry you must have known that we'd need copies of your test results" (or whatever it is) you can't rely upon those unstated expectations in a court of law.

The result is that achieving the required quality and compliance standards often costs more than anticipated, either because the Supplier charges extra or the Regulated Company picks up the cost of the additional quality oversight. Very few Life Sciences companies have actually achieved the promised cost savings with respect to the outsourcing of IT services, usually because the people driving the contract (purchasing, finance and IT) don't really understand what is required with respect to quality and compliance.

When Business & Decision are engaged in a supplier selection process we tell clients "don't sign the contract until you're happy - that's the best leverage you'll ever have over a Supplier" and it's advice worth repeating here.

At its best, the IT sector is a mature and responsible industry with standards and best practices that can be leveraged to assure that clients' requirements are met. It's just a pity that the Life Sciences industry - which prides itself on being in control of most things it does - can't find a way to effectively leverage good practices like GAMP and standards like ISO 9001 and ISO 20000 to select and get the best from IT Suppliers.

Monday, November 1, 2010

IT Infrastructure Qualification - Your Questions Answered

There were a couple of questions relating to last week's "Pragmatic Best Practices for IT Infrastructure Qualification" webcast that we didn't get around to answering... so here are the questions and the answers.

Q.  What are the qualification strategies to be followed for infrastructure for which the applications it will support are unknown?

There are three basic approaches here:

The first is to qualify infrastructure platforms and components on the assumption of high risk severity of supported applications. This will mean that all infrastructure platforms and components are qualified such that they can support any application with no additional qualification activities being required at a later date. This is the approach taken by Business & Decision for shared platforms and components in our own data center and this provides us with the flexibility needed to meet changing customer requirements.

While this would appear to be ‘overkill’ to some, because qualification is really based on well documented good engineering practice (as per ASTM E2500) there is relatively little additional overhead over and above what any Class A data center should be doing to specify, build/install and test its infrastructure (this was covered in more detail in our webcast "A Lean Approach to Infrastructure Qualification").

The second approach is to qualify specific platforms and components for the risk associated with those applications that are known. This is possible for infrastructure that is dedicated to defined applications e.g. specific servers, storage devices etc. This can reduce the level of documentation in some cases, but it means that whenever a change is made at the applications layer, the risk associated with the infrastructure may need to be revisited. While additional IQ activities would not be required, it may be necessary to conduct additional OQ activities (functional or performance testing) of the infrastructure components prior to (re)validating the applications. This requires an on-going commitment to more rigorous change control impact assessment and can slow down the time taken to make changes. While Business & Decision might consider this approach for some client-specific platforms and components, our clients generally prefer the responsiveness the first approach provides.

The third approach (taken by some very large Regulated Companies in their own data centers) is to qualify different platforms according to different levels of risk e.g. there could be a cluster of servers, virtual machines and network attached storage dedicated to high risk applications, with the same (or a similar) architecture dedicated to medium and low risk applications. This is probably the best solution because it balances flexibility and responsiveness with scalable risk-based qualification, but it can tend to lead to over-capacity and is only really a good solution in large data centers.
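
One simple way of picturing the third approach is as a lookup that maps each platform to a risk tier and each tier to a set of qualification activities. The tiers, platform names and activity lists below are illustrative assumptions only, not a prescribed qualification standard:

    # Illustrative only: tier names, platforms and activity lists are assumptions.
    RISK_TIERS = {
        "high":   ["design review", "IQ", "OQ (functional)", "OQ (performance)"],
        "medium": ["design review", "IQ", "OQ (functional)"],
        "low":    ["IQ"],
    }

    PLATFORM_TIER = {
        "cluster-A (high risk applications)":   "high",
        "cluster-B (medium risk applications)": "medium",
        "cluster-C (low risk applications)":    "low",
    }

    def qualification_activities(platform):
        """Return the qualification activities required for a given platform."""
        return RISK_TIERS[PLATFORM_TIER[platform]]

    print(qualification_activities("cluster-B (medium risk applications)"))
    # ['design review', 'IQ', 'OQ (functional)']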

Q. Each component of our network software is individually validated.  What is the best strategy for qualifying the network itself?

The network isn’t really qualified in its entirety, but is qualified by way of qualifying all of the network platforms and components. This may include some functional testing of platforms or components, but the correct functioning of the network is really verified by validating applications.

The network can essentially be considered to be made up of the non-functional cables, fiber etc, the hardware (which may include firmware) and the software components that are necessary to make it work.

The software components (e.g. software based firewalls, server monitoring software, time synchronization software etc) should be designed, specified (including all configuration parameters), installed, configured and verified. Verification will include installation qualification, verification of configuration parameters and may also include some functional testing (OQ) which will be based on meeting the functional requirements of the software.

Hardware components such as bridges, switches, firewalls etc will be designed, specified, built/installed and verified. Verification will include IQ and, if the ‘hardware’ component includes software (which is often the case), there will again be an element of configuration parameter verification and some functional testing. Business & Decision usually combine the IQ and OQ into a single verification test, simply for efficiency.

For the basic network (backbone cables, fiber, fiber switches and really ‘dumb’ network components with no configurable software element, such as hubs), these will again be designed, specified, built/installed and verified, but verification will be limited to a simple IQ (recording installation details such as cable numbers, serial and model numbers etc). This can of course be done retrospectively.
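
For these 'dumb' components the IQ record can be little more than a structured checklist capturing the installation details listed above, whether it lives in a spreadsheet, a database or on paper. A minimal sketch, with illustrative field names and values:

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    # Illustrative IQ record for a non-configurable network component;
    # the field names are assumptions, not a mandated template.
    @dataclass
    class NetworkComponentIQRecord:
        component_type: str           # e.g. "fiber switch", "backbone cable"
        model_number: str
        serial_number: str            # or cable number for cabling
        location: str                 # rack / patch panel / building
        installed_by: str
        verified_by: str              # independent reviewer
        installation_date: date
        comments: Optional[str] = None

    record = NetworkComponentIQRecord(
        component_type="backbone cable",
        model_number="CAT6A-0042",
        serial_number="CBL-00137",
        location="Data center, rack 12 to patch panel 3",
        installed_by="J. Smith",
        verified_by="A. Jones",
        installation_date=date(2010, 11, 1),
    )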

All of the above can be scaled, based upon risk as discussed in the answer above.

Remember, if you have a question that you'd like us to answer, you can contact us on validation@businessdecision.com or you can submit your questions via the 'Ask An Expert' page on our Life Sciences website.

Thursday, October 28, 2010

Computer Systems in China - to Validate or Not?

At today's Computer System Compliance session at the ISPE-CCPIE conference in Beijing we considered the new draft SFDA GMP regulations and the requirements (or not) to validate computer systems.

As blogged yesterday, the Chinese State FDA (SFDA) are committed to improving compliance, but the current draft of the new Chinese GMP regulations is ambiguous with respect to computer systems validation.

While certain Articles and Annexes imply a need to validate, calibrate etc certain control and monitoring systems, there is no equivalent to US 21 CFR 211.68 or PIC/S / EU Annex 11, which state a clear requirement to validate computer systems.

Some would claim that this is deliberate ambiguity on the part of SFDA and that it suits them to provide some 'wriggle room' for Chinese manufacturers, but it is more likely that computerized systems have been of relatively low priority. Re-writing and updating the GMP regulations is a significant undertaking and it is only reasonable that focus is given to basic GMP, especially when China is only just catching up with more developed countries with respect to the use of computer systems and there is no history of significant risk to patients resulting from a failure to validate a computer system.

However, this ambiguity does not meet the stated aim of SFDA, which is to generally align national GMP regulations with World Health Organization (WHO) guidelines. The WHO guidelines (WHO Technical Report Series 937, 2006: WHO Expert Committee on Specifications for Pharmaceutical Preparations, Appendix 5) clearly include a requirement to validate computer systems, and this does appear to be missing from the current draft of the SFDA regulations.

There is also a view that SFDA wants to limit the costs it is imposing on local manufacturers at a time when the Chinese government is looking to reduce the cost of drugs and devices while at the same time looking to provide healthcare to 900 million people.

However, if SFDA is serious about building a risk-based approach into the new GMP regulations, it is perfectly feasible to include a clear requirement to validate computer systems while allowing a cost effective, risk-based approach that scales the validation effort according to risk.

Unless this is included in the new regulations Chinese companies face the prospect of a two tier approach to computer system validation depending on whether products are intended for the domestic or export market. This would be confusing, limit flexible operations and potentially cause problems where Enterprise Systems support domestic and export operations.

Let's hope that SFDA take on-board the need to align with international regulations in this regard and revise the current draft regulations to provide the clarity that the Chinese Life Sciences industry is looking for with respect to computer system validation.

Wednesday, October 27, 2010

Regulatory Changes In - and For - China

There were some interesting presentations in this morning's keynote sessions at the ISPE-CCPIE conference in Beijing.

The Chinese State FDA presented a brief history of the Chinese GMP regulations, comparing these to other international regulations (e.g. WHO) and although they provided an outline of the new Chinese GMP regulations there was no commitment in terms of a date by which these will be made effective.

Cynics at the conference suggested after the session that this is because 90% of local Chinese companies would not comply with the new Chinese GMP regulations, but while the SFDA do appear to be keeping their options open regarding timing (and appear to be moving away from introducing a target date with a period of grace during which companies could move to compliance) a session from the US FDA gave a different picture.

It's 12 months since the US FDA set up shop (a field office) in China and although there are still only seven full time FDA staff in country - with only half of these conducting for-cause and high priority inspections - there appears to have been good progress in working with the Chinese State FDA (SFDA) as well as some of the Provincial FDA offices.

What we appear to be seeing is the Chinese authorities committing to address the regulatory/quality concerns that threatened to impact their export markets last year while also starting to address the reform of regulations in their home market, recognizing that the latter will take a while to address in a market consisting of literally thousands of (rapidly consolidating) manufacturers and distributors.

The US FDA and SFDA now meet on a monthly basis, with the SFDA acting as observers to some US FDA inspections. The US FDA is helping to fund training for the SFDA, has reviewed the pending Chinese GMP regulations and has provided 10 GCP regulations for the SFDA to translate. This is all part of the US FDA strategy of helping to educate other regulatory agencies on the requirements of the US market and to help build inspection capacity (through the education of a cadre of SFDA inspectors trained in international regulatory expectations).

At the same time, multinational Life Sciences companies are sharing concerns about Chinese products with the US FDA, who are in turn discussing issues with the SFDA and there is also agreement between the US FDA and SFDA to focus on ten high risk products (mainly pharmaceutical, but some medical devices).
This co-operation provides evidence of the US FDA's desire to work more effectively with other regulatory agencies and will certainly start to address concerns about Chinese product.

At the same time it will also help the Chinese authorities to better regulate their own market, which is forecast to be the world's third largest by 2013 (behind the US and Japan). While 'rogue traders' operating out of China will undoubtedly be of continuing concern with respect to product quality and counterfeiting, at least problems with the legitimate market are starting to be addressed.

Whatever people may think about the Chinese government's method of implementing change, there is no doubt that effective reforms can be implemented, and probably more quickly than in many other markets. Although this is just the beginning there is no doubt that regulatory change is happening - which will be good for patients in China as well as the rest of the world.

Tuesday, October 26, 2010

What’s in a (Unique) Number?

Over the past eighteen months I’ve been contributing to a hefty tome which focuses on supply chain management in the pharmaceutical sector, specifically looking at some of the regulatory issues associated with information systems and information technology.

Amongst the issues I identified as still requiring resolution was the need to be able to more effectively track products through the automated supply chain in order to counter the growing problems of counterfeiting, to reduce dispensing errors and to better support product recalls.
The good news is that in the time it has taken to write, review, edit and publish a book there has been significant progress by various industry bodies and regulators in defining what is needed in terms of:
  • Uniquely identifying product at multiple levels, down to the final package or device,
  • Establishing databases to share information and reduce duplication of data
  • Leveraging existing standards such as GS1 Healthcare Standards
This covers not only drug products but also medical devices, and while there is still a way to go with respect to international harmonization, the increasing adoption or incorporation of standards such as GS1 means that it should be possible for manufacturers to develop internal identification codes that can be used internationally in the commercial supply chain while also meeting the various requirements of different regulatory bodies.
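
As a purely illustrative example of what such an identifier looks like in practice, serialized pharmaceutical packs commonly carry a GS1 element string that combines the product-level GTIN with an expiry date, lot number and unique serial number (GS1 Application Identifiers 01, 17, 10 and 21). The values in the sketch below are made up:

    # Sketch of a serialized GS1 element string of the kind carried in a 2D
    # DataMatrix on a pack. GTIN, expiry, lot and serial values are invented.
    GS1_AI = {
        "01": "GTIN",            # Global Trade Item Number (product level)
        "17": "expiry (YYMMDD)",
        "10": "batch/lot",
        "21": "serial number",   # unique per pack
    }

    def build_element_string(gtin, expiry, lot, serial):
        """Assemble a human-readable GS1 element string."""
        return "(01)%s(17)%s(10)%s(21)%s" % (gtin, expiry, lot, serial)

    print(build_element_string("01234567890128", "121231", "LOT123A", "000000012345"))
    # (01)01234567890128(17)121231(10)LOT123A(21)000000012345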

While there is still a long way to go with respect to tackling specific issues such as how to label a re-usable, sterilizable medical device or how to label combination products it seems as if solutions are within the grasp of the industry – a fact borne out by the number of large pharmaceutical and medical device companies now investing considerable sums of money in developing solutions.

Regulators are accepting that any regulations should be independent of any specific technology and that sensible exceptions (or alternatives) may be required for specific product profiles or devices.
However, to truly develop and deliver workable and cost effective solutions it is necessary to look beyond the basic information technology that assigns and registers identification numbers, prints and scans codes, and compiles databases.

Much of the discussion is currently being driven by technology vendors who have admittedly gone a long way to making such solutions possible. However, there appear to be very few technology vendors who are looking at the big picture. It also appears that individual pharmaceutical and medical device companies focus upon specific aspects of the full end-to-end solution, depending on the specific experience and ‘hot buttons’ of the teams looking at these solutions. While some teams focus on how to use RFID others are concerned about how relabeling should be managed, but I've seen relatively few companies putting everything together to deliver not just regulatory compliance, but real business advantage.
While information technology will of course underpin any serialization, track ‘n’ trace or electronic pedigree solution, it is information systems that will deliver efficient and cost effective solutions.

This requires that Regulated Companies look beyond the printers, laser etching, label stock, etc and take a look at the strategic information flows and business processes that will be needed to make these technologies work as a business solution.

This requires a strategic review of:

  • Product master data management, at multiple levels and across all geographies - addressing the critical issues of consistent product codes and the federation and synchronisation of product codes between systems,
  • Serial number allocation, management and security on a global basis - remembering that real time solutions will need to keep working even while the Internet connection to some central database is down (see the sketch after this list),
  • Changes to packaging and labelling operations, which may restrict the flexibility that some organizations are used to - it will no longer be possible to assemble-to-order or label in country unless unique serial numbers can be generated at the right point in the supply chain,
  • Modifications to business processes to support serialisation - including necessary changes to supporting transactional applications such as MES, ERP and CRM systems,
  • Integration of systems ranging from printing, labelling and scanning systems at the bottom end, through MES and into ERP systems at the top end – and the realignment of the supported business processes,
  • Collaborative working with third parties including contract manufacturers, customers and regulators – ensuring that identification data is available to trusted and authenticated users while assuring the security of individual serial numbers. 
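
To make the resilience point in the serial number bullet concrete, one common pattern is for each site or packaging line to draw down pre-allocated blocks of numbers from the central allocation service while the connection is up, so that unique numbers can still be issued when it is down. The sketch below assumes a simple block-allocation service; the class, method and format names are all illustrative:

    # Illustrative sketch only: a packaging line draws blocks of serial numbers
    # from a central allocation service in advance, so it can keep issuing
    # unique numbers even if the connection to the central database is lost.
    class SerialNumberPool:
        def __init__(self, site_id):
            self.site_id = site_id
            self.blocks = []          # list of [next_number, end_number] ranges

        def replenish(self, central_service, block_size=100000):
            """Request a new pre-allocated block while the connection is up."""
            start, end = central_service.allocate_block(self.site_id, block_size)
            self.blocks.append([start, end])

        def next_serial(self):
            """Issue the next serial number from the local pool (works offline)."""
            if not self.blocks:
                raise RuntimeError("Local pool exhausted - replenish before going offline")
            block = self.blocks[0]
            serial = block[0]
            block[0] += 1
            if block[0] > block[1]:
                self.blocks.pop(0)    # block used up, move on to the next one
            return "%s-%012d" % (self.site_id, serial)
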
These key business challenges need to be addressed while at the same time tackling the necessary information technology challenges. As even small pilots have discovered, relying on people to populate this data is extremely time consuming and inefficient - in many cases we're talking about maintaining tens of thousands of product codes and serial numbers running into the hundreds of millions. To work properly, solutions will need to rely upon internet services and service oriented architecture (SOA) – an area where the Life Sciences industry has been lagging behind other industries.
With regulatory deadlines now looming in the US and Europe, Life Sciences organizations are starting to address the issue seriously – but let’s just hope that it isn’t too late. While it is certainly possible to implement partial solutions in time to meet key regulatory milestones, unless these issues are tackled at a strategic business level the cost to industry may well outweigh the benefit in terms of risk mitigation.

Friday, October 22, 2010

IT Implications From A Shift in Regulatory Focus

At yesterday’s ISPE GAMP® UK Forum meeting it was interesting that, while the regulatory roundup included the usual feedback from Regulated Users who have undergone regulatory inspections of their GMP and GDP areas, there was also a perceptible shift in emphasis.


While a number of GMP inspections did look at some computer systems related issues (reviewing validation plans and reports, disaster recovery plans, test protocols, change controls and the like) this appeared less detailed than is usually reported and there were a couple of interesting things of note:
  • A significant number of inspections weren’t looking at computer systems validation in detail,
  • There was significant discussion around enforcement actions in other parts of the business, specifically in sales and marketing.

As last week's coordinated action against counterfeit products in the supply chain showed, there is a growing concern with what is going on outside reputable manufacturing. These actions were coordinated by Interpol and supported by industry bodies, national law enforcement agencies and regulatory agencies, resulting in the seizure of more than 1 million tablets, active investigations into 76 individuals and the closure of almost 300 websites (with more to come).

Reputable companies may not be the focus of such activities, but regulators have other concerns with regard to what respectable pharmaceutical companies are doing as part of their everyday business that is also perceived as putting patients at risk.

As highlighted in an opinion piece in last week’s New Scientist (written by Dr Paul Thacker, in the 16th October edition), the increase in off-label prescribing is potentially placing patients at risk, and this is increasingly likely where pharmaceutical companies exercise less than tight control over the activities of those selling their products. As a result of the US Health Reform Bill, from 2013 companies will be required to disclose payments to doctors in excess of $10 and explain why the payment was made.

All of this means that regulations and enforcement actions are extending into almost every part of a Life Sciences company. However, very few companies are prepared to deal with this situation, and many fail to see that Information Systems can be both a blessing and a curse.

A well-defined and ‘validated’ computerized process can enforce actions that comply with regulatory requirements and can also significantly ease the burden of regulatory reporting. However, computerized systems that are poorly defined, insecure and easy to by-pass allow people to operate outside of the regulations, either mistakenly or by intent.

As such, Life Sciences companies should be looking to leverage experience from the GMP and GDP areas, where the validation of computerized systems is, for large parts of the developed world, a way of life. To rise to these new regulatory challenges it will however be necessary to:

  • Take a sensible, cost effective risk-based approach, recognising that GMP and GDP systems are usually of much higher risk and that techniques will need to be adapted to suit lower risk profile systems and applications
  • Recognize that the risk is inherent within the overall business and take an approach that considers the computerized system and information technology as just one element of the overall business process and information system (remembering that information also exists in other forms).

When used wisely, information technology can help Life Sciences companies respond to the changing regulatory landscape in a cost effective manner, delivering business improvement and supporting compliance.

The other side of the coin is that we can expect increasing regulatory focus and enforcement when the regulatory issues associated with the use of websites, collaborative portals, social networking sites, CRM systems and the like are ignored.

On the positive side, Life Sciences companies have time to respond and think about how to strategically leverage the knowledge and experience that already exists within other domains in the industry (GMP etc). The downside is that few appear to be thinking about this at a strategic level just yet. Only time and on-going enforcement actions will tell how long any period of grace will be.

 

Tuesday, October 12, 2010

Marked Decline in Sponsored Link Advertising following US FDA Enforcement

An interesting recent piece of news was that sponsored link advertisements for pharmaceutical products have declined by more than 50% following a spate of Warning Letters from the US FDA. According to various news articles published on March 26, 2009, the Division of Drug Marketing, Advertising, and Communications (DDMAC) of the U.S. Food and Drug Administration (FDA) sent warning letters to 14 major pharmaceutical manufacturers identifying specific brands as being in violation of FDA fair balance guidelines. The letters stated that sponsored link advertisements for specific drugs were misleading due to the exclusion of risk information associated with the use of the drug.

Most of these companies quickly removed their sponsored ads for these products and others not specifically mentioned in the letters. As a result, the number of sponsored links for pharmaceutical brands has dramatically declined as manufacturers changed their strategies to ensure compliance.

This illustrates neatly what we have been pointing out for some time; there is nothing special about the Internet as far as regulation is concerned. Advertising on the Internet is governed in the same way as advertising using any other medium, and non-compliance with the rules that govern advertising will be dealt with similarly.

This poses a challenge on a number of levels for the companies concerned. The very nature of sponsored link advertisements mandates brevity. In a magazine you can have your ad on one page and the list of warnings in small type on the next page or two, but how is this to be handled in a sponsored link of 25 words or even fewer?

This isn’t the only problem with on-line ads. They might be a common sight to US-based consumers but on a global stage they are unusual. In fact the only countries that I know of where direct-to-patient advertising is permitted are the USA and New Zealand. Other countries such as the UK specifically prohibit advertising of Prescription Only Medicines to non-health professionals. This places the onus squarely on the regulated company to make sure that they target their adverts correctly, otherwise they risk action from other regulatory bodies and not just the FDA.

For more on this topic see the Business & Decision webcast "Controlling Life Sciences Promotional Activities on the Internet"

Wednesday, May 19, 2010

New Part 11 Assessment (and Enforcement) Is Coming

As we report in this month's copy of our ValidIT newsletter (focusing on IS Compliance and Computer System Validation issues) it looks as if the US FDA's CDER division wants to assess the state of play in the industry with respect to 21 CFR Part 11 (Electronic Records, Electronic Signatures).

Although the scope and specific program areas aren't yet decided, the way they intend to do this is to ask specialists in the Agency to accompany Inspectors from the Field Divisions on routine inspections and look at issues around Part 11, taking appropriate enforcement action where necessary. This is to help them understand how the industry is responding to the 2003 Scope and Application Guidance and to help the Agency decide how to revise Part 11.

This demonstrates a pragmatic approach to resolving this open issue and the Agency is to be applauded for taking a proactive yet measured approach (other Divisions within FDA aren't directly involved and are keeping a watching brief).

I hope that what they'll find - certainly on inspections in North America and Europe - is that:
  • Most Regulated Users have taken Part 11 on board and are responding well, applying a risk-based approach where appropriate,
  • Technology has moved on, allowing Suppliers to meet Part 11 technical requirements much more easily, leveraging significant advances in security, encryption and digital signatures.
Of more concern is not what is happening in North America and Europe, but what is happening in the emerging economies where an increasing proportion of pharmaceutical and medical device components and products are being manufactured. Major international Life Sciences companies who have set up operations in countries like China, India and Brazil are largely applying their own internal processes and procedures and are leveraging mature software products from experienced vendors - Part 11 compliance shouldn't be a problem here - so far, so good.

However, there is also a significant number of indigenous API, pharmaceutical and medical device companies in these markets, often using local software developers to avoid the licensing or overseas development costs of more established software from multi-national vendors. Our experience in these markets is that in many of these cases the requirements of Part 11 are far less well understood.

Looking at any of the on-line forums that exist, anyone who reads some of the Part 11 questions posed by certain individuals and organizations from some countries will realize that in many cases the current level of understanding (and in some cases the level of technology) is where North America and Europe was ten or more years ago. What we might consider to be 'simple' questions and answers only appear simple based on more than fifteen years discussion and experience in the industry, which many newcomers understandably lack.

Now this isn't a rant about moving jobs 'overseas' or about how unfair lower cost labor is in emerging markets - we all compete in a global economy. Working with end users and suppliers in these markets I know how well educated their labor pool is, how hard working they are and how innovative they can be.

What I hope is that the FDA will not restrict their assessment of the state of Part 11 to just their domestic market or traditional developed markets (Canada and Europe), but that they will also include a broader set of overseas manufacturers, to determine what the overall state of the market is with respect to Part 11 compliance.

Without this representative sample there are two potential risks:
  • The Agency concludes that things are in relatively good shape and that no great changes are needed to Part 11, or the enforcement thereof. This has the potential to miss possible issues in emerging economies where fraud can be as much of an issue as accidental problems with electronic records and signatures,
  • The Agency develops a revised Part 11 based upon an assumption that all software developers (and their clients) generally have access to the latest technology, which can again lead to compliance risks. Any new Part 11 should clearly avoid the problems created by the original preamble and should not focus on any specific technologies or techniques.
I also hope that the Agency will continue to work in conjunction with the industry to understand the underlying causes of any issues they do find and work with the industry to ensure that all manufacturers and their software suppliers have an adequate understanding of Part 11 and the current (and future) expectations of the Agency.

Thursday, April 22, 2010

Computer System Validation – Business as Usual?

A colleague asked me earlier today what were the big issues at the moment in computer system validation – and I couldn’t really think of any.


After more than twenty years spent introducing computer system validation to a lot of companies, getting ready for Y2K, consulting on and responding to Part 11, addressing infrastructure qualification and adopting a risk-based approach to validation, the question is very much ‘where next?’.

To some extent it depends on what happens with risk-based validation. As the results from our webcast polls show, many Life Sciences organisations are still struggling to adopt a justifiable risk-based and cost effective approach to computer system validation.

At the moment it does appear to be business as usual – we still see computer system validation issues cited in FDA Warning Letters (and anecdotally reported by other regulatory agencies), but it's justified and at a reasonable level in comparison to other more pressing topics – very much what we were used to around a decade ago.

However, if companies continue to use taking a risk-based approach as an excuse for simply doing less – rather than providing a real risk-based rationale for shifting resources to the areas of greatest risk – things may change. Some regulatory agencies have already commented that they are getting wise to ‘risk-based’ equating to ‘simply doing less’, with companies adopting GAMP® 5 as a flag of convenience for reducing spending on computer system validation without any clear rationale for doing less. Some inspectors have warned that they will take enforcement action unless there is a clear and sound risk-based rationale for reducing the level of validation. Efficiency savings are fine, but only when the same goals are met.

There is then a possibility that we could see an increase in enforcement actions in response to Life Sciences companies taking the cost savings too far, but hopefully common sense will prevail as more individuals and organisations really start to understand how to achieve the same objectives with less time and effort.

That leaves us with the other ‘big issue’ – which is how the industry is looking at changes in IT – such as cloud computing, virtualization, outsourcing and the like – and wondering how to apply risk-based principles to new technology and different business models.

While many Life Sciences companies are still relatively slow to change others are quietly moving ahead and the immediate future is probably one of evolution and not revolution. That’s not to say however that such evolution isn’t exciting – there is great potential to leverage newer technologies and models to deliver enhanced business performance, reduce costs and help restore the bottom line. If we can seize these opportunities and also address the compliance and validation issues in a cost effective manner then we’re in for a very interesting time – even if it’s not quite as exciting as when the regulators were giving everyone a hard time.

Wednesday, April 21, 2010

Answers to Webcast Questions - Compliant Business Intelligence and Analytics in Life Sciences

Thank you to everyone who attended the webcast "Compliant Business Intelligence and Analytics" and who submitted questions. The recording is now on-line and subscribers can download the slides from the Business & Decision Life Sciences website via the Client Hub.
Listed below are the questions that we didn't have time for in the live webcast, along with the answers we promised to provide.


Q. How could BI be beneficial in an IT industry "IT Project"?
A. IT projects and processes are another subset of business processes and Business Intelligence and Analytics can certainly be applied there. The use of Key Performance Indicators in IT Projects and Processes was covered extensively in our webcast "Measuring IS Compliance Key Performance Indicators". This includes the use of Business Intelligence applications for supporting project and process improvement, both in terms of efficiency and cost effectiveness and also in terms of regulatory compliance.

Q. How would you qualify a BI solution provider (if one ever needed to be hired for a project)?
A. No differently from qualifying any other vendor. We would focus on the maturity of the solution provider in terms of:
- Track record in Business Intelligence (do they know the specific technology/application, can they help develop a BI strategy and architect a BI solution?).
- Track record in Life Sciences (and in the particular business domain [e.g. clinical trials versus sales and marketing] and the particular sector [e.g. pharmaceuticals, medical devices, biomedical etc]).
Assuming that a BI solution had already been selected we would also look to the BI vendors to make recommendations with respect to which solution provider they would recommend.
When combined, these factors would reduce a list of potential suppliers to a manageable number.

Supplier selection seems to be a question that has been asked a few times in various webcasts and is something we'll look at covering in more detail in a future webcast.
Thanks again for joining us for the webcast, and if there are any follow-up questions you can submit them via the Life Sciences website.

Wednesday, February 24, 2010

Answers to Webcast Questions - Leveraging ICH Q9 / ISO 14971 in Support of IS Compliance

Thanks to everyone who attended the webcast "Leveraging ICH Q9 / ISO 14971 in Support of IS Compliance" and who submitted questions. The recording is now on-line and subscribers can download the slides from the Business & Decision website as usual.

Listed below are the questions that we didn't have time for in the live webcast, along with the answers we promised to provide.


Q. Do you find that IT teams want to take the time to conduct proper risk assessments?
A. It all depends on the risk assessment process and model, whether it is scaled appropriately to the project / system and how well trained the IT team is. Assessing the risk severity is best left to the quality / regulatory and business subject matter experts, leaving the IT staff to think about technical risk scenarios and the risk likelihood and detectability.
Most professional IT staff evaluate and mitigate risk on an automatic basis, at least as far as the technology is concerned. For example, if it’s a critical business system the IT team will usually suggest redundant discs or mirroring to a DR site as a matter of course. In many cases you need them to reverse engineer their logic and document the rationale for their decisions using appropriately scaled tools and templates.
If you can make it clear to the IT staff that their expertise is valued and respected, and that we just want them to rationalize and document their decisions using a process that isn’t too onerous, we usually find that there is good buy-in.

Q. Why do all your risk diagrams or maps make a low impact/high probability event equivalent to a high impact/low probability event? Surely this is both misleading and dangerous.
A. They’re not our diagrams and maps – they are from the GAMP® Guide or GAMP® Good Practice Guides. Using the GAMP® risk assessment model gives Risk Class 2 for both high severity/low likelihood and low severity/high likelihood.
Equating severity and likelihood in this way wouldn’t be wise and could possibly increase the possibility of an unacceptable risk being seen as acceptable when considering the hazards associated with a medical device or the risk to a patient through the use of a new drug. However, GAMP® attempts to provide a relatively simple risk assessment model which is cost effective when used in the implementation of computerized systems.
What wasn’t shown in the project example included in this webcast were the specific criteria used to qualitatively assess risk severity and risk likelihood, which erred on the side of caution for this relatively high risk project/system.
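
For readers who want to see how the two-step model hangs together, here is a small sketch. Only the two cells called out in the question (both Risk Class 2) are taken from the discussion above; the remaining cells and the detectability rule follow the same diagonal pattern but are illustrative assumptions, so check them against the GAMP® 5 guide itself before relying on them:

    # Severity and likelihood combine to give a risk class; detectability then
    # adjusts the class to give a risk priority. Values are illustrative.
    RISK_CLASS = {
        ("high", "high"): 1,   ("high", "medium"): 1,   ("high", "low"): 2,
        ("medium", "high"): 1, ("medium", "medium"): 2, ("medium", "low"): 3,
        ("low", "high"): 2,    ("low", "medium"): 3,    ("low", "low"): 3,
    }

    def risk_priority(severity, likelihood, detectability):
        """Combine risk class with detectability to give a risk priority."""
        risk_class = RISK_CLASS[(severity, likelihood)]
        if detectability == "high":
            return min(risk_class + 1, 3)   # easy to detect: lower priority
        if detectability == "low":
            return max(risk_class - 1, 1)   # hard to detect: higher priority
        return risk_class

    print(risk_priority("high", "low", "medium"))   # -> 2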

Q. Can you comment on how pressure testing a system can provide data on probability of failure?
A. Assuming that ‘pressure testing’ relates to the stress testing of software rather than the pressure testing of a process vessel, it can only provide a limited set of data on the probability of failure. Because software does not change over time (assuming effective change control and configuration management processes), stress testing has little value in terms of the software functionality. Boundary, structural (path & branch) and negative case testing have more value here and should provide data on the failure modes of the software rather than the probability of failure.
Where stress testing can be useful is in looking at the probability of failure of the infrastructure i.e. network constraints, CPU capacity, storage speed and capacity. Stress testing can provide not only a useful idea of the probability of failure, but should allow users to identify the circumstances (loading) that lead to a particular failure mode and then define sensible limits which should not be exceeded.
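
As a sketch of how this works in practice, a stepped stress test keeps increasing the load until the failure mode appears and then sets the operational limit with a margin below that point. Everything here is illustrative; run_load() is simply a placeholder for whichever load-generation tool is actually used:

    def run_load(concurrent_users):
        """Placeholder: drive the system at the given load and return the
        observed error rate (0.0 to 1.0) and mean response time in seconds."""
        raise NotImplementedError("hook this up to your load-testing tool")

    def find_operating_limit(max_users=1000, step=50, error_threshold=0.01):
        last_good = 0
        for users in range(step, max_users + 1, step):
            error_rate, response_time = run_load(users)
            print("users=%d error_rate=%.3f response=%.2fs" % (users, error_rate, response_time))
            if error_rate > error_threshold:
                break                     # failure mode reached at this load
            last_good = users
        # Set the operational limit with a safety margin below the failure point
        return int(last_good * 0.8)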

Q. Do you think that proper selection of risk analysis technique (like DFMEA, FTA) greatly improves risk management of medical device companies?
A. Yes, absolutely. Both ICH Q9 and ISO 14971 talk about the appropriate selection of appropriate risk assessment models and tools and ICH Q9 Annex I provides a useful discussion on this topic.

Thanks again to everyone who joined us for the webcast and we look forward to catching up for the next webcasts.

Thursday, February 18, 2010

Answers to Webcast Questions - Using Compliant ERP E-Records in Support of Regulatory Compliance

In yesterday's webcast Using Compliant ERP E-Records in Support of Regulatory Compliance, there were a couple of technical questions around the use of E-Records in Oracle E-Business Suite that we didn't get time to answer.

Thanks to our colleagues at Oracle for supporting the webcast and their help in answering these questions.

Q. Are new Oracle E-Business E-Record enabled events being added to the 11.5.10 release or just Release 12?
A. New developments are focused on Oracle E-Business Suite Release 12 and most of the recent E-Record enabled events are part of the Release 12 functionality e.g. Manufacturing Execution System. Release 11.5.10 is entering the maintenance mode of its life cycle, so although some Release 12 functionality was previously ported back to 11.5.10, do not expect much, if any, new functional development on 11.5.10 moving forward.


Q. In an earlier Business & Decision webcast (Testing Best Practices: 5 Years of the GAMP Good Practice Guide), it was suggested that testing documentation be obtained from the vendor. What can Oracle provide to help minimize our internal testing?
A. As we discussed on the E-Records webcast, Oracle E-Business Suite customers can access automated test scripts that will run against the E-Business Suite Vision data set from the Oracle support site (formerly MetaLink). Just log in and search on "Test Starter Kit".
For clients implementing Oracle E-Business Suite using Oracle Accelerators, test scripts are also generated by the Oracle Accelerator tool and these are specific to the client's configured instance (see the webcast "Compliant ERP Implementation in the Regulated Life Sciences Industry" for more information).

Thanks to all of you for your questions and remember that you can submit questions at any time to validation@businessdecision.com or erp@businessdecision.com, or by following the 'Ask an Expert' links on the website.

Friday, February 12, 2010

New Life Sciences Index Announced

Based upon some useful 'vox pop' information collected by Business & Decision's webcast and on-line surveys, plus information from other sources, we have now started an on-line set of Life Sciences indices, revealing interesting information and trends for both Regulated Companies and suppliers to the Life Sciences industry.

This has just gone live at Life Sciences Index and we will be adding new indices over the coming weeks. If you have any data that you would like to share or would like to see, e-mail us at life.sciences@businessdecision.com and we'll see what we can do.

Thursday, February 11, 2010

Risk Likelihood of New Software

Here's a question submitted to validation@businessdecision.com - which we thought deserved a wider airing.

Q. To perform a Risk Assessment you need experience of the software's performance. In the case of new software without previous history, how can you handle it?

A. We are really talking about the risk likelihood dimension of risk assessment here.

GAMP suggests that when determining the risk likelihood you look at the ‘novelty’ of the supplier and the software (we sometimes use the opposite term – maturity – but we’re talking about the same thing).

If you have no personal experience with the software you can conduct market research – are there any reviews on the internet, any discussions on discussion boards or is there a software user group the Regulated Company could join? All of this will help to determine whether or not the software is ‘novel’ in the Life Sciences industry, whether it has been used by other Regulated Companies and whether there are any specific, known problems that will be the source of an unacceptable risk (or a risk that cannot be mitigated).

If it is a new product from a mature supplier then you can only assess risk based on the defect/support history of the supplier's previous products and an assessment of their quality management system. If it is a completely new supplier to the market then you should conduct an appropriate supplier assessment and would generally assume a high risk likelihood, at least until a history is established through surveillance audits and use of the software.

All of these pieces of information should feed into your initial high level risk assessment and be considered as part of your validation planning. When working with ‘novel’ suppliers or software it is usual for the Regulated Company to provide more oversight and independent verification.

At the level of a detailed functional risk assessment the most usual approach is to be guided by software categories – custom software (GAMP Category 5) is generally seen as having a higher risk likelihood than configurable software (GAMP Category 4), but this is not always the case (some configuration can be very complex). Our recent webcast on "Scaling Risk Assessment in Support of Risk Based Validation" has some more ideas on risk likelihood determination which you might find useful.
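By way of illustration only, here is a minimal sketch (Python) of how supplier maturity and GAMP software category might be combined into an initial likelihood rating. The scales, weightings and thresholds are invented for the example - your own risk management procedure should define them - but it shows the kind of simple, documented logic involved.

    # Illustrative mappings only - real scales and thresholds belong in your own procedure.
    SUPPLIER_MATURITY = {
        "new to market": 3,
        "mature supplier, new product": 2,
        "mature supplier, established product": 1,
    }
    GAMP_CATEGORY = {5: 3, 4: 2, 3: 1}  # custom > configurable > non-configured

    def risk_likelihood(supplier, gamp_category):
        # Combine supplier maturity and software category into a likelihood rating
        score = SUPPLIER_MATURITY[supplier] * GAMP_CATEGORY[gamp_category]
        if score >= 6:
            return "high"
        if score >= 3:
            return "medium"
        return "low"

    # Example: a completely new supplier offering configurable (Category 4) software
    print(risk_likelihood("new to market", 4))  # -> "high"

With a 'high' likelihood the Regulated Company would typically plan for more oversight and independent verification, as discussed above.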

Wednesday, February 10, 2010

Answers to Webcast Questions - Testing Best Practices: 5 Years of the GAMP Good Practice Guide

The following answers are provided to questions submitted during the "Testing Best Practices: 5 Years of the GAMP Good Practice Guide" webcast which we did not have time to answer while we were live.


Thank you all for taking the time to submit such interesting questions.

Q. Retesting: What is your opinion on retesting requirements when infrastructure components are upgraded? i.e. O/S patches, database upgrades, web server upgrades
A. The GAMP "IT Infrastructure Control and Compliance" Good Practice Guide specifically addresses this question. In summary, this recommends a risk-based approach to the testing of infrastructure patches, upgrades etc. Based on risk severity, likelihood and detectability this may require little or no testing, will sometime require testing in a Test/QA instance or in some cases they may or should be rolled out to the Production environment (e.g. anti-virus updates). Remember - with a risk-based approach there is no 'one-size-fits-all' approach.
 
Q. No value add for independent review and oversight? Why not staff SQE's?
A. Assuming that 'SQE' is Software Quality Expert, we would agree that independent review by such SQEs does add value, specifically because they are experts in software and should understand software testing best practices. Where we do question the value of quality reviews (based on current guidance) is where the Quality Unit has no such expertise to draw upon. In these cases the independent Quality Unit still has a useful value-add role to play, but this is an oversight role, ensuring that test processes and procedures are followed (by review of Test Strategies/Plans/Reports and/or periodic review or internal audit).

Q. What FDA guidance was being referred to re: QA review of test scripts etc not being necessary?
A. The FDA Final Guidance document “General Principles of Software Validation” doesn’t specifically state that QA review of test scripts is not necessary, but like the GAMP “Testing of GxP Systems” Good Practice Guide, GAMP 5 and ASTM E2500, it places the emphasis on independent PEER review, i.e. review by suitably qualified, trained or experienced peers (e.g. software developers, testers etc) who are able to independently review test cases. Although QA IT people may well have the necessary technical background to play a useful part in this process (guiding, supporting etc), this is not always the case for the independent Quality Unit, who are primarily responsible for product (drug, medical device etc) quality.
 
Q. Do the regulators accept the concept of risk-based testing?
A. As we stated in response to a similar question in the webcast, regulatory authorities generally accept risk-based testing when it is done well. There is a concern amongst some regulators (US FDA and some European inspectors) that in some cases risk assessments are being used to justify decisions that are actually taken based on timescale or cost constraints.
In the case of testing, the scope and rigor of testing is sometimes determined in advance and the risk assessment (risk criteria, weightings etc) is 'adjusted' to give the desired answer e.g. "Look - we don't need to do any negative case testing after all!"
The better informed regulators are aware of this issue, but where testing is generally risk-based our experience is that this is viewed positively by most inspectors.
 
Q. Do you think that there is a difference in testing good practices in different sectors, e.g. pharma vs. medical device vs. biomedical?
A. There shouldn't be, but in reality the history of individual Divisions in the FDA (and European Agencies) means that there are certain hot topics in some sectors, e.g.:
  • Because of well understood failures to perform regression analysis and testing, CBER is very hot on this topic in blood banking.
  • Because of the relatively high risk of software embedded in medical devices, some inspectors place a lot of focus on structural testing.
Although this shouldn't change the scope or rigor of the planned testing, it is necessary that the testing is appropriate to the nature of the software and the risk, and that project documentation shows that valid regulatory concerns are addressed. It is therefore useful to be aware of sector-specific issues, hot topics and terminology.

Q. Leaving GMP systems aside and referring to GxP IT, Clinical and Regulatory applications, how do you handle a vendor's minimum hardware spec for an application in a virtual environment?
We have found that vendors overstate the minimums (# of CPUs, CPU spec, minimum RAM, disk space usage, etc.) by a huge margin when comparing actual usage after a system is in place.
A large pharma I used to work for used a standard VM build of 512k RAM, to be increased if needed. This was waived for additional servers of the same application. In the newest version of VMware (vSphere 4) all of these items can be changed while the guest server is running.
A. Software vendors do tend to cover themselves for the 'worst case' (peak loading of simultaneous resource-intensive tasks, maximum concurrent users etc, plus a margin) to ensure that the performance of their software isn't a problem. The basic answer is to use your own experience, based on a good Capacity Planning and Performance Management process (see the GAMP "IT Infrastructure Control and Compliance" Good Practice Guide again). This should tell you whether your hardware is over-specified or not, and you can use historic data to size your hardware (there is a short sizing sketch at the end of this answer). It can also be useful to seek out the opinions of other users via user groups, discussion boards and forums etc.
Modern virtualization (which we also covered in a previous webcast "Qualification of Virtualized Environments") does allow the flexibility to modify capacity on the fly, but this isn't an option for Regulated Companies running in a traditional hardware environment. Some hardware vendors will allow you to install additional capacity and only pay for it when it is 'turned on', but these tend to be large servers with multiple processors etc.
At the end of the day it comes down to risk assessment - do you take the risk of not going with the software vendor's recommendation for the sake of reducing the cost of the hardware? This is the usual issue of balancing the project capex budget against the cost to the business of poor performance.
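To illustrate the capacity planning point, here is a minimal sketch (Python) that sizes a resource from historical utilization data rather than the vendor's quoted minimum: take a high percentile of observed usage and add an agreed headroom margin. The sample data and the 30% headroom figure are invented for the example.

    # Illustrative sizing from historic monitoring data - all figures are invented.
    HEADROOM = 0.30  # agreed safety margin on top of observed usage

    def percentile(samples, pct):
        # Simple nearest-rank percentile, adequate for a sizing estimate
        ordered = sorted(samples)
        index = min(len(ordered) - 1, int(round(pct / 100 * (len(ordered) - 1))))
        return ordered[index]

    def recommended_allocation(usage_samples, pct=95):
        # Size the resource from a high percentile of observed usage plus headroom
        return percentile(usage_samples, pct) * (1 + HEADROOM)

    # e.g. hourly RAM usage in GB collected from the monitoring system
    ram_usage_gb = [2.1, 2.4, 2.2, 3.0, 2.8, 3.3, 2.9, 3.1, 2.7, 3.4]
    print(f"Recommended RAM allocation: {recommended_allocation(ram_usage_gb):.1f} GB")

Comparing that figure against the vendor's quoted minimum gives you documented evidence to support the risk-based decision described above.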