Wednesday, December 14, 2011

Integrated ERP and CRM in Life Sciences


As many of you know, over the last few weeks we've organized a couple of webcasts looking at the integration of ERP and CRM systems ("Leverage Your ERP & CRM Data to Make Faster, Smarter Decisions" and "How To Integrate Product, Customer and Patient Data"). These are part of a series of five webcasts we are running on enterprise systems.

The last two webcasts have looked not only at the advantages of integrating ERP and CRM systems, but also at the use of master data management, business process orchestration and business intelligence tools.
Some good questions came out of yesterday's session and unfortunately we didn't have time to answer them all during the live webcast.

As promised, we've reproduced the unanswered questions and our answers here on our blog.

Q. Can you say more about why some life sciences companies are keen to get into healthcare management and what they are doing?

A. We talked about this quite extensively in one of our earlier webcasts, but basically we're seeing a number of life sciences companies move more into healthcare as they see their traditional profit margins being increasingly squeezed. Moving into healthcare has several advantages, such as:
  • Opening sources of new revenue
  • Ensuring better health outcomes for patients
In the case of the latter benefit, patients are more likely to continue using the regulated company's drugs or devices, thereby ensuring an on-going revenue stream. At a time when many payers (whether insurers or governments) are increasingly looking to pay based upon results, it makes sense for life sciences companies to ensure that patients are complying with their medication regimens and that treatment is successful.

As we saw in yesterday's webcast, we're seeing a number of life sciences companies increasingly extend the use of their CRM systems to incorporate patient and healthcare management.

Q. Can aggregate spend reporting be built into CRM systems?

A. It is certainly possible to build aggregate spend reporting based upon information that is held in CRM systems. For small to medium life sciences companies with only a single CRM system, that system may indeed contain all of the information required to produce accurate aggregate spend reports. However, where there are multiple CRM systems, data usually needs to be aggregated across the systems, and as we saw in yesterday's webcast there are also advantages in integrating information from the ERP system, such as cost information and expense data.
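As a deliberately simplified sketch of the aggregation involved (field names such as `hcp_id` and `amount` are hypothetical, and a consistent cross-system identifier is exactly what master data management provides), combining spend records from multiple systems might look like this:

```python
from collections import defaultdict

def aggregate_spend(*record_sources):
    """Combine spend records from multiple CRM/ERP extracts and
    total them per healthcare professional (HCP).

    Each source is an iterable of dicts with hypothetical fields:
    'hcp_id' (a master-data identifier shared across systems) and
    'amount' (spend expressed in a common currency).
    """
    totals = defaultdict(float)
    for source in record_sources:
        for record in source:
            totals[record["hcp_id"]] += record["amount"]
    return dict(totals)

# The CRM might hold meeting/meal spend; the ERP expense-claim data.
crm_spend = [{"hcp_id": "HCP-001", "amount": 120.0},
             {"hcp_id": "HCP-002", "amount": 45.0}]
erp_spend = [{"hcp_id": "HCP-001", "amount": 80.0}]

print(aggregate_spend(crm_spend, erp_spend))
```

The hard part in practice is not the arithmetic but ensuring that the same healthcare professional carries the same identifier in every source system.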

For those of you specifically interested in this topic, we'll be talking more about aggregate spend in a webcast in the New Year.

Q. With such a focus on external patient-facing activities, is the ERP system becoming less important?

A. The traditional reasons for implementing ERP systems have never gone away. Those of us who remember the days of early MRP and MRPII implementations understand the significant benefits that such functionality brings. For small to medium life sciences companies that are not currently leveraging an ERP system, there is considerable return on investment from implementing core ERP functionality. This should really be the primary reason for acquiring and implementing a new ERP system.

However, we are seeing ERP systems become increasingly integrated across the enterprise and not just in areas such as manufacturing and operations. While ERP systems are just as important as ever, they are becoming more and more integrated into the enterprise applications landscape.

For a company that has no ERP system, implementation of a new ERP system is probably one of the most important things the company can do in terms of investment in IT. However, once the basic ERP functionality has been implemented there are a number of additional end-to-end business processes that can be facilitated by integration of the ERP system, and it is perhaps true to say that additional manufacturing functionality may not be the most important extension of functionality.


If you have any remaining questions please do get in touch, and for those of you who missed the webcasts the recordings are still available on the Business and Decision Life Sciences website (see Past Events). We do hope you'll be available for the rest of the webcasts, where we will be looking at some interesting topics such as:

  • How to leverage software and system integration activities for cost-effective validation
  • How to integrate good record-keeping and document management with your ERP and CRM systems
  • How to plan for successful ERP and CRM systems as a regulated company

Tuesday, September 27, 2011

Software as a Service - Questions Answered

As we expected, last week's webcast on Software as a Service (Compliant Cloud Computing - Applications and SaaS) garnered a good deal of interest with some great questions and some interesting votes.


Unfortunately we ran out of time before we could answer all of your questions. We did manage to get around to answering the following questions (see the webcast recording for the answers):
  • Would you agree that we may have to really escrow applications with third parties in order to be able to retrieve data throughout data retention periods?
  • How is security managed with a SaaS provider? Do they have to have Admin access, which allows them access to our data?
  • How do you recommend the Change Management (control) of the SaaS software be managed?
  • How can we use Cloud but still have real control over our applications?
  • What should we do if procurement and IT have already outsourced to a SaaS provider, but we haven't done an audit?

As promised, we have answered below the remaining questions we didn't have time to address.

 
Cloud computing is, not surprisingly, the big topic of interest in the IT industry and much of business in general. Cloud will change the IT and business models in many companies and Life Sciences is no different in that respect.

 
We've covered this extensively during the last few months, drawing heavily on the draft NIST Definition of Cloud Computing, which is becoming the de facto standard for talking about the Cloud - regardless of Cloud Service Providers constantly inventing their own terminology and services!

If you missed any of the previous webcasts, they were:
- Qualifying the Cloud: Fact or Fiction?
- Leveraging Infrastructure as a Service
- Leveraging Platform as a Service


There are of course specific issues that we need to address in Life Sciences and our work as part of the Stevens Institute of Technology Cloud Computing Consortium is helping to define good governance models for Cloud Computing. These can be leveraged by Regulated Companies in the Life Sciences industry, but it is still important to address the questions and issues covered in our Cloud webcasts.

As we described in our last session, Software as a Service isn't for everyone and although it is the model that many would like to adopt, there are very few SaaS solutions that allow Regulated Companies to maintain compliance of their GxP applications 'out-of-the-box'. This is starting to change, but for now we're putting our money (literally - investment in our qualified data center) into Platform as a Service, which we believe offers the best solution for companies looking to leverage the advantages of Cloud Computing with the necessary control over their GxP applications.

But on to those SaaS questions we didn't get around to last week:

Q. Are you aware of any compliant ERP solutions available as SaaS?

A. We're not. We work with a number of major ERP vendors who are developing Cloud solutions, but their applications aren't yet truly multi-tenanted (see the SaaS webcast for the issues). Other Providers do offer true multi-tenanted ERP solutions, but they are not aimed specifically at Life Sciences. We're currently working with Regulated Company clients and their SaaS Cloud Service Providers to address a number of issues around infrastructure qualification, training of staff, testing of software releases etc. Things are getting better for a number of Providers, but we're not aware of anyone who yet meets the regulatory needs of Life Sciences as a standard part of the service.

The issue is that this would add costs, and this isn't the model that most SaaS vendors are looking for. It's an increasingly competitive market and it's cost sensitive. This is why we believe that niche Life Sciences vendors (e.g. LIMS and EDMS vendors) will get there first, when they combine their existing knowledge of Life Sciences with true multi-tenanted versions of their applications (and of course, deliver the Essential Characteristics of Cloud Computing - see webcasts).

Q. You clearly don't think that SaaS is yet applicable for high risk applications? What about low risk applications?

 
A. Risk severity of the application is one dimension of the risk calculation. The other is risk likelihood, which is heavily influenced by how dependent you are on your Cloud Service Provider. If you select a good Provider with good general controls (a well-designed SaaS application, good physical and logical security, mature support and maintenance processes) then it should be possible to balance the risks and look at SaaS, certainly for lower risk applications.
 
It still doesn't mean that as a Regulated Company you won't have additional costs to add to the costs of the service. You need to align processes and provide on-going oversight and you should expect that this will add to the cost and slow down the provisioning. However, it should be possible to move lower risk applications into the Cloud as SaaS, assuming that you go in with your eyes open and realistic expectations of what is required and what is available.
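The two-dimensional calculation described above can be sketched very simply. This is an illustrative toy, not a validated risk tool: real GxP risk assessments use documented qualitative scales and rationales, and the 1-3 ratings here are assumptions for the example.

```python
def residual_risk(severity, likelihood):
    """Toy qualitative risk score: severity and likelihood are each
    rated 1 (low) to 3 (high); the product gives a priority score
    used to decide whether a SaaS model is acceptable."""
    return severity * likelihood

# A low-severity application with a well-controlled Provider
# (low likelihood) scores far lower than a high-severity
# application with an unproven Provider.
print(residual_risk(1, 1), residual_risk(3, 3))
```

The point of the sketch is that a strong Provider pulls the likelihood dimension down, which is why lower-severity applications can become acceptable SaaS candidates even before the Provider landscape fully matures.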
 
Q. What strategy should we adopt to the Cloud, as a small-medium Life Sciences company?
 
A. This is something we're helping companies with and although every organization is different, our approach is generally:
  • Brief everyone on the advantages of Cloud, what the regulatory expectations are and what to expect. 'Everyone' means IT, Procurement, Finance, the business (Process Owners) and of course Quality.
  • Use your system inventory to identify potential applications for Clouding (you do have one, don't you?). Look at which services and applications are suitable for Clouding (using the IaaS/PaaS/SaaS and Private/Public/Community models) and decide how far you want to go. For some organizations IaaS/PaaS is enough to start with, but for others there will be a desire to move to SaaS. Don't forget to think about new services and applications that may be coming along in foreseeable timescales.
  • If you are looking at SaaS, start with lower risk applications, get your toe in the water and gradually move higher risk applications into the Cloud as your experience (and confidence) grows - this could take years and remember that experience with one SaaS Provider does not automatically transfer to another Provider.
  • Look to leverage one or two Providers for IaaS and PaaS - the economies of scale are useful, but it's good to share the work/risk.
  • Carefully assess all Providers (our webcasts will show you what to look for) and don't be tempted to cut audits short. It is time well worth investing and provides significant ROI.
  • Only sign contracts when important compliance issues have been addressed, or are included as part of the contractual requirements. That way there won't be any cost surprises later on.
  • Remember to consider un-Clouding. We've talked about this in our webcasts, but one day you may want to switch Provider or move some services or applications out of the Cloud.
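The inventory triage step above can be sketched as a simple classification. The field names (`gxp`, `risk`) and the suggested models are illustrative assumptions, not a validated decision tree; every organization's criteria will differ.

```python
def suggest_cloud_model(app):
    """Given a system-inventory entry (hypothetical fields),
    suggest a starting Cloud service model. A sketch of the
    triage described above, not a validated decision tool."""
    if app["gxp"] and app["risk"] == "high":
        # High-risk GxP systems: retain maximum control to begin with.
        return "On-Premise or IaaS/PaaS"
    if app["gxp"]:
        # Lower-risk GxP systems: SaaS possible with a good Provider.
        return "PaaS, with SaaS as a candidate"
    return "SaaS"

inventory = [
    {"name": "MES",              "gxp": True,  "risk": "high"},
    {"name": "Training records", "gxp": True,  "risk": "low"},
    {"name": "Travel booking",   "gxp": False, "risk": "low"},
]
for app in inventory:
    print(app["name"], "->", suggest_cloud_model(app))
```

Starting with the lower-risk rows of such a table is exactly the "toe in the water" approach described above.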
The Cloud is coming - in fact, it's already here. As usual, we're not always the earliest adopters in Life Sciences, but you need to be prepared to move and take advantage. We hope that our webcasts have helped - please do let us know if you have any questions.

E-mail us at life.sciences@businessdecision.com

Tuesday, September 20, 2011

GAMP® Conference: Cost-Effective Compliance – Practical Solutions for Computerised Systems

A very interesting and useful conference held here in Brussels over the past two days, with a focus on achieving IS compliance in a cost-effective and pragmatic way. It's good to see ISPE / GAMP® moving past the basics and getting into some more advanced explorations of how to apply risk-based approaches to projects and also to the operational phase of the system life cycle.


There was understandably a lot of discussion and highlighting of the new Annex 11 (Computerised Systems), with many of the presenters tying their topics back to the new guidance document, which has now been in effect for just two and a half months.

One of the most interesting sessions was when Audny Stenbråten, a Pharmaceutical Inspector of the Norwegian Regulator (Statens Legemiddelverk) provided a perspective of Annex 11 from the point of view of the regulator. It was good to see an open approach to the use of pragmatic risk-based solutions, but as was highlighted throughout the conference, risk-based approaches require a well-documented rationale.

Chris Reid of Integrity Solutions presented a very good session on Managing Suppliers and Service Providers and Tim Goossens of MSD outlined how his company is currently approaching Annex 11.

Siôn Wyn, of Conformity, provided an update on 21 CFR Part 11, which was really ‘no change’. The FDA are continuing with their add-on Part 11 inspections for the foreseeable future, with no planned end date and no defined plans on how to address updates or any changes to Part 11.

On the second day, after yours truly presented some case studies on practical risk management in the Business & Decision Life Sciences CRO and our qualified data center, Jürgen Schmitz of Novartis Vaccines and Diagnostics presented an interesting session on how IT is embedded into their major projects.

Mick Symonds of Atos Origin presented on Business Continuity in what I thought was an informative and highly entertaining presentation, but which was non-industry specific and was just a little too commercial for my liking.

Yves Samson (Kereon AG) and Chris Reid led some useful workshops looking at the broader impacts of IT Change Control and the scope and scalability of Periodic Evaluations. These were good, interactive sessions and I’m sure that everyone benefitted from the interaction and discussion.

In the final afternoon René Van Opstal, (Van Opstal Consulting) gave an interesting presentation on aligning project management and validation and Rob Stephenson (Rob Stephenson Consultancy) presented a case study on Decommissioning which, although it had previously been presented at a GAMP UK meeting, was well worth airing to a wider audience.

All in all it was a good couple of days with some useful sessions, living up to its billing as suitable for intermediate to advanced attendees. On the basis of this session I’d certainly recommend similar sessions to those responsible for IS Compliance in either a QA or IT role and I’m looking forward to the next GAMP UK meeting, and to presenting at the ISPE UK AGM meeting and also the ISPE Global AGM meeting later in the year.

Friday, September 16, 2011

The use of Unique Device Identifiers in Healthcare

Monday 12th and Tuesday 13th September saw a very interesting public meeting organized by the US FDA, entitled "Unique Device Identification (UDI) for Postmarket Surveillance and Compliance".

Rather than looking at details of the rule currently being developed for the unique identification for medical devices (details of which can be found at http://www.fda.gov/udi) the meeting looked at how UDIs would be used in the real world.

Whereas the pharmaceutical sector is looking to reduce or prevent counterfeiting through serialization (see our recent webcasts on serialization - "Strategic Management of Product Serial Identifiers" and "Serialized Labelling: Impacts on the Business Model"), in the medical devices sector there is a global drive to be able to uniquely identify medical devices at all points in the supply chain, at the point of initial use and throughout the life of the device. Whereas pharmaceutical products are clearly identified (e.g. via the National Drug Code [NDC] in the US), this is not the case for medical devices.

At the moment medical devices are identified inconsistently by manufacturer, model, product name, hospital-allocated item number, SKU number, etc. As the public meeting heard, the ability to uniquely identify a medical device has significant benefits in terms of:
  • More accurate device registries (e.g. of implantable devices)
  • Faster and more focused product recalls
  • Fewer patient/device errors (ensuring the right patient receives the right device)
  • Better post-marketing surveillance and adverse event reporting
Key to this will be not only the use of the UDI, but the development of data standards which will allow the significant therapeutic attributes of devices to be standardized. This will allow data to be analyzed by specific device model and attributes (e.g. drug-coated stents versus polymer-coated stents) and different models from different manufacturers to be compared in terms of patient outcomes.
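To give a flavour of what a UDI carries, here is a sketch that parses the human-readable form of a GS1-style device label, where the Device Identifier (Application Identifier 01, the GTIN) is accompanied by Production Identifiers such as expiry (17), lot (10) and serial number (21). The example label value is made up for illustration, and real UDI parsing must also handle the machine-readable form with its group separators.

```python
import re

# Common GS1 Application Identifiers found in UDIs.
AI_NAMES = {"01": "device_identifier", "17": "expiry",
            "10": "lot", "21": "serial"}

def parse_udi(label):
    """Parse the human-readable '(AI)value' form of a UDI label
    into named fields. Unknown AIs are kept under their raw code."""
    fields = {}
    for ai, value in re.findall(r"\((\d{2})\)([^(]+)", label):
        fields[AI_NAMES.get(ai, ai)] = value
    return fields

udi = "(01)00844588003288(17)141120(10)1BX7677(21)1234"
print(parse_udi(udi))
```

Once split this way, the Device Identifier can be used to look up the model, manufacturer and therapeutic attributes in a standardized database, which is precisely what enables the registry and recall benefits listed above.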

The tracking of devices via the Electronic Health Record (EHR) or Personal Health Record (PHR) is one of the most significant steps to enable all of this - the EHR records the Unique Device Identifier, thereby linking it to the model number and manufacturer, to the batch/lot or serial number where required, and to a host of other associated device data available from a manufacturer's database.


This is part of a global initiative to uniquely identify medical devices via a Global Medical Device Nomenclature - which is important when you consider how important it is for a German cardiac specialist to know exactly what sort of heart pacemaker is implanted in the Australian tourist who has just been rushed into the emergency room!

Although we're most likely a year away from finalization of the FDA rule on UDI, and two years away from the initial requirements for Class III devices, the use of UDI heralds the possibility of a new era of reduced hospital errors, better device safety, faster recalls, improved safety signal detection and the ability to use real evidence - and not marketing hype - to know which device is best for any given patient.


Details of the public meeting program and presentations can be found on the US FDA website at http://www.fda.gov/MedicalDevices/NewsEvents/WorkshopsConferences/ucm263947.htm

Thursday, May 19, 2011

Cloud Computing: Infrastructure as a Service Webcast

Yesterday saw the webcast of the first in a series of Cloud Computing webcasts - this one on "Infrastructure as a Service". The next ones look at "Platform as a Service" (on July 20th) and "Software as a Service" (on September 21st) - don't worry if the dates have passed by the time you come across this blog entry, because all of the webcasts are recorded and available via our Past Events page.

There was a good turnout and some good questions asked. Unfortunately we didn't have time to cover all of the questions before our hour ran out. We've therefore covered the questions and answers in our Life Sciences blog below:

The first questions we did quickly look at were about Change Control and Configuration Management:

Q. (Change Control) Pre-approvals tend to be the sticking points for changes; how have you overcome this?
Q. Is there a live configuration management database used?

A. These first questions related to how the Essential Characteristics of Cloud Computing (i.e. On-Demand Self-Service and Rapid Elasticity) can be met in a regulated and qualified environment.

Business & Decision does use pre-approved change controls for some types of like-for-like change, and we discussed our change control processes in more detail in our "Maintaining the Qualified State: A Day in the Life of a Data Center" webcast on January 12th 2011.

In the same webcast we also discussed the use of configuration management records. Depending on the platform and component, our configuration management records are either paper based or electronic, and in many cases we use a secure spreadsheet for recording platform- or component-specific configuration item details. Updates to such electronic records are 'approved' by the sign-off of the controlling change control record, which means that a separate paper document doesn't need updating, printing and signing. This supports the 'rapid elasticity' required in a Cloud model.
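The idea of tying each configuration update to an approved change control record, rather than to a separately signed paper document, can be sketched as follows. The item and change control identifiers are hypothetical, and a real system would of course add access control and audit-trail protections.

```python
from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    """Minimal sketch of an electronic configuration management
    record: each update references the controlling change control,
    whose sign-off constitutes the approval."""
    item_id: str
    value: str
    history: list = field(default_factory=list)

    def update(self, new_value, change_control_id):
        # Keep the superseded value and the approving change control
        # together, so the record's history is self-explanatory.
        self.history.append((self.value, change_control_id))
        self.value = new_value

ci = ConfigurationItem("SRV-042-RAM", "16GB")
ci.update("32GB", "CC-2011-0153")
print(ci.value, ci.history)
```

Because the approval lives on the change control record, expanding capacity only requires updating the electronic record, which is what keeps the process fast enough for rapid elasticity.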

Q. If PaaS is provided by 3rd party, would vendor audit be sufficient?


A. Although the next webcast in the series will discuss Platform as a Service (PaaS) in more detail, we did have time to briefly answer this question on-line. Generally an audit of any Cloud provider (IaaS, PaaS or SaaS) would be the minimum that would be required. This is essential to ensure that you understand:
- What service is being provisioned
- Who provisions which parts of the service (you or the third party)
- Who manages the services on a day-to-day basis
- Where they provision the service (where your data and applications will reside)
- How you, as the Regulated Company, can provide effective control (and demonstrate accountability to your regulators)
- How they provision and manage the relevant services (and do they actually do what their Policies and Procedures say that they do)
- What the specific risk scenarios are, the risk likelihood, and what risk controls need to be established beyond the Provider's standard services

Whether any additional actions are required will depend on the outcome of the initial audit. In some cases infrequent surveillance audits are all that would be required. In other cases additional metrics may need to be established in the SLA, and in some cases it might be determined that some services will need to stay On-Premise. If you download the slides from our website (let us know if you need to know how) you'll be able to see the flow chart for this process.

Q. In the case of IaaS, provisioning is tightly coupled with PaaS and thus requires provisioning of PaaS as well. How can your on-demand provisioning be achieved in a minute?

A. Our experience is also that in the real world, at least contractually, the provisioning of Infrastructure as a Service is coupled to Platform as a Service, i.e. we provide the complete Platform, including the infrastructure components (this also fits much more realistically with the GAMP definition of infrastructure, as discussed yesterday). However, in many cases the change is at the level of "processing, storage, networks, and other fundamental computing resources" (to use the NIST definition of IaaS), so it really is IaaS within a broader PaaS model.

Certain infrastructure components can be technically provisioned in a minute or so - additional storage, network connections etc. - as this is usually just a change to a configuration parameter. You obviously need to add the time to raise the change control and update the configuration management record, but for small changes that don't need client approval (because the change is based upon a client request), and because these processes and systems use electronic records, it can still be done in minutes rather than hours.

For physical infrastructure items (e.g. additional memory or CPUs) we can make the physical change and reboot the server, also in a matter of minutes. Where we need to prepare a new environment (e.g. switch the client's application to a different virtual machine with the right infrastructure) this may need additional time to prepare, but the downtime for the client can also be a matter of moments as we reallocate IP addresses etc.

Even where we have had to provision new virtual machines (which really is PaaS) this is done in a matter of hours, as we not only leverage standard specifications, designs and builds, but also leverage standard qualification documentation to ensure that the qualification process doesn't slow things down.

While it's true that most PaaS changes require more than a few minutes, it's usually a matter of hours rather than days.

Q. How are security exposures addressed within an IaaS scenario? (If the IaaS Provider is not responsible for the OS?)

A. Where existing capacity is simply being provisioned (or de-provisioned), such as storage, memory or CPUs, there should be no fundamental change in the risk scenario previously assessed, e.g. when you allocate additional disc capacity at a physical level it 'inherits' the security settings and permissions of the environment to which it is allocated. Issues with database security are handled at the level of PaaS, and any allocation of new database tables would of course need a reassessment of risk. The same is of course true regarding memory and CPUs.

At the IaaS level it is network capacity and specifically routings that need more thought. Allocating bandwidth to a specific client or application within a T1 pipe is again a matter of configuration and doesn't affect the associated security settings. Making changes to routings is of course more 'risky' and would require us to look again at the original risk assessments.

Most security risks do occur in the PaaS and SaaS models which we'll be looking at in more detail in the future webcasts in this series.


Thanks again to everyone for joining us on the webcast yesterday - if you missed it, the recording will stay on line as long as the infrastructure is provisioned by BrightTalk! We hope you'll join us again soon.

Monday, April 11, 2011

Interesting GAMP UK Meeting

As usual, last week's GAMP UK meeting (held at Perkin Elmer) was informative and useful, especially in terms of the discussions that took place.

The formal agenda included a presentation by Jenni Sanders on the use of Business Process Model and Notation (BPMN) for defining user requirements (something we've been leveraging at Business & Decision for years) and a case study by Richie Fraser at Pfizer on the validation of a cell culture counter changed from non-GxP to GLP use.

From a business perspective the most interesting session looked at the use of GAMP in the blood banking industry, with Janet Samson from the Welsh Blood Service describing some of the cultural and organisational issues faced in the sector. Over the last five years all blood banks in Europe have come under direct regulatory oversight, but it is clearly a challenge to be part of a government-led health service while being regulated by a different part of the same government. With a number of projects in this sector, this is no real surprise to anyone at Business & Decision, but it does prove that many of the real issues around validation are related to people and organisations, not technology.

There was also the usual regulatory round-up, with some feedback on the FDA's Part 11 (Electronic Records, Electronic Signatures) inspections. To date it appears that the non-specialist inspectors who have been asking about the implementation of Part 11 have learned more from the companies they have inspected than vice versa. These were, however, US and European based inspections, and it would be interesting to hear how the 'overseas' portions of this program are progressing.

In other regulatory news, the tendency for the faster escalation of observations and the expectation to address any issues at all sites continues. It was also noted that there is a continued background of enforcement actions being taken around website content.

Chris Reid presented on the new Annex 11, but to be honest the experienced audience was generally aware of it, and the general feeling is that it will have minimal impact on those Life Sciences companies already following GAMP Good Practices (as we suggested in our "Annex 11, Changes to Computerised System Guidelines in the EU" webcast back in February). There was a feeling that some consultancies are over-inflating the issues associated with the new Annex 11, but our view continues to be that there are a significant number of companies who didn't comply with the old Annex 11, and that's where the trouble lies. The real issue won't be the new Annex 11 itself (which comes into effect on June 30th 2011) but the enforcement of Annex 11.

Matthew Theobald also presented on the work of the 'Leveraging Supplier Involvement' Special Interest Group, and this was the topic that sparked the greatest debate. There is certainly growing regulatory interest in the outsourcing of IT services, and regulatory concern when this is done badly. However, outsourcing can work, and Matthew's group is trying to provide some good models for how the involvement of suppliers can be justified and how it can be done well.

A big thanks again to the GAMP UK organising committee for another interesting and successful meeting. For anyone who hasn't been to a GAMP meeting, it's well worth getting along to one and finding out what really is happening in the industry - it's much better to be at the forefront of these trends than trying to play catch-up.

Thursday, January 13, 2011

Maintaining the Qualified State

There were a couple of unanswered questions in yesterday's "Maintaining the Qualified State: A Day In The Life of a Data Center" webcast.

As usual, we've taken the time to provide the questions and answers here in our blog.

Q. Where can I get a copy of that ISPE article on (the qualification of) Virtualization?


A. There is a page on the Business & Decision Life Sciences website which provides access to the webcast on qualifying virtual environments and which also has a link to the relevant page on the ISPE website (ISPE members can download the Pharmaceutical Engineering article free of charge).
The page can be accessed at http://www.businessdecision-lifesciences.com/2426-webcast-qualification-of-virtualized-environments.htm


Our next answer responds to two similar questions:
Q. Do your IT Quality People review every change control?
Q. Do you view the change control as an auditable item, and as such require them to be written so that an auditor can understand it – requiring clarity beyond the level of a technical SME?

A. Our Quality Manager (or Quality designee) reviews and approves every Planned or Emergency Change Control. Pre-Approved (like-for-like) changes are not reviewed and approved by the Quality Manager.

However, all of the Change Control processes (Pre-Approved, Planned and Emergency) are subject to Periodic Review by the Quality Group, so if there was any issue with the Pre-Approved change process this would be picked up then.

All changes are also peer reviewed by a technical subject matter expert, so we do not expect them to be written in such a way as to allow a technical ‘newbie’ to understand them. Changes are also reviewed by the System Owner and/or Client, to ensure that all impacts of the change are assessed (e.g. the client’s ability to access an application during a service outage).

The role of the Quality Manager is to ensure that the change control process is followed, not to ensure that the change is technically correct. Having said that, our Quality Team and independent Internal Auditor are technically competent and do understand what they’re reviewing, at least at a high enough level to understand what the change is about and why it’s required.
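The review routing described above can be summarized in a few lines. The role names are hypothetical labels for this sketch; the three change types are those described in the answer.

```python
def required_reviews(change_type):
    """Sketch of the review routing described above: Planned and
    Emergency changes are approved by the Quality Manager (or
    designee); Pre-Approved (like-for-like) changes are not, but
    every change type is covered by Quality periodic review."""
    reviews = ["technical_peer", "system_owner_or_client"]
    if change_type in ("Planned", "Emergency"):
        reviews.append("quality_manager")
    return reviews

for change_type in ("Pre-Approved", "Planned", "Emergency"):
    print(change_type, "->", required_reviews(change_type))
```

The design point is that the lighter route for Pre-Approved changes is acceptable only because the periodic review acts as a backstop for that process.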

We find that some external auditors have a technical background and are able to understand the content of most change controls. However, some do not have a technical background and cannot follow the technicalities of all of the changes. If this is ever an issue during a client audit, we ask the originator of the change control to explain it.

Thanks again to everyone who tuned in to the live webcast. If you missed the live event, the recording is still available at "Maintaining the Qualified State: A Day In The Life of a Data Center".