Monday, November 1, 2010

IT Infrastructure Qualification - Your Questions Answered

There were a couple of questions relating to last week's "Pragmatic Best Practices for IT Infrastructure Qualification" webcast that we didn't get around to answering... so here are the questions and the answers.

Q.  What are the qualification strategies to be followed for infrastructure for which the applications it will support are unknown?

There are three basic approaches here:

The first is to qualify infrastructure platforms and components on the assumption that the applications they support are of high risk severity. This means that all infrastructure platforms and components are qualified such that they can support any application, with no additional qualification activities required at a later date. This is the approach taken by Business & Decision for shared platforms and components in our own data center, and it provides us with the flexibility needed to meet changing customer requirements.

While this would appear to be ‘overkill’ to some, because qualification is really based on well-documented good engineering practice (as per ASTM E2500) there is relatively little additional overhead over and above what any Class A data center should already be doing to specify, build/install and test its infrastructure (this was covered in more detail in our webcast "A Lean Approach to Infrastructure Qualification").

The second approach is to qualify specific platforms and components for the risk associated with those applications that are known. This is possible for infrastructure that is dedicated to defined applications, e.g. specific servers, storage devices etc. This can reduce the level of documentation in some cases, but it means that whenever a change is made at the application layer, the risk associated with the infrastructure may need to be revisited. While additional IQ activities would not be required, it may be necessary to conduct additional OQ activities (functional or performance testing) on the infrastructure components prior to re-validating the applications. This requires an on-going commitment to more rigorous change control impact assessment (a sketch illustrating this follows the third approach below) and can slow down the time taken to make changes. While Business & Decision might consider this approach for some client-specific platforms and components, our clients generally prefer the responsiveness the first approach provides.

The third approach (taken by some very large Regulated Companies in their own data centers) is to qualify different platforms according to different levels of risk, e.g. there could be a cluster of servers, virtual machines and network attached storage dedicated to high risk applications, with the same (or a similar) architecture dedicated to medium and low risk applications. This is probably the best solution because it balances flexibility and responsiveness with scalable, risk-based qualification, but it can tend to lead to over-capacity and is only really a good solution in large data centers.
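As a purely illustrative sketch of how the second and third approaches scale with risk, the short Python snippet below maps a hypothetical risk tier to a set of qualification activities and flags when a change at the application layer outgrows the tier a component was qualified to. The tier names and activity lists are assumptions made for illustration, not a prescribed standard.

# Hypothetical mapping of application risk tier to the qualification
# activities applied to the infrastructure that supports it.
QUALIFICATION_SCOPE = {
    "low":    ["design/specification review", "IQ"],
    "medium": ["design/specification review", "IQ",
               "configuration verification"],
    "high":   ["design/specification review", "IQ",
               "configuration verification",
               "OQ (functional/performance testing)"],
}

RISK_ORDER = {"low": 0, "medium": 1, "high": 2}

def retest_needed(qualified_tier: str, new_application_tier: str) -> bool:
    """True if an application change exceeds the risk tier the
    supporting infrastructure was originally qualified for."""
    return RISK_ORDER[new_application_tier] > RISK_ORDER[qualified_tier]

# Example: a server cluster qualified for medium-risk applications is
# asked to host a high-risk application, so additional OQ is needed
# before the application is (re)validated.
if retest_needed("medium", "high"):
    print("Additional OQ required:", QUALIFICATION_SCOPE["high"])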

Q. Each component of our network software is individually validated.  What is the best strategy for qualifying the network itself?

The network isn’t really qualified in its entirety; rather, it is qualified by qualifying each of the network platforms and components. This may include some functional testing of platforms or components, but the correct functioning of the network as a whole is really verified by validating the applications that run on it.

The network can essentially be considered to be made up of the non-functional cables, fiber etc., the hardware (which may include firmware) and the software components that are necessary to make it work.
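To make this decomposition concrete, here is a small illustrative mapping (in Python) from each layer of the network to the verification it typically receives, as described in the paragraphs that follow. The groupings and labels are a sketch, not an exhaustive taxonomy.

# Illustrative decomposition of a network into layers, with the
# verification each layer typically receives.
NETWORK_LAYERS = {
    "passive": {
        "examples": ["backbone cables", "fiber", "hubs"],
        "verification": ["IQ (installation details only)"],
    },
    "hardware": {
        "examples": ["bridges", "switches", "firewalls"],
        "verification": ["IQ", "configuration verification",
                         "functional testing (OQ)"],
    },
    "software": {
        "examples": ["software-based firewalls", "server monitoring",
                     "time synchronization"],
        "verification": ["IQ", "configuration parameter verification",
                         "functional testing (OQ) against requirements"],
    },
}

for layer, info in NETWORK_LAYERS.items():
    print(f"{layer}: {', '.join(info['verification'])}")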

The software components (e.g. software-based firewalls, server monitoring software, time synchronization software etc.) should be designed, specified (including all configuration parameters), installed, configured and verified. Verification will include installation qualification (IQ), verification of configuration parameters, and may also include some functional testing (OQ) based on meeting the functional requirements of the software.
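As a rough illustration of configuration parameter verification, the sketch below compares installed values against an approved specification and records any deviations. The parameter names (e.g. the time synchronization server address) and the values are hypothetical; in practice the expected values come from your approved configuration specification and the actual values from the component itself.

# Hypothetical configuration verification: compare installed values
# against the approved configuration specification and record any
# deviations for the verification report.
approved_spec = {
    "ntp_server": "10.0.0.1",        # time synchronization source (assumed)
    "sync_interval_s": 64,           # polling interval (assumed)
    "firewall_default_policy": "deny",
}

installed = {
    "ntp_server": "10.0.0.1",
    "sync_interval_s": 128,          # deviates from the specification
    "firewall_default_policy": "deny",
}

deviations = {
    key: (expected, installed.get(key))
    for key, expected in approved_spec.items()
    if installed.get(key) != expected
}

for key, (expected, actual) in deviations.items():
    print(f"DEVIATION: {key}: specified {expected!r}, found {actual!r}")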

Hardware components such as bridges, switches, firewalls etc. will be designed, specified, built/installed and verified. Verification will include IQ, and if the ‘hardware’ component includes software (which is often the case) there will again be an element of configuration parameter verification and some functional testing. Business & Decision usually combine the IQ and OQ into a single verification test, simply for efficiency.
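A minimal sketch of what such a combined IQ/OQ verification record might look like follows. The fields, model names and checks are assumptions made for illustration, not a template from any standard.

# Hypothetical combined IQ/OQ record for a network hardware component,
# capturing installation details (IQ) and functional checks (OQ)
# in a single verification test.
from dataclasses import dataclass, field

@dataclass
class VerificationRecord:
    component: str
    model: str
    serial_number: str
    firmware_version: str                           # IQ: firmware as found
    iq_checks: dict = field(default_factory=dict)   # e.g. rack location verified
    oq_checks: dict = field(default_factory=dict)   # e.g. failover test passed

    def passed(self) -> bool:
        return all(self.iq_checks.values()) and all(self.oq_checks.values())

record = VerificationRecord(
    component="core switch",
    model="EXAMPLE-48P",                            # assumed model name
    serial_number="SN123456",
    firmware_version="2.4.1",
    iq_checks={"rack location matches design": True,
               "firmware version matches specification": True},
    oq_checks={"VLAN configuration verified": True,
               "port failover test": True},
)
print("Verification passed" if record.passed() else "Verification failed")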

The basic network (backbone cables, fiber, fiber switches and really ‘dumb’ network components with no configurable software element, such as hubs) will again be designed, specified, built/installed and verified, but verification will be limited to a simple IQ (recording installation details such as cable numbers, serial and model numbers etc.). This can of course be done retrospectively.
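For these passive components, the IQ really amounts to an installation inventory. Below is a sketch of recording such details to a simple CSV file; the column names and values are illustrative only.

# Hypothetical IQ inventory for passive/'dumb' network components:
# just record the installation details, no functional testing.
import csv

rows = [
    # component,      identifier,  model,        serial,    location
    ("backbone cable", "CBL-0001", "Cat6A",      "n/a",     "rack A1 to rack B3"),
    ("fiber run",      "FIB-0042", "OM4 LC-LC",  "n/a",     "floor 1 to floor 2"),
    ("hub",            "HUB-0007", "EXAMPLE-8P", "SN99881", "rack B3, U12"),
]

with open("iq_inventory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["component", "identifier", "model", "serial", "location"])
    writer.writerows(rows)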

All of the above can be scaled, based upon risk as discussed in the answer above.

Remember, if you have a question that you'd like us to answer, you can contact us at validation@businessdecision.com or submit your questions via the 'Ask An Expert' page on our Life Sciences website.
