Technician
January/February 2017
Syama Adhibhatta
Mark DiMartino
Ricardo Falcon
Eric Haman
Kevin Legg
Robin Payne
Kevin R. Pipkins
Abdelqader Zamamiri
This document was written by members of the BioPhorum Operations Group IT and CPV team and reviewed extensively throughout the BPOG collaboration. As such, it represents the current consensus opinion of experts on the subject of process verification in the biopharmaceutical industry, but does not represent the details of any individual company's procedure. It is designed to be informative for industry members, regulators and other interested parties. It does not define statistical methods in detail, as those definitions are readily available elsewhere.
This article describes how signals can be developed and evaluated in support of CPV in the biopharmaceutical industry. Implementing CPV, in addition to meeting regulatory expectations, can also provide a basis for continuous improvement of production processes and, consequently, greater consistency in product quality and assurance of supply.
CPV involves the collection of data related to CQAs and CPPs, as well as analysis that reveals any statistical signals that become apparent over time. It is designed to detect variation within specifications. CPV is therefore about maintaining control within specifications and does not normally lead to a formal investigation. This document provides several examples of signal response and escalation within the quality system, where necessary, as a model of a risk-based approach to CPV.
In 2011, the FDA introduced guidance on the process validation life cycle, including continued process verification (CPV).1 While implementation is becoming a regulatory expectation, CPV can provide benefits beyond compliance by identifying opportunities to improve manufacturing processes and, ultimately, the reliability of drug quality and supply.
CPV is the third stage of the process validation life cycle. It is an ongoing evaluation of the parameters and attributes within the control strategy identified in Stage 1 (process design) and refined at the end of Stage 2 (process qualification). Its primary objective is to detect variability in the process and associated quality attributes that may not have been apparent when the process was characterized and introduced. CPV provides ongoing verification that the control strategy remains effective in maintaining product quality.
Additional parameters and/or attributes not considered critical to quality or not specified in the control strategy can also be included in the CPV program (or an associated process monitoring program) to enhance process learning and support investigations to identify the root cause and source of unexpected variability.
CPV may also identify opportunities to improve process performance and/or optimize process control. During CPV implementation, historical manufacturing data or characterization studies are evaluated using statistical methods to define signal criteria, set thresholds, and implement response procedures appropriate to ongoing operations. Signals are therefore selected to identify process behaviors of interest and to indicate when statistically significant variation may be affecting the process.
A critical aspect of CPV is establishing a procedure that provides a consistent response to these signals as they become apparent. Ideally, signals would be detected as soon as they occur, but this is not practical in most cases. In practice, a signal must be detected and a response mounted before the indicated trend leads to a true process deviation or out-of-specification (OOS) event. Best practice is to respond as quickly as possible, based on risk assessment.
The purpose of this document is to provide best practice guidance for responses to CPV signals that occur within the acceptable process control space.
There are five main steps in establishing CPV signals and associated response procedures.
- Define signal parameters and criteria
- Establish the frequency of monitoring and evaluation
- Establish evaluation and response criteria
- Escalate actions if necessary
- Document signals and responses
While this document primarily focuses on responses, a discussion of signal selection is required, as the magnitude of the response must match the severity of the signal.
Signal selection
Define parameters and criteria
In general, CPV signals evaluate performance against expectations based on prior process experience. The development and effectiveness of these signals depend on statistical techniques that are sensitive to the size and inherent variability of the existing data set. While not ideal, some signals arise from existing acceptable variation that was not fully captured and characterized in the initial data set. This is common in biopharmaceuticals, given the complexity of the manufacturing processes and raw materials. Although sometimes referred to as "nuisance signals" or "false positives," these signals can serve as learning opportunities over time and are used to grow the data set.
Definitions
Capability index: The ability of a process to deliver product within specification limits. Process capability can also be defined in statistical terms using the process performance index, Ppk, or the process capability index, Cpk.
Continued process verification (CPV): A formal process that makes it possible to detect variation in the manufacturing process that may affect the product. It provides opportunities to proactively control variation and ensure that, during routine production, the process remains in a state of control.1
Control strategy: A planned set of controls, derived from current product and process understanding, that assures process performance and product quality. Controls can include parameters and attributes related to drug substance and drug product materials and components, facility and equipment operating conditions, in-process controls, finished product specifications, and the associated methods and frequency of monitoring and control.
CPV limit: A statistically derived or scientifically justified threshold used in process trending. The threshold is intended to predict future process performance based on past performance experience and is not necessarily tied to process or patient requirements. In a capable process, CPV trend limits will be tighter than other limits, ranges, or specifications required by the molecule's control strategy.
Critical process parameter (CPP): A process parameter whose variability has an impact on a critical quality attribute and therefore should be monitored or controlled to ensure the process produces the desired quality.5
Critical quality attribute (CQA): A physical, chemical, biological, or microbiological property or characteristic that should be within an appropriate limit, range, or distribution to ensure the desired product quality.
Escalation: Responding to a signal by following the deviation/nonconformance system to investigate potential product or process impact.
Evaluation: An analysis of the data and related circumstances surrounding a statistical signal, with the intent of identifying the cause of the signal.
Noncritical parameter (NCP): A parameter with no quality impact at the process step in question. Note: it can have an impact on the performance of the next process step and can therefore be monitored for process control purposes.
Quality management system (QMS): The business processes and procedures used by a company to implement quality management. This includes, but is not limited to, laboratory and process deviation investigations, commitments, and change control.
Signal: An indication of unexpected process variation, triggered by the violation of a predetermined statistical rule, that is used to identify special cause variability within a process. Process behaviors with assigned signals include (but are not limited to):
- Outlier (e.g., Nelson Rule 1)
- Shift (e.g., Nelson Rule 2)
- Drift (e.g., Nelson Rule 3)
Screening: An initial review of a signal by SMEs to determine whether the signal should receive the default response or be escalated or de-escalated.
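As an illustration (not part of the source document), the three signal behaviors above can be sketched in plain Python as checks over a series of plotted points and precomputed control-chart values:

```python
def outlier(points, lcl, ucl):
    # Nelson Rule 1: any single point outside a control limit
    return any(p < lcl or p > ucl for p in points)

def shift(points, center, n=9):
    # Nelson Rule 2: n consecutive points on the same side of the center line
    for i in range(len(points) - n + 1):
        window = points[i:i + n]
        if all(p > center for p in window) or all(p < center for p in window):
            return True
    return False

def drift(points, n=6):
    # Nelson Rule 3: n consecutive points, all increasing or all decreasing
    for i in range(len(points) - n + 1):
        w = points[i:i + n]
        diffs = [b - a for a, b in zip(w, w[1:])]
        if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
            return True
    return False
```

In a CPV trending tool these checks would run at the review frequency defined in the plan, against limits and a center line established from historical data.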
Identify variation
The focus of CPV signals should always be on identifying variation within the specified limits defined in the control strategy. This applies when critical quality attributes (CQAs) are within specifications and critical process parameters (CPPs) are within demonstrably acceptable ranges. Signals that are outside of the control strategy (i.e., OOS) are investigated primarily within the quality management system (QMS).
This document is primarily concerned with evaluating responses to CPV signals within the design space. However, it is beneficial to capture and integrate lessons learned from both formal investigations and CPV-related evaluations to improve long-term process control.
Establish the frequency of monitoring and evaluation
As stated above, the ideal scenario for CPV monitoring is to identify signals in real time during manufacturing and react accordingly. However, this is not always practical, as many signals require multiple data points (i.e., drift or shift signals), and specific data capture and analysis technologies are needed to perform the calculations. Since these signals are, by definition, within the specified limits defined in the control strategy, the inherent risk of decoupling signal identification and response from batch release is low, allowing a more periodic review.
The frequency of the review must be established with the follow-up plan and must consider:
- Relative risk that a parameter or attribute will deviate from its acceptable range
- Manufacturing frequency
- Level of knowledge of the historical process
- Technological capability of the facility to collect and analyze the data
Establish evaluation and response criteria
A CPV signal is designed to identify possible new variation or unexpected patterns in the data. As these conditions are within the control strategy, the signals should not automatically be considered formal deviations from Good Manufacturing Practices (GMP). However, there may be cases where a signal is significant enough to indicate a potential product quality or validation impact that requires tracking and resolution within the QMS. When this happens, a cross-functional data review and escalation procedure must be in place to ensure the signal is handled correctly.
This document provides an example procedure that can be used or adapted to respond to CPV signals using risk-based decision making to determine when a signal should be escalated. The procedure has four key elements that should ideally be present as prerequisites (Table A).
Table A: Key elements of a CPV signal response procedure

Element | Description |
---|---|
CPV plan with identified signal criteria | The plan describes the parameters analyzed for CPV and the rules used to identify signals. This provides guidance on which process behaviors warrant further analysis. |
Default responses to signals and parameters | Default responses are predetermined actions for each parameter/signal combination. They are based on the criticality of the parameter and the nature of the signal, so that the response is commensurate with the signaled level of risk. |
Signal data review and escalation process | Signals and their default responses should be reviewed periodically by a cross-functional team of SMEs to assess the adequacy of the default response and determine whether a change (escalation or de-escalation) is required. |
Documentation system | The signal response and its justification must be documented and approved. |
Table B: Signal-response actions

Action | Description |
---|---|
No action | No response needed. This response is associated with signals that are not considered significant enough to warrant further root cause analysis and do not require corrective or preventive action (CAPA). Document the decision and justification in accordance with approved procedures. |
Evaluation | This response is associated with signals of unexpected variation from historical processing experience that are considered significant enough to warrant a technical evaluation to understand the cause of the variation, but not significant enough to warrant a product quality impact assessment. A subsequent CAPA may be required. Evaluations span a broad spectrum of complexity, from a simple review of a batch record or incoming raw materials to a complex, collaborative, cross-functional assessment. The scope of the evaluation is based on the technical input of process SMEs. |
QMS escalation | This response is associated with signals of unexpected variation from historical processing experience that are considered significant enough to warrant a technical evaluation to assess potential product/validation impact and establish root cause. A subsequent CAPA may be required. The signal response is tracked within the QMS and requires product impact/validation analysis, root cause identification, and any associated CAPA within the timeframes required by the relevant quality procedures. |
CPV plan
Upon completion of Stage 2, process performance qualification (PPQ),2 a CPV plan should be established with the following components, including a rationale for each:
- Parameters and attributes to monitor
- CPV thresholds for each combination of parameters and attributes
- Frequency of trend assessments
- Statistical signals to evaluate
- Default responses for each parameter and signal combination
The rationale may be risk-based and should include an explanation of which process behaviors may warrant further analysis. Ideally, each default response should be determined from a risk-based strategy that considers parameter criticality, the nature of the signal, and the performance/capability of the process parameter.
The CPV plan should reference company-specific procedures that specify reporting formats, designate escalation procedures, identify roles and responsibilities for CPV trending, and define terms.3 As an illustration, this document uses the signal-response action terms shown in Table B.
Standard responses to signals
Table C provides an example of a risk-based strategy that can be used to determine the minimum default responses for each parameter and signal in the CPV plan. The example uses classic signals, which indicate deviations from established behavior for independent, normally distributed data. Actual strategies may vary by CPV plan. The default response assigned to an individual signal and parameter combination may deviate from the proposed default if adequate justification is provided in the plan.
Table D illustrates how standard responses and modifications can be presented in a CPV plan.
Escalation process
During the execution of the CPV plan, data is collected and analyzed at a predefined frequency. Once a signal is identified, a cross-functional team (CFT) of subject matter experts (SMEs) with knowledge of the process, manufacturing operations, quality control, and/or quality assurance reviews the signal to determine the appropriate response. Others such as quality control laboratories, regulatory sciences or continuous improvement and process development may also participate.
The CFT reviews the signal against the default response and determines whether it is appropriate to escalate to a higher level or de-escalate to a lower response level. Factors that may be considered when changing the default response include (but are not limited to) the possible outcomes of the CFT review shown in Table E.
When the product is manufactured at more than one site, it is advisable to have a system to share CPV data and/or records. In all cases, changing any standard signal response requires proper justification and documentation.
Document signals and responses
Once the CFT screening is complete and the results are aligned, the team takes the recommended actions or ensures they are taken. If the action is escalation, the appropriate quality system document is initiated and current procedures are followed. If the action is an evaluation, further analysis or experimentation is performed to determine the cause of the signal.
The results of any evaluation must be documented in accordance with GMP principles. If no action is taken, that decision and justification must also be documented, but no further action is required.
Signal and response documentation generally falls into one of the types described in Table F.
Any changes to control limits, signals, or processes resulting from the evaluation should be managed by the system most appropriate for the change (i.e., change control or CAPA). Quality approval is required to close a signal response for all three categories (escalation, evaluation, no action).
Table C: Example risk-based default responses by parameter criticality and signal type

Signal | Signal type | CQA* | CPP | NCP |
---|---|---|---|---|
Outlier | Nelson Rule 1: 1 point outside a control limit (Western Electric Rule 1: 1 point outside a control limit) | Evaluation if process capability is acceptable;† escalation if process capability is marginal | Evaluation | No action |
Shift | Nelson Rule 2: 9 consecutive points on the same side of the center line (Western Electric Rule 4: 8 consecutive points on the same side of the center line) | Evaluation if process capability is acceptable; escalation if process capability is marginal | Evaluation if process capability is marginal | No action |
Drift | Nelson Rule 3: 6 consecutive points, all increasing or all decreasing (Western Electric Rule 5: 6 consecutive points, all increasing or all decreasing) | Evaluation if process capability is acceptable; escalation if process capability is marginal | Evaluation if process capability is marginal | No action |
* Where existing procedures require formal quality investigations, those procedures supersede this strategy (e.g., OOT/OOS). Whenever possible, CPV plans should align with OOT procedures.
† Acceptable and marginal process capability can be defined within a procedure in statistical terms.3,4
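As an illustration only, the risk-based defaults of Table C can be encoded as a simple lookup. The behavior of a CPP shift or drift signal when capability is acceptable is not stated explicitly in the table; "no action" is assumed here for the sketch:

```python
def default_response(param_class, signal, capability):
    """Hypothetical encoding of Table C's risk-based defaults.

    param_class: "CQA", "CPP", or "NCP"
    signal:      "outlier", "shift", or "drift"
    capability:  "acceptable" or "marginal"
    """
    if param_class == "CQA":
        # CQAs: evaluation when capability is acceptable, escalation when marginal
        return "escalation" if capability == "marginal" else "evaluation"
    if param_class == "CPP":
        if signal == "outlier":
            return "evaluation"
        # Shift/drift: evaluation only when capability is marginal (assumed
        # "no action" otherwise; the table leaves this case implicit)
        return "evaluation" if capability == "marginal" else "no action"
    # NCPs: no action regardless of signal type
    return "no action"
```

An actual CPV plan would override individual combinations with documented justification, as Table D illustrates.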
Table D: Example of standard responses and modifications in a CPV plan

Parameter | Outlier | Mean shift | Drift | Comment |
---|---|---|---|---|
NCP1 | No action | No action | No action | N/A |
NCP2 | No action | Evaluation | Evaluation | No impact on quality. Evaluate shifts and drifts to limit business impact. |
CPP1 | No action | Evaluation | Evaluation | No action for outliers due to high process capability. |
CPP2 | Evaluation | Evaluation | Evaluation | Marginal process capability. |
CQA1 | Escalation | Escalation | Escalation | Escalate all signals due to marginal process capability. |
CQA2 | Evaluation | Evaluation | No action | No action for drift signals due to inherent drift in the process. No escalation required for outliers or mean shifts due to acceptable process capability. |
Table E: Possible outcomes of the CFT review

Factor | Potential outcome |
---|---|
Comparison of the signal with historical performance | Escalate if the data are significantly different from historical data or are unusual based on SME knowledge of process performance. |
Proximity of the data point to specification limits | Escalate if the CFT concludes there is an OOS risk. |
Recurrence of similar signals | Escalate to determine the cause. |
Signals for multiple parameters and/or attributes in the same batch | Escalate to determine the cause and any potential process impacts not highlighted by the CPV trend. |
Related events within the quality system | De-escalate if an assignable cause was identified and investigated within the quality system. |
Related planned deviation, technical study, or validation protocol | De-escalate if the signal is attributable to the related study (for example, exceeding an existing time limit as part of validating a unit operation hold-time extension). |
Table F: Documentation types for signals and responses

Document type | Description |
---|---|
Form | Can be used batch by batch to explain special causes and document their effect on the product and/or process. Form feedback can also be summarized in CPV reports. Information can also be captured in a database, usually outside the QMS. |
Meeting minutes | Used to document regular reviews of CPV trends and signals, and the discussions of the CFT responsible for the reviews. If part of the established periodic CPV review, the meeting minutes must be approved by QA and stored in a formal document control system. |
Technical report | Can be used to document an evaluation as directed by the CFT. Typically, a report documents additional data collection and/or analysis outside the scope of a periodic CPV report. |
CPV report | Used to summarize all signals and assignable causes. It can include brief discussions of easily explained signals that do not require evaluation in a technical report or quality record. |
Quality system record | When the CFT decides to escalate to the QMS, a quality system record is initiated to track the root cause investigation and product quality impact assessment. This record may involve different levels depending on the result of the root cause investigation and the outcome of the product impact assessment. It must be referenced in the CPV report. |
Example responses
The following scenarios are frequently encountered when conducting CPV activities and offer guidance on how to evaluate and respond to signals seen in similar circumstances. They fall into three categories:
- Standard response vs. modulated response
- Addressing long-term special cause variation during control chart setup
- Signals indicating incorrect control chart setup
Standard response vs. modulated response
In each of these three examples, the CFT, after routine review, must decide whether to proceed with the prescribed standard response or to modulate the response.
Example 1: Default response
The CFT is monitoring a noncritical process parameter (NCP) using a control chart. By definition, NCPs have no effect on any critical quality attribute over a wide range of operation. Examples include step yields, in-process hold durations for stable intermediates, or final cell density in seed passages. NCPs are typically monitored as indicators of process performance or consistency that may have practical or financial implications. While trend signals for such parameters have no effect on quality, monitoring them provides an opportunity to learn and gather process knowledge. Signals observed for NCPs may indicate suboptimal performance or unwanted process changes.
1. US Food and Drug Administration. "Guidance for Industry. Process Validation: General Principles and Practices." 2011. http://www.fda.gov/downloads/Drugs/.../Guidances/UCM070336.pdf
2. Boyer, Marcus, Joerg Gampfer, Abdel Zamamiri, and Robin Payne. "A Roadmap for Implementing Continuous Process Verification." PDA Journal of Pharmaceutical Science and Technology 70, no. 3 (May–June 2016): 282–292.
3. BioPhorum Operations Group. "Continuous Process Verification: An Industry Position Paper with a Sample Protocol." 2014. http://www.biophorum.com/wp-content/uploads/2016/10/cpv-case-study-print-version.pdf
4. Oakland, John. Statistical Process Control, 6th ed. Routledge, 2007.
5. International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use. "Development and Manufacture of Drug Substances (Chemical Entities and Biotechnological/Biological Entities): Q11." May 1, 2012. http://www.ich.org/fileadmin/Public_Web_Site/ICH_Products/Guidelines/Quality/Q11/Q11_Step_4.pdf
Figure 1: NCP Experiences Infrequent Outliers
In this scenario, the control chart shown in Figure 1 indicates that the monitored NCP typically stays within the control limits, with some exceptions where outliers are observed. The CFT believes this NCP is well understood, and all previous excursions have been explained. Under the CPV plan, the default response to observed outliers is no action.
The CFT must decide whether to escalate the response to the most recent outlier. Examining the figure and considering SME input, the CFT concluded that the magnitude of the excursion was not alarming and the frequency of outliers did not appear to have increased. Given this conclusion, a reasonable course of action was to follow the standard "no further action required" response. An additional consideration is that the observed excursion may be part of common cause variation, so a reassessment of the control limits may be warranted.
Example 2: Escalated response
In this scenario (Figure 2), a well-behaved NCP is monitored for outliers, shifts, and drifts. An outlier signal was observed for the last batch manufactured. Upon review, the CFT concluded that the magnitude of this excursion was significant compared to recent manufacturing experience. The CFT was also concerned that outliers for this parameter were encountered infrequently, so there was little process understanding of their impact, especially one of this magnitude. While the default response to outliers for this particular NCP is no action, the CFT determined that such a significant outlier should be escalated to an evaluation to determine its cause. Escalation to the QMS was not considered necessary, as the associated critical parameters were well within the control strategy.
Example 3: De-escalated response
In this example, a CPP is being monitored for outliers, shifts, and drifts. The control chart in Figure 3 shows that the CPP experienced an outlier signal for the third most recent batch produced. However, the dominant observation from the chart is that this CPP performs very well from a statistical standpoint.
Figure 4 shows that the process capability for this parameter is marginal (1.00–1.33). However, since the excursion is in the direction away from the nearest specification limit (the lower specification limit, or LSL), it does not pose a risk to process capability.
Under the CPV plan, the default response to outliers for this CPP is evaluation. To determine whether the default response was appropriate, the CFT considered several factors: the magnitude and frequency of outliers, the direction of the excursion relative to process capability, and the absence of a similar outlier in the two subsequent batches. The CFT determined that this single outlier of relatively small magnitude is within the specification limits and represents a low risk to product quality. The team therefore decided that no further evaluation was necessary, de-escalating the response in this case to no action.
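For reference, the process performance index used in these capability judgments can be computed from the data as below. This is an illustrative sketch: the acceptable/marginal thresholds in the comment reflect the 1.00–1.33 band mentioned above, not a prescribed standard.

```python
import statistics

def ppk(data, lsl, usl):
    """Process performance index: distance from the process mean to the
    nearest specification limit, in units of three overall standard
    deviations. Here, values of 1.33 and above would be read as
    acceptable capability and 1.00-1.33 as marginal."""
    mean = statistics.mean(data)
    sd = statistics.stdev(data)  # overall (long-term) standard deviation
    return min(usl - mean, mean - lsl) / (3 * sd)
```

Because Ppk is driven by the nearest specification limit, an excursion away from that limit (as in Example 3) does not degrade the index.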
Figure 2: NCP experiencing a large outlier signal
Figure 3: CPP with marginal capacity and a single outlier
Figure 4: Analysis of process capacity for a CPP with marginal capacity
Addressing long-term special cause variation
Many parameters and attributes experience long-term special cause variation, including raw material changes over time, aging equipment, campaign-to-campaign variation, and cumulative changes in processes, equipment, materials, and test methods. How these long-term variations are handled depends on the nature of the variation, its frequency, and the ability to identify or predict it.
Treatments generally fall into one of two categories: if the special cause can be identified and does not change too frequently, the control chart can be stratified by the levels of that special cause; otherwise, a relatively large data set should be used to define the control chart limits. It must be large enough to fully express the voice of the process, including the effect of special causes on long-term variation.
Control chart stratification is desirable when special causes of long-term variation are easily identified and have low frequency. This allows for a statistically significant number of data points within each stratum. Examples of special causes that can be addressed by layering include (but are not limited to): variances due to campaign manufacturing; variations caused by the introduction of significant changes in the manufacturing process, equipment or test methods; and variations due to changes in materials (such as chromatography resins) affecting multiple subsequent batches.
If, on the other hand, material changes are very frequent, or long-term variation is gradual due to small cumulative changes, then control chart stratification becomes impractical and challenging. The best approach in the case of very frequent changes is to build a robust control chart with a relatively large data set that reflects these variations. Temporary control limits are initially established and then recalculated periodically, incorporating additional data until SMEs are satisfied that the data set expresses the true voice of the process, including long-term variation. The CFT may also consider a "no action" default for shift signals, as they are expected for parameters that are sensitive to long-term variation.
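The periodic recalculation described above can be sketched as follows. This is an illustrative sketch, not a prescribed procedure; the limits use the overall (long-term) standard deviation of the pooled data so that known long-term variation is folded in, whereas a classical individuals chart would estimate sigma from the average moving range.

```python
import statistics

def control_limits(data):
    """Provisional 3-sigma control limits from the pooled data set,
    based on the overall (long-term) standard deviation. Recalculated
    periodically as new batches accumulate, until SMEs judge that the
    data set expresses the true voice of the process."""
    mean = statistics.mean(data)
    sd = statistics.stdev(data)
    return mean - 3 * sd, mean + 3 * sd
```

Each recalculation would be documented and approved in line with the CPV plan before the new limits are applied to trending.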
Example 4: Control chart stratification
In this example, a CPP for a product manufactured in campaigns is monitored for mean shifts, drifts, and outliers. Figure 5 shows a control chart of the CPP across three campaigns of 10 runs each (A, B, and C on the upper axis). In this case, it is normal for several months to elapse between successive campaigns, and different products/processes often run in the interim.
Signals of multiple mean shifts and outliers became apparent as production continued. Since the parameter was a CPP, each signal was met with a standard "evaluation" response.
The important consideration is whether there are subtle changes from campaign to campaign. These can result from different batches of raw materials, new column packings, etc. All are considered normal process variation, but viewed across campaigns they can appear as marked process shifts. To account for the campaign effect on the CPP and avoid false signals, the most appropriate treatment was to stratify the control chart by campaign, as shown in Figure 6.
Important considerations:
- All data are within specifications (0.84–1.60). This can be confirmed through capability analyses or simply checked against the specifications.
- Data within each campaign are considered "in control," or stable. There are no run rule violations as described in the previous sections.
- The variation within each campaign is similar. Homogeneity of variance between campaigns can be verified using various statistical tests. The most important consideration is that process variation does not worsen (i.e., wider control limits) with each successive campaign.
- Any changes between campaigns should be recognized and documented in CPV plans and/or more formally in a QMS.
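A simple first check on homogeneity of variance between two campaigns is the ratio of their sample variances; a sketch in plain Python (illustrative only, and no substitute for a formal test):

```python
import statistics

def variance_ratio(campaign_a, campaign_b):
    """Ratio of the larger sample variance to the smaller. Values near 1
    suggest comparable within-campaign variation; a formal F-test or
    Levene's test would provide a p-value for the comparison."""
    va = statistics.variance(campaign_a)
    vb = statistics.variance(campaign_b)
    return max(va, vb) / min(va, vb)
```

Such a check could support the stratified-chart review above by flagging a campaign whose spread is markedly wider than its predecessors.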
Figure 5: CPP in three campaigns of 10 runs
Figure 6: CPP stratified by campaign
Example 5: Accounting for Long-Term Variation in Defining Control Chart Limits
In this scenario, a CPP has been found to be sensitive to many known and unknown long-term special cause variations. The top half of Figure 7 shows a scatterplot of the CPP with a line representing a moving average. The figure is also segmented into three parts representing three different column packings. The moving average of this CPP shows a slow and somewhat stepwise variation within and between the three segments. Furthermore, the variation within each segment appears to be of the same magnitude, if not greater, than the variation between segments. In this case, stratifying the control chart might help avoid some false positive signals, but it would not eliminate them.
Given the long-term dynamics of this CPP, setting control chart limits with a limited data set would lead to excess false positives. In cases like this, it is therefore advisable to use the largest practical data set when calculating limits, so that they account for the long-term natural variation of this CPP. The long-term standard deviation should also be used when calculating the limits. The lower part of Figure 7 shows a control chart with limits calculated using the entire data set, without stratification.
Figure 7 indicates that despite the observed long-term variation for this CPP, it is actually a well-behaved parameter. There are few signs of trends and the process capability is remarkably high given the width and placement of the control chart compared to the specifications.
Figure 7: Long-term variation in the definition of control chart limits
Signs Indicating Incorrect Control Chart Setup
When calculating control chart limits for trending, the idea is to collect a sufficiently large data set that captures the common-cause variation expected to persist into the future. In practice, however, most CPV plans for new products are created with a limited number of lots. The control limits may therefore need to be updated as soon as a large enough data set becomes available.
Even for legacy products, there are cases where the historical data set is not representative of current manufacturing due to cumulative changes in processes, equipment, materials, or test methods. While it is not advisable to continually and arbitrarily re-baseline the data set and recalculate control limits, it is equally inadvisable to apply control limits that do not represent the current manufacturing process. A balanced approach is preferred, in which control limits are evaluated regularly and updated as necessary.
The mean and variation of monitored parameters and attributes are subject to change; changes can also be introduced intentionally as a result of process optimization or continuous improvement. Ideally, the mean should move toward a predetermined target and the variance should decrease over time, as process knowledge is gained and feedback controls are added through active monitoring.
One of the advantages of control charts is that their signals can alert practitioners to changes in the mean or variance. A valuable run rule that is not often exploited in the industry is 15 consecutive data points all within ±1 standard deviation of the mean, indicating that the variation has decreased over time. This behavior often appears as a wide gap between the control limits and the data.
Persistent outliers in one direction and/or persistent shift signals may also indicate long-term changes in the mean. In that case, the parameter in question should be examined to determine whether the change is acceptable and new control limits are needed, or whether the change is unacceptable and additional measures are needed to bring the mean back to target. The following two examples illustrate this type of situation.
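Both run rules described above can be evaluated programmatically. The sketch below, on a hypothetical series, implements the 15-points-within-±1σ rule for reduced variation and a common one-sided rule (nine consecutive points on the same side of the center line, as in the Nelson/Western Electric rule sets) for a sustained shift in the mean; the window lengths and data are illustrative, not prescriptive.

```python
import numpy as np

def within_one_sigma_rule(x, center, sigma, run=15):
    """Indices ending a run of `run` points all within +/-1 sigma of the
    center line -- a sign that process variation has decreased."""
    inside = np.abs(np.asarray(x, dtype=float) - center) < sigma
    return [i for i in range(run - 1, len(x))
            if inside[i - run + 1 : i + 1].all()]

def one_sided_run_rule(x, center, run=9):
    """Indices ending a run of `run` points on the same side of the
    center line -- a sign of a sustained shift in the mean."""
    side = np.sign(np.asarray(x, dtype=float) - center)
    return [i for i in range(run - 1, len(x))
            if side[i] != 0 and (side[i - run + 1 : i + 1] == side[i]).all()]

# Hypothetical series: a few early points at historical spread,
# followed by much tighter recent data hugging the center line
x = [1.0, 1.3, 0.7, 1.1] + [1.02, 0.98, 1.01, 0.99] * 5

hits_var = within_one_sigma_rule(x, center=1.0, sigma=0.2, run=15)
hits_shift = one_sided_run_rule(x, center=1.0, run=9)
print(hits_var)    # reduced-variation signal fires in the tight stretch
print(hits_shift)  # no sustained one-sided shift in this series
```

In a CPV trending tool these flags would prompt CFT review rather than automatic action, consistent with the risk-based approach described earlier.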
Example 6: Decreasing deviation over time
In this example, when monitoring a CPP, the CFT observed a large gap between the control limits and the data, which clustered around the mean, as shown in Figure 8. The highlighted signals on the plot indicate that 15 or more data points are within ±1 standard deviation of the mean.
Figure 8: Wide gap between control limits and data
Figure 9: Updated limits due to reduced variance
Figure 10: Correction of control limits for process improvement, before (above) and after (below)
Such a scenario can result from one of two things: 1) a phenomenon called "stratification," in which samples are systematically drawn from multiple distributions, or 2) a decrease in process variation, indicating a significant change in the process. Both require the control limits to be reassessed.
After further evaluation, the CFT ruled out stratification. A review of the chart's history indicated that the control limits had been established using a limited data set when the process was first transferred to the site. Documented evidence showed that numerous process improvements and tighter controls had been implemented over time. The CFT therefore concluded that the current limits were inadequate and that new limits were needed.
Figure 9 shows the same data set with control limits that reflect the true nature of the data. In this case, the control limits were recalculated to reflect the process improvements, resulting in narrower limits.
Example 7: Reducing variance and mean centering over time
In this example, a process underwent an improvement project to optimize performance and reduce variation. When implementing the changes, the project team decided to retain the existing control limits and monitor performance over 15–20 batches to confirm that the change was successful.
Figure 10 shows the data from this process, where the change was implemented around batch 58. Around batch 70, the CFT reviewed the data and assessed the change as successful. At around lot 110, after a period that captured additional sources of variation such as equipment maintenance and critical raw-material lots, the control limits were recalculated and are now considered adequate for future production.
Summary
CPV is an important initiative for the biopharmaceutical industry. Compliance means that statistical signals revealed by CQAs and CPPs must be handled appropriately. CPV helps maintain product quality, but it is different from lot release. Since the primary purpose of CPV is to protect production from long-term sources of variation, escalation to the QMS is likely to be rare.
Good practice for CPV signals involves defining the attributes and parameters to be monitored, together with the associated signaling criteria. A standard set of responses can be defined, but it is important that signals be reviewed by a CFT familiar with the product and process. This allows the complexities of the manufacturing process to be taken into account. Signals can be escalated or de-escalated from their default responses; the rationale for these decisions should be recorded. The review process also provides an opportunity for an organization to deepen its understanding of its manufacturing process and improve it over time.
Acknowledgments
Andrew Lenz (Biogen), Martha Rogers (AbbVie), Bert Frohlich (Shire), Parag Shah (Genentech/Roche), Ranjit Deshmukh (AstraZeneca), Sinem Oruklu (Amgen), Susan Ricks-Byerley (Merck), and Randall Kilty (Pfizer).