PPMD Community

A couple of months ago we posted a write-up about some of the efforts PPMD is undertaking to help improve clinical trial design.

For example, DuchenneConnect keeps patients up to date on clinical trials and allows patient-level data to be collected and aggregated to answer clinical questions, while C-Path’s D-RSC project is creating a disease progression model that will help us understand the natural history of the disease and better inform clinical trial design.

But how does data aggregation really work? What are the nuts and bolts of a disease progression model? And if you are aggregating datasets, how can they all be compatible? Don’t some databases use different terms?

Our friends at D-RSC spell out some concepts and terms to make it easier to understand all that PPMD is doing. In the end, operationalizing these concepts is what we must do if we are to accelerate clinical trials and find a therapeutic that works.


What are CDISC Standards?


The Clinical Data Interchange Standards Consortium (CDISC) is a nonprofit organization that was set up to develop a standardized way to structure and report clinical data. CDISC standards provide a common language, so that people collecting and analyzing data in different clinical settings can compare and combine data in a scientifically meaningful way. Each element of a measurement is reported in a standardized way, so the same test or measurement result is represented identically at different sites, and any differences in how it was done are documented. Using this language allows researchers to use the data collected in an individual natural history study or clinical trial in new ways, maximizing the value of that data. For example, standardizing the data collected in different studies makes it possible to build databases aggregated from multiple studies, which can then be queried for new insights into the disease. Standardized reporting also allows regulatory authorities to review the data more quickly and efficiently, which is why the FDA will require all clinical data in new drug applications and biologic licensing applications to be submitted in this format from 2017 onward.


While many of the medical concepts collected in Duchenne trials are common to those collected in other disease areas (age, height, weight, etc.), these studies also have many measurements that are specific to Duchenne, such as the North Star Ambulatory Assessment or specific timed functional tests. These are not currently described in the CDISC language, so one of the goals of the Duchenne Regulatory Science Consortium (D-RSC) is to develop such standards, have them reviewed and edited by the community, publish them as CDISC standards content, and make them available to all. This will help standardize data collection, standardize regulatory submissions, and help aggregate existing clinical data for data queries.
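As a toy illustration of why a shared vocabulary matters (the test names, codes, and units below are invented for this sketch, not real CDISC controlled terminology), consider two sites that record the same timed functional test under different names and units. A small mapping step makes their records directly comparable:

```python
# Hypothetical site-specific records for a 10-meter walk/run test.
site_a = {"test": "10m walk/run", "result": 7.2, "unit": "sec"}
site_b = {"test": "Ten Meter Run", "result": 7200, "unit": "ms"}

# Hypothetical shared standard: one test code, results always in seconds.
TEST_CODE = {"10m walk/run": "TMWR10", "Ten Meter Run": "TMWR10"}
UNIT_DIVISOR = {"sec": 1, "ms": 1000}  # divide to convert to seconds

def standardize(record):
    """Map a site record onto the shared test code and unit."""
    return {
        "test_code": TEST_CODE[record["test"]],
        "result_sec": record["result"] / UNIT_DIVISOR[record["unit"]],
    }

# Both records now look the same, so they can be compared and combined:
print(standardize(site_a))  # {'test_code': 'TMWR10', 'result_sec': 7.2}
print(standardize(site_b))  # {'test_code': 'TMWR10', 'result_sec': 7.2}
```

Real CDISC content goes much further, also documenting how a test was performed, but the core idea is this kind of agreed-upon mapping.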


Aggregation of Data:


Over time, Duchenne patients have been studied a great deal. Patients take part in clinical trials, are studied in natural history studies and other forms of clinical research, and supply information to registries. Even at a regular clinic visit, some clinicians gather specific information for research purposes (though they must ask for your consent to do so). As a result, there is actually rather a lot of data about Duchenne patients. However, most of these data sit in relatively small, individual databases, available for analysis by only a small number of people. Imagine if all those data could be combined into a single database and made available to more researchers for analysis! Subtle patterns that are not obvious in small datasets might become clearer, larger patterns could be confirmed, and connections might be drawn between different measurements, helping us better understand how the disease progresses and what contributes to that progression.


D-RSC is focused on doing this by combining data from multiple sources to create an aggregated database for analysis. Importantly, the data must be combined in a scientifically accurate way, so that they do not lose their meaning, and in such a way that individual patients’ data are protected. The Critical Path Institute, home to D-RSC, has created such databases across multiple disease areas, working with partners such as pharmaceutical companies and nonprofit groups, as well as the Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO). The C-Path teams have used these data to create drug development tools: mathematical models that describe disease progression, and biomarkers and endpoints that can be endorsed or qualified by the FDA and EMA and used by the community to accelerate drug development. With the help of Duchenne experts, D-RSC hopes to replicate this successful approach. These experts will help D-RSC ensure that only accurate data are incorporated into the database, that they are interpreted and combined correctly, and that any analyses done on the data are accurate, represent the disease, and are shared widely. In this way, we hope to use the combined data to learn about the disease and to inform future clinical trials.
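The mechanics of pooling can be sketched very simply. The code below is a minimal illustration (the field names and ID scheme are invented, not D-RSC’s actual pipeline): records from two small studies are merged into one dataset, source patient IDs are replaced with anonymous ones to protect individuals, and the pooled data can then be queried across studies:

```python
import itertools

# Two hypothetical mini-studies, already mapped to shared field names.
study_1 = [{"patient": "JD-001", "age": 8, "test_code": "TMWR10", "result_sec": 7.2}]
study_2 = [{"patient": "site2-44", "age": 10, "test_code": "TMWR10", "result_sec": 9.8}]

def aggregate(*studies):
    """Pool records from several studies, replacing source patient IDs
    with anonymous sequential IDs so individuals are not identifiable."""
    new_id = itertools.count(1)
    pooled = []
    for study in studies:
        for rec in study:
            anon = dict(rec)
            anon["patient"] = f"ANON-{next(new_id):04d}"  # drop identifying ID
            pooled.append(anon)
    return pooled

combined = aggregate(study_1, study_2)

# A query across the pooled dataset, e.g. the mean test time:
mean_time = sum(r["result_sec"] for r in combined) / len(combined)
print(mean_time)  # 8.5
```

Real aggregation adds many safeguards (data quality checks, formal de-identification, governance), but the payoff is the same: questions can be asked of the whole pool rather than each small database separately.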


What is a Disease Progression Model?


A disease progression model is a mathematical description of how an aspect of a disease changes over time. Essentially, given a series of measurements taken now, such a model predicts how that aspect of the disease might change over time in specific subpopulations. For example, in Duchenne, the model might describe how the strength of a specific muscle changes over time. This might be predicted by the type of genetic mutation the person has, other genetic markers, and the time it currently takes him to stand up from a chair. The model would use those variables to predict how the strength of that muscle might change over time. This is just an example; the field needs to be able to evaluate large amounts of patient data to determine which aspect of the disease is most valuable to model, and which measurements predict changes.


A mathematically robust model would be of value to groups developing clinical trials, as it would identify groups of patients who are likely to progress in a similar manner, allowing shorter, smaller trials to be more informative. It could also point to the best endpoints or biomarkers to measure in specific populations, identify subgroups of patients that can best be compared to each other, and inform the development of new biomarkers. Developed and used appropriately, such a model should allow us to design more effective trials that give us clearer results, possibly in less time and with fewer patients.


Qualification/Endorsement of a Drug Development Tool:


“Qualification” and “endorsement” of drug development tools are FDA pathways through which the agency reviews a tool and agrees that it is scientifically valid for a specific use. “Qualification” is the process for biomarkers, while “endorsement” is used for tools like mathematical models. For example, a biomarker qualified for use in selecting a subgroup of patients for a trial would allow drug developers to select patients using that marker without providing the FDA with any additional data on why that group of patients was chosen (and others excluded from the trial). Each biomarker or model must be qualified for a specific context of use, within which it can be applied. The EMA has a similar process in Europe, and tools often go through both pathways simultaneously.


As you might imagine, the burden of proof for qualification or endorsement is very high: the FDA requires a lot of data to justify that a measurement is scientifically valid and reliably predicts what it is supposed to predict. This is important, because once a tool is qualified or endorsed, it can be used to inform clinical trials from multiple drug developers without each having to prove the tool’s utility. The qualification process takes into account not just whether the measurement is predictive of what it is supposed to predict (e.g., are dystrophin levels relevant to disease severity?), but also how it is measured, how variable those measurements are across different operators and methods, how large a change is needed to make a difference, when the differences need to be seen, and many other factors. The amount of data needed depends somewhat on the “context of use,” which defines how the tool will be used. Less data is needed for a tool intended for a non-critical use (e.g., defining which patients to include in a trial) than for a tool that might support drug approvals, such as a surrogate endpoint.


Qualification or endorsement of a tool means that the regulatory authorities agree on how the tool should be used in a clinical trial. The tool is then publicly available to all users to help them design efficient and effective clinical trials.


Clinical Endpoints:

A clinical endpoint is defined by the FDA as a direct measurement of how a patient feels, functions, or survives. These are used in clinical trials to measure whether a potential drug is helping a patient. However, in most trials researchers can’t afford to wait to see whether patients live longer; Duchenne trials would take a very long time! So most trials measure what are called “intermediate clinical endpoints.” These are measurements that change in a shorter period of time but still indicate a change in how a patient feels, functions, or survives. For example, a change in a defined functional test or a change in respiratory function could be an intermediate clinical endpoint. However, with tests like these, the FDA needs to be convinced that the degree of change within a trial is clinically meaningful, and the measure needs to be compared to some defined standard (ideally, patients on placebo in the same trial). The FDA defines an intermediate endpoint as one that is “reasonably likely” to predict a change in irreversible morbidity or mortality.


In contrast, a “surrogate endpoint” is a measurement that is reasonably likely to predict a clinical benefit, i.e., a measurement that stands in for something that measures how the patient feels, functions, or survives. For example, reducing someone’s cholesterol is often considered a surrogate for measuring the number of heart attacks in a population (cholesterol numbers, in themselves, do not affect how a patient feels). In the context of Duchenne, dystrophin levels might one day serve as a surrogate endpoint, but there are not enough data at this time to support dystrophin as an indicator of clinical benefit. The burden of proof linking the surrogate to the clinical endpoint is very high: simply showing that the measures are correlated (people with high cholesterol develop heart disease) is not enough; a demonstration that a change in the surrogate produces a clinically relevant change in hard endpoints is also required (reducing cholesterol also reduces mortality due to heart disease). This has proved challenging when arguing for surrogate endpoints in diseases like Duchenne, where the amount of data available is limited and we do not have approved treatments. However, as more drugs go into trials and more data are collected, the evidence supporting some biomarkers may become strong enough to consider specific measures as surrogates.





Comment by deweer on May 17, 2016 at 5:23pm

SMTC1100: let’s speed things up, this is URGENT. Summit plc seems more interested in finance and its stock-market listing.

Comment by deweer on May 17, 2016 at 12:52pm

SMTC1100 is more than 5 months behind schedule with no reason given, which is unacceptable. What is PPMD doing? No reaction: fatalism as always, “that’s just how it is”; grand speeches are preferred. DISAPPOINTING

© 2017 Created by PPMD.