NINDS Parkinson’s Disease Biomarkers Program: Scientific Liaison Group Meeting



Summary

  1. Welcome and Overview of the Program. Walter Koroshetz and Katrina Gwinn
  2. Defining the Need for a PD Biomarkers Program: What Do We Want from a Biomarker? Ian Reynolds and Lucie Bruijn
  3. Data Management Resources
  4. Breakout Groups Meet to Discuss Working Group Questions and Recommendations
  5. Launch of the Afternoon Session. Story Landis
  6. Clinical Group Recommendations. Lucie Bruijn
  7. Laboratory-Based Discovery Science Group Recommendations. Robert Martone
  8. Discussion. Ian Reynolds
  9. Adjournment

 

Welcome and Overview of the Program. Walter Koroshetz and Katrina Gwinn

Dr. Gwinn opened the meeting, the purpose of which was to discuss what can be accomplished in the next 5 or so years on biomarkers that would be useful for neuroprotection trials. This will help NINDS think further about where it can go in Parkinson’s disease (PD). Dr. Koroshetz seconded the Institute’s desire for advice on where effort would best be placed. They are open to discussion and to ideas not confined to current knowledge, including potential new knowledge that would be important to bring into this area. The goals are to facilitate phase 2 testing of treatments that block progression of PD and to enable go/no-go decisions based on evidence. The target may be highly specific to what the disease is.

Biomarkers can be used for many different purposes, and there is huge variability in technologies used to identify biomarkers. FDA specifies that biomarkers can be exploratory, probably valid, or valid. For NINDS, a biomarker should provide objective measures of disease progression or disease pathophysiology that might be the target of a therapeutic agent—something to indicate that that agent may be worth taking to phase 3. NINDS will facilitate research to discover new, useful biomarkers and to promote early-stage biomarker development. Investigators’ efforts should be complementary to avoid duplicative efforts and expenses. The Michael J. Fox Foundation (MJFF) has established the Parkinson’s Progression Markers Initiative (PPMI), an amazing project that collects bio-samples and clinical data in a repository, which will enable checking whether agents and tests actually work. PPMI samples are valuable, especially at this early stage, and will be used most beneficially at later stages of development.

The initiative NINDS is addressing today fits below PPMI. It addresses what NIH could do to get people thinking about new ideas or developing ideas, i.e., the discovery phase rather than the validation phase, but they want to integrate their work with PPMI as they go along. Success ultimately depends on innovation, hard work, and good luck, but team science will quicken the pace through data and biospecimen sharing and taking innovative discoveries to the next level of biomarker development, i.e., replication. If everyone tests ideas alone and does not publish the results (either negative or positive), it is not efficient. Biospecimen sharing would enable having a few patients in various places for early-level development, instead of a few hundred patients in one place. It would also facilitate replication and moving to the next level. In addition to the science, NINDS needs discussion on a mechanism for operationalizing this.


Defining the Need for a PD Biomarkers Program: What Do We Want from a Biomarker? Ian Reynolds and Lucie Bruijn

The Biomarkers Program mission is to make recommendations for accelerating the development of biomarkers to enable phase 2 or 3 neuroprotection trials, and for making those trials easier and more productive. Goals for this meeting are:

• Define critical features of a biomarker program focused on disease progression.
• Establish 5-year goals for the biomarker program.
• Propose a strategy for accomplishing those goals.
• Where possible, prioritize the strategies.

Several other biomarker programs are ongoing in other disease areas, such as the Alzheimer’s Disease Neuroimaging Initiative (ADNI), as are other biomarker programs in PD, including the PPMI and a program at the Coalition Against Major Diseases. What can the NINDS group learn from these antecedent programs? How can the NINDS effort productively interface with other PD biomarker programs? Today we hope to address the questions previously circulated, specifically actions needed, actions to avoid, and the highest priorities. We would also like to define a strategy for developing PD progression biomarkers that NINDS can implement.


Data Management Resources

 Overview of the Meeting. Margaret Sutherland

A request for information (RFI) was sent to the community at large asking what has been used, how successful it was, what formats are useful, etc. From that, NINDS began to formulate a plan for biomarkers. In considering what to build into a data management resource for PD biomarkers, they have also looked at IT across NIH. The resource should be Web-based and electronic, identify common elements, and be able to aggregate the data. But, how do they develop data dictionaries? What about subjects who are enrolled in multiple projects? Do they want a federated database? Is there a way to coordinate how data for PD are collected?


The NDAR Program. Greg Farber

The National Database for Autism Research (NDAR) provides a number of solutions. It is a federal database supported jointly by NIMH, NICHD, NINDS, and NIEHS and the NIH Center for Information Technology. Its Data Access Committee is made up of NIH Program Officers and human subjects experts. Relevant Institute Directors act as a board of governors for the project. NDAR began late in 2006 and received its first data in 2008. It is an experiment to collect all of the research data in the autism world. To implement it, they defined Global Unique Identifiers (GUIDs), a Data Dictionary, a data-sharing regimen, and federation—the linkage to other repositories.

The GUID is a universal, unique subject identifier that can be used in multiple areas. The Data Dictionary is a flexible framework for harmonizing data across studies and institutions. It allows researchers to bring in data in whatever format they used to measure it; NDAR then does the curation. One NDAR goal is to enable researchers to do “one-stop shopping,” and NDAR’s curation is key to that. They try to provide as much data as possible. Various types of queries will indicate how much data of a given type is available, and the user can find out what is in other databases. Definitions across databases can be problematic, e.g., there is no agreement on definitions of “mildly affected” or “strongly affected,” so NDAR must work with the community. Data can easily be made accessible by lab or NIH grant. NDAR also shows relevant publications available through PubMed and the grants given.
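
As a rough illustration of the GUID idea—letting multiple studies link records for the same subject without exchanging identifying information—the sketch below derives an identifier from a one-way hash of a few stable personal attributes. The field list, normalization, and format here are invented for illustration; this is not NDAR's actual algorithm.

```python
import hashlib

def make_guid(first_name: str, last_name: str, dob: str, sex: str) -> str:
    """Derive a deterministic, de-identified subject ID from stable
    personal attributes, so the same subject maps to the same ID in
    any study without the raw PII ever leaving the site.
    (Hypothetical scheme for illustration, not NDAR's actual one.)"""
    # Normalize so trivial formatting differences don't change the hash.
    fields = [first_name.strip().upper(), last_name.strip().upper(),
              dob.strip(), sex.strip().upper()]
    digest = hashlib.sha256("|".join(fields).encode("utf-8")).hexdigest()
    # The hash is one-way: the ID cannot be reversed to recover the PII.
    return "GUID-" + digest[:16].upper()

print(make_guid("Jane", "Doe", "1950-01-31", "F"))
```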

The Data Access Committee gives access to NDAR on receipt of a short written description of what the applicant wants to do. The Data Dictionary currently contains about 200 clinical assessments and data elements. It is easy for the community to use. Investigators can run data through the validation tool before they submit it. Further, it is a way of promoting standardization in the community. Research Electronic Data Capture (REDCap) is turning the Data Dictionary into a forms-based system. Data can also be retrieved by publication, to obtain the data associated with a given paper.

A data-sharing regimen was advanced with community and NIH input; now all NIH awards require that data be submitted to NDAR. (See NDAR.nih.gov.) The data-sharing regimen calls for data submissions every 6 months. Existing data repositories in the autism world are federated so other databases can be accessed via the NDAR Web site. They also have tutorials. In sum, a lot of data are available and can be shared. By next September, they should know whether the community finds value in this data-sharing.

Discussion

• Key to creating a data-sharing culture was that the autism community is relatively small and everyone knew they could not collect enough data in a single lab to make a difference. Also, the NIH condition is persuasive.
• A significant part of the team tracks the people who are supposed to submit data by a certain date and then follows up on results—investigators now view this as a service. Ultimately, it is the program office that is in charge of the research.
• Data-sharing is required, and the Web site spells out the costing. Also included is sample consent language, and consenters must confirm that data can be shared.
• Data have different levels—raw, processed, or clinical outcomes—and there are lots of clinical assessments and a significant amount of imaging data. NDAR takes raw data and whatever processing steps people want to contribute. They ask what instrument or kit was used. Data storage is a big issue; NDAR has concluded that they will have to store data in the Amazon Cloud and that they will allow researchers to back up their own data.
• Industry has yet to submit data. Data-sharing is academically enforced, but they have no leverage with industry, which is concerned about the need to protect intellectual property.
• Which aspects of NDAR have been most difficult, which have worked well, and which have worked poorly? Initial problems centered on investments made in inappropriate pathways, which yielded expensive false starts. Such a resource needs to understand what the community thinks it needs and try to provide that. The GUID is the key to success. Also, the Data Dictionaries are not prescriptive, and that is working. It remains to be seen whether we can generate hypotheses with these data.
• Enforcement is effected by impeding publication unless the work is registered. Then the investigator needs to put the raw data into the <clinicaltrials.gov> database. NDAR views it as its responsibility to go to the other repository and develop ways to federate the 2 without necessarily holding the data. <clinicaltrials.gov> is one of the federated sites.
• Data producers, rather than only data miners, should also get credit. NDAR sees this as a 2-issue problem:  First, the producer has to deposit data. Then software must be developed that deals with data provenance. Submitters get information about users of their data and the way they used it. NDAR is trying to find ways to cite the producer; but not enough people have come into the system to mine the data yet.
• NDAR is struggling with the issue of types of resources provided for analysis. They envision the facility as a data repository, not an analysis platform, but clearly, work flows must be associated with imaging and perhaps genomics data. If several labs have different ways of analyzing MRI, NDAR will make each available.
• Data are open and available to any researcher, including those from industry. But, they have had no requests yet. Depositing data is required, but it does not impede the depositor’s right to intellectual property—a complicated issue: who is the owner of these data?
• Data standardization involves great effort, but we don’t know its value. It needs to go beyond what we understand now from a scientific perspective. Controlled vocabularies and the like are tough questions.
• We have no firm quantitative assessment or metrics by which to measure success. Participation will drive the field forward as the measure of success.
• Data standards are incredibly important and extraordinarily expensive. They also can cause many errors. We can use data standards that exist for age, gender, etc., but particular fields create specific standards appropriate to those fields. FDA’s next Data Standardization document will be published within the year. Specialist data standards should eventually disappear, but much depends on the stage of development of the field. FDA does not want to close off data standards too early in a field’s development.


Federal Interagency Traumatic Brain Injury Research (FITBIR). Matthew McAuliffe

FITBIR is a collaborative, biomedical informatics system that supports research in traumatic brain injury (TBI) to accelerate scientific discovery and treatment. FITBIR is funded by the Department of Defense using the NDAR model. Challenges with research data include:  small sample size; studies of the same subjects at multiple locations; lack of detailed visibility into all available TBI research; and inability to easily validate others’ results. Challenges of data accessibility include:  data take months or years to get into papers and journals, are typically summarized and rarely complete, were not acquired using common data elements, and tend to get lost after the grant ends.

Mission goals are:  to develop standards and policies to enable cross-site meta-analysis and data comparisons; to be a central/federated repository and portal for phenotypic, genomic, and imaging data; to promote the sharing of high-quality research data throughout the TBI research community; to know from where each data piece came; and eventually to make and deploy useful tools for community adoption.

FITBIR adopted common data elements as defined by NINDS. Once the structure has been formed, it can be adapted by defining data elements unique to the research at hand. In the GUID process, a random identifier is attached to each subject, which complies with the requirement to de-identify patients’ data. The researcher can mine and query the data, and when data are moved to a shared repository, all investigators will have access to that information. When data remain privately held, the researcher determines who can access them. Researchers can opt out of data sharing; data can also be used in a clinical trial but not shared, and later removed from the database. Part of the patients’ consent is for their data to be shared.

When access is given to users (vs producers), the repository must acknowledge from where the data came. Key to making this work is valuing acknowledgments of producers, and NIH must push that, perhaps with demonstrations of the ability to share. NDAR has no system to track it, while ADNI has asked to be added to the author line. All journals have agreed, although Image has questioned the policy. They have been able to track several hundred producers. This listing is not counted as authorship for promotion.

Industry is invested in monitoring and source-checking data. The whole idea of data quality is critical for these complicated datasets. Some type of support has to be built in for investigators who want analyzed data, so they understand exactly what they are getting. This should extend to biospecimen collection as well. NIH could state in grants that the investigator must include data quality plans as part of the application. NCI recently released a new edition of Best Practices for Biospecimen Resources. The repository at PPMI specifies some common data that should be collected on every sample—time of day, etc.—and the Data Dictionary addresses that. In addition, publishers must require submission of certain source information when accepting papers. To address quality assurance, a validation tool checks ranges of values, for instance, and ensures the submission is within that range; this is repeated with information submitted 6 months later to ensure consistency. Data will be stored in the Cloud, but the database will be local. Researchers can validate and compare raw data. All of this will enable studies and meta-studies. They hope to have the FITBIR system ready to collect data in June.
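
The range-checking behavior described above can be made concrete with a small sketch. The element names and limits below come from a hypothetical data dictionary, invented for illustration (108 is the commonly cited maximum of the UPDRS motor exam).

```python
# Hypothetical data-dictionary entries: expected (min, max) per element.
RANGES = {
    "age_years": (0, 110),
    "updrs_motor_score": (0, 108),
    "csf_volume_ml": (0.1, 30.0),
}

def validate_record(record: dict) -> list[str]:
    """Return the list of problems found in one submitted record."""
    errors = []
    for field, value in record.items():
        if field not in RANGES:
            errors.append(f"{field}: not defined in the data dictionary")
            continue
        lo, hi = RANGES[field]
        if not lo <= value <= hi:
            errors.append(f"{field}: {value} outside expected range [{lo}, {hi}]")
    return errors

# An out-of-range motor score is flagged; in-range errors are not caught,
# which is why later submissions are re-checked for consistency.
print(validate_record({"age_years": 63, "updrs_motor_score": 132}))
```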

Discussion

• Being added to the acknowledgements works for ADNI, but other models funded by multiple sources may be totally lost. FITBIR can get at the issue if they know what data have been downloaded—it figures in the decision to re-fund the collector. However, junior authors have been discouraged from adding their names to such listings. The standard for listing a co-author vs acknowledging someone must be changed. This is an issue for further discussion.
• At the NCI Cancer Genome Atlas, the consortium produces a conglomerate dataset asking for the source of data. The largest expense is in ensuring quality control of these datasets. In ADNI, it is sometimes difficult to discern the diagnosis because people use terms in non-standard ways. These are complicated datasets and mistakes will occur, but everything cannot be policed.
• After GUIDs have been determined, a tool to generate the hash is integrated into the forms. Researchers have access to GUIDs used in other Web sites and know that their data will be shared.
• For emergencies, when a person cannot give consent, an investigator has 24 hours to collect samples. After that, someone (e.g., a parent) must sign the consent form; otherwise those data cannot be submitted to the database.


Building a Clinical Studies Database. Christopher Coffey

If a data management resource is to be useful, it has to be useful to the community in general and not just specialists. Statisticians are end-users of clinical databases. Data from new studies must be collected in a standardized way (much of PPMI’s effort is devoted to this), and existing data sets must be standardized; most difficult is the combination of old and new data into a data set. For the information to be retrievable and useful, a data manager is needed, and in the best situation, a statistician to help design the database to ensure that researchers collect the information they need to answer their questions. Collecting data is only the first part; data management must be built in to allow data analysis.

Data management is a vague term that refers to anything from data-coordinating centers for multi-site trials, to groups with bioinformatics experience. The 2 extremes must be pulled together. A resource to support this must be synergistic and focus on the end user. Data sets must be ready for analysis so an investigator does not get different values for data downloaded from one month to the next. Query systems may be open or closed; data may be clean or raw. It would be good to avoid the need for the end user to merge multiple tables, which is complicated and can introduce errors.
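
To make concrete why end-user merging is error-prone, here is a minimal pandas sketch with invented tables and values: a subject with two biospecimen records silently duplicates clinical rows, and a subject with no specimen silently acquires missing values—exactly the pitfalls a managed, analysis-ready file avoids.

```python
import pandas as pd

# Toy stand-ins for tables an end user might have to merge by hand
# (all names and values invented for illustration).
clinical = pd.DataFrame({"subject_id": ["A1", "B2", "C3"],
                         "updrs_motor_score": [18, 34, 22]})
specimens = pd.DataFrame({"subject_id": ["A1", "B2", "B2"],
                          "csf_sample_ml": [1.5, 2.0, 1.8]})

# A left merge keeps every clinical record: B2 is duplicated (two samples),
# and C3 gets NaN (no sample)--both easy to miss without careful checks.
analysis = clinical.merge(specimens, on="subject_id", how="left")
print(analysis)
```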

Regardless of complexity, a data set is only valuable if it is useful. It is a misconception that public datasets can generally be obtained and immediately analyzed. A data management resource to merge and create analysis files for external users would save a great deal of time and speed up the reporting of important findings. Often, people are not trying to deceive, but cannot adequately describe their research within the imposed word limits, introducing many type 1 errors.

PPMI and LRRK2 investigators have tackled many issues and their experience can be used as a guide for planning. PPMI is a complex structure (400 PD subjects and 200 controls) of clinical, imaging, and biomarker data input into a bioinformatics core.

PPMI Lessons Learned

• Privacy concerns vs analysis concerns;
• The need for the flexibility to add or modify a database on the fly, e.g., a mechanism to upload derived variables;
• The need for a database to be constantly informed by and interacting with an integrated study team;
• The need for a publications/data-sharing committee to keep track of who is accessing the data and what it is being used for—keeping track of who’s doing what and facilitating connections to work on common problems.

LRRK2 Lessons Learned

• Data-sharing issues:  How much data should each group share; country-specific rules about what can be shared and with whom (identified, de-identified, anonymized); how to clean combined data if they are already de-identified or anonymized; how to verify the quality of data and data manipulation when some activities are done by the group submitting the data; and the need to obtain descriptions from each group as to how data were handled by the local group.
• Sites change data structure over time and those changes impact data transfer files.
• It is difficult to combine demographic data across cultures.
• The difficulty of confirming data collected in another language, even with standardized training and a shared understanding of the information collected.
• The need to clearly define “required” items early in the project.
• Standards will always be a work in progress.
In sum, building a data management resource is an intense exercise, and it is only half the battle. The external user must be able to download datasets and have the proper data management and statistical expertise to analyze them; otherwise the resource is only a nifty thing for the specialist.

Discussion

• Data quality can be facilitated by range checks, but those do not catch errors that fall within the expected range. The large amounts of data are worrisome, but to what extent? FDA has issued a new guidance on a risk-based model that has systems with places to look for errors, but you can’t verify everything. Entering something outside the expected range would be flagged; but more important is knowing whether a particular level of assessment was done.
• These are arguments for having data standards across diseases and syndromes. The benefit of the Centre for Educational Technology Interoperability Standards (CETIS) software is the inherent, automatic checks that can be run. And those software tools will be available to everyone. Also, people will be using the same tools, so they don’t have to validate whether data are showing what is wanted. Clinical data in hospitals will be included and available soon, and there are pushes for standardized data elements throughout. CETIS is constantly working on this. Errors in data should be distinguished from errors in data entry. Contrary to a common misperception, a good surrogate for data quality is not how widely respected the PI is, but how good the data coordinator is. Data coordinators and the people entering data are important to data quality.


Breakout Groups Meet to Discuss Working Group Questions and Recommendations

The day’s conversation focused on the early discovery phase of biomarkers in PD. PPMI has set up a pipeline for validation, and candidate markers could be sent to it for validation. The Clinical Group and the Discovery and Laboratory Science Group refined questions and responses in their discussions and reconvened to report their findings.


Launch of the Afternoon Session. Story Landis

Since 2002, the Michael J. Fox Foundation has issued a series of biomarker discovery RFAs, and many things have been learned. A rolling initiative last year elicited 25 to 30 applications throughout the year. For most studies, there is a requirement for new subjects. Fox gives 2-year grants, and for the first 1½ years, most time and money is spent on recruitment. NINDS funded -omics studies, but did not have enough foresight to know what to do with the inevitable list that emerged from the experiments. NINDS needs to prioritize and move forward. In terms of discovery, it is easy to integrate a statistician early to improve research design; similarly, clinical experts should be linked with biologists to ensure that samples of convenience are useful. In sum, we need a more integrated team.


Clinical Group Recommendations. Lucie Bruijn

Target population. Defining the target population will impact trial design. Are they patients with motor-defined disease, or patients at risk for motor-defined disease? Both could be done using motor and non-motor indicators (though the end point for both might become the same). However, collecting pre-motor patients’ data could be expensive because many will not convert (selecting an enriched population will help). Chances to intervene will be better with a pre-motor group. Staggered cohorts are a possibility. A genetically defined cohort will provide early clues to the biology.

At-risk vs pre-motor disease. Collecting biospecimens from this population is ongoing in the military.

Pros and cons of later stages. Advantages of not focusing solely on newly diagnosed patients are that it is less expensive; industry is increasing its concentration on biologics (to modify symptoms/disease); and there may be less placebo effect. If we start with the early-diagnosed, they will progress and can be followed into mid- or late-stage disease. Later-stage groups can be more complicated (e.g., drug effects), although they would give an adverse event profile, and other progression measures (e.g., cognition) could be included; but it is difficult without a particular question in mind.

Criteria for defining populations. Motor symptoms are important as a start in the early group. Pre-motor symptoms would include REM behavior disorder, anosmia, constipation, and abnormal cardiac SPECT—a cluster of criteria. Genetic risk factors could be considered, but many might be clinically negative. DAT scan might be used. It is best to use a biological indicator for stage if possible. Symptoms might be used as biomarkers, but they may change over time. The key is to clearly define and standardize samples so it is possible to know why a study could not be replicated. Does the field need standards of clinical measures? The Unified Parkinson’s Disease Rating Scale (UPDRS) has been the gold standard for motor symptoms, but it is somewhat subjective. New instrumental standards, e.g., for bradykinesia, might be useful. Currently, there are many scales for severity. We could use common data elements for already diagnosed disease. Using measures that differentiate populations as measures of progression may not work, or some may be useful and others not.

Cohort size. Ideally the cohort should be BIG. Keep in mind that PD is heterogeneous, so limiting size could be based on symptoms. A larger population is needed for validation studies.

Controls. Subjects may not have to meet all at-risk criteria, depending on the question. Test–retest is important.

Which biomarkers. We might focus on progression as well as pharmacodynamics (drug target markers, drug effect markers). But, how does this affect the population of interest? Can a biomarker serve as a standard? What are the implications for future trials if a standard is defined? Does it need to be mechanism based?

Best measures. Something that could catch change over 6 months would be better than something that only marks change over 2 years; ease of administration (e.g., home-based) also matters.

Is this the best approach? Will it improve the quality of what’s already happening? Is cross-study standardization worth the investment? Can we do this? Do we need reagents, platforms, technology validation first? Should we just use PPMI protocols? NIH’s goal is to build up a database and repository for this, which should not be too limited or too prescriptive. Data from early phases are very important.

Discussion

Disease stage
• One perspective is that the pharmaceutical industry is moving more and more toward biologics, which increases the risk. They need a population with a greater risk/benefit ratio, which is likely to be later-stage patients. Later we can go to earlier-stage patients. But the longer you wait, the less there is to salvage. This situation has been true for Alzheimer’s disease (AD).
• The progression marker issue could be addressed with a hypothesis about the pre-motor state and then enrolling people to prevent progression to the motor state.
• We might wind up with symptomatic treatment for motor conditions, but nothing to slow cognitive decline. Patients are more worried about their cognitive state and quality of life.
• These cohorts are older and do not necessarily need efficacy, but they need safety. And a large number of older PD patients are very interested in running trials. Nevertheless, the early patients are far and away the best. We could design trials with pre-motor symptoms and track them to conversion to motor disorders.
• Plenty of people with PD are between the pre-motor and the motor stage of disability. There is also the concern about confounded treatment preventing focus on that group. A biomarker in that group would be valuable.
• Looking at the confounder, motor treatment vs progression, over time requires very large numbers. However, if you are going for an indication for a biomarker, you get what you study. Studying pre-motor symptomatic patients would reach the whole spectrum of PD.
• To get to enriched populations of prediagnostic patients (with 70% conversion), you have to start with tens of thousands of people.
• Sleep labs may be the best place to start.
• People who have REM behavior disorder (RBD) turn out to have diseases other than PD. Furthermore, we’re assuming that the same mechanisms are responsible for disease progression in early, middle, and late disease.
• With this idiosyncratic cohort, we might use (valuable) PPMI samples to validate late-stage candidates; that would be only 1 or 2 candidates.
• The intermediate stage is missing, and for that an independent cohort is key.

Alzheimer’s disease comparisons
• We should be able to learn from the AD experience:  the earlier in disease, the less certain you are of the diagnosis; e.g., when enrolling AD patients with normal status exams but amyloid disease, some 60% can fail at the clinical level. Lumbar puncture and amyloid imaging have taken time to perfect.
• Cognitive impairment mechanisms may differ from those of motor impairment.
• The Dominantly Inherited Alzheimer Network (DIAN) can support and focus on enrolling genetically susceptible individuals in whom disease is inevitable. Then we need to have good controls, follow them longitudinally, and see what is common and not common. It allows drug companies to pursue orphan applications because the cohort is years away from full-blown disease.
• The extent of amyloid deposition in AD could be related to α-synuclein in PD—they are analogous in some ways, but they may also differ.

Genetic cohort
• A genetic cohort is not better than one based on other symptoms. LRRK2 accounts for 30% of disease; but symptoms may indicate PD in some 80%.
• Syndromes are typically turned down for orphan indications, but a particular genotype may work.

Study design
• Part of the discussion should be how to design studies and make sure they are working, especially since there are limited ways to get funding to work on novel designs. We need ideas for ways to do that, and knowing how these would work would advance project design. Perhaps a statistical/experimental think-tank could determine how many patients studies need—e.g., how many sample sets are needed to study an early-stage analyte? This issue has been ignored for years.
• When it is appropriate to go to the next replication stage, what kind of information would you need to put in the next step?
• If we can find out a biomarker doesn’t work sooner, it is cheaper than finding out at the end—and few candidates will be positive. A worthy goal would be defining small thresholds to identify failures early.
• Proposals now require that the applicant have a statistician to ensure sufficient sample size, etc. This should preclude experimental designs that prevent investigators from replicating results.
• It is not possible to have one sample size.
• ADNI’s PPSC is analogous. PPSC funds testing and retesting, and test–retest data are now available to all ADNI members. This is needed for PPMI.


Laboratory-Based Discovery Science Group Recommendations. Robert Martone

The objective is fostering discovery biomarker research that can be done in a coordinated way, leading to biomarker validation.

Imaging marker. This implies a clear imaging marker that would be categorical and change over time, understanding that significant changes occur within 2 years. Research would build assays and provide a robust, predictive assay in an environment that allows for replication and eventual validation. This stage would not reproduce PPMI, but be pre-PPMI.

Validation of existing biomarker candidates could be enabled, but validation studies must be done with a clear-cut study design. Examples include CSF and plasma/blood biomarkers, and peripheral markers (skin, nerve). Technologies include RNA-based markers, DNA, immune response, retinal function, and protein synthesis.

Assay validation or analyte validation is another category.

Target validation may not be worthwhile for NINDS to fund because industry will likely pursue it.

Biomarkers for various states could all be helpful in early drug development.

Pharmacodynamic biomarkers are different, e.g., tau in AD.

Disease progression. Certain biomarkers can predict rates of progression, but further discussion is needed on what a disease progression biomarker is. There is the candidate approach or the open-ended approach, and both have utility.

Bioeffects. A bioeffect marker is more general than a progression biomarker.

Animal models. Participants were opinionated, but not in agreement, about animal models. They are needed for validation, at least for dose–response. Their use requires additional samples to validate the work.

Imaging can be used to look at potential effects, changes of networks, new molecular markers or ligands, and metabolic pathways, e.g., the metabolic dynamics in CSF.

Specificity means PD vs non-PD. What is disease progression? We need more information on that. Is a marker measured early in the disease the same thing later in the disease? What is a progression biomarker? How does a diagnostic biomarker differ from a progression (rate) biomarker? A diagnostic marker could still be quantitative.

Current RFA. The RFA asks investigators to pursue areas not exploited or explored now, e.g., RNA-based alterations or the innate immune system.

Discussion

• Dr. Landis clarified the NINDS goal:  It is to enable PD trials in 2 years rather than 7, to identify a trunk for diagnosis and disease progression. It would be impossible to explore all the different possibilities laid out. NINDS needs help thinking about what is the most important and feasible set of advances that could be made. This group should define the most important thing and the most feasible thing.

Biomarker candidates
• Studying the pre-symptomatic population doesn’t make sense. Studying more symptomatic people means that you have an enriched population immediately. Asking for a surrogate endpoint may involve predicting efficacy and toxicity. We need a biomarker devoid of symptomatic treatment. Also, symptomatic people are easier to deal with. But, we still know nothing about disease progression, e.g., CSF, new tracers for brain pathophysiology, or RNA.
• PPMI is a great validation set, but if NINDS doesn’t step in, no studies will be ready to use those samples. NINDS could come up with intermediate cohorts by issuing an RFA for something with particular promise.
• The AD model doesn’t work for most other diseases and there’s no reason to think it will work for PD.
• One Udall center is studying a marker and has collected a good amount of data over several years. Dr. Landis noted that NINDS has received applications for additional biomarkers.
• We want to move more quickly on α-synuclein, DJ-1 measures, and other promising proteins, and to do that we have to take some gambles and balance the investments.
• We need to foster new biological and clinical measures to study cohorts—industry doesn’t fund that.

Study design
• The initial experiment that provides the rationale for whether to continue work on a prospective biomarker is a good place to start.
• No good idea has emerged of the right parameters to measure. A handful of discovery projects, selected on merit, could test a hypothesis in a network of motor-manifest PD. Based on the outcome of the discovery studies, the best could be taken to PPMI. Within that framework, are there things we could do? Discovery cannot be narrowly defined.
• We’re in a state of not knowing what we’re looking for. We need to get to a state where we can generate hypotheses.
• There are 2 issues:  Do we have a good drug? Then, how do you design a trial better? There are 2 types of trials:  1) A study of pre-motor progression to motor disease (a critical stage of the disease), in which the endpoint is straightforward; a trial design could be reached by enriching the population, i.e., identifying enough pre-motor people of whom 90% progress to motor disease within 2 years. This is doable. 2) Take patients who already have motor PD and slow their disease progression. Such a study could be powered with about 100 people and be done in 2 years (see the illustrative sketch after this list). During those trials, other information could be developed about other biomarkers.
• We have used the same clinical trial structure for 40 years, and no matter what we come up with, we will have to validate it, which means we need a new design. NINDS could develop a clinical trial design. What is the baseline of these groups? We’re in our current state because we don’t know the natural history of this disease. We have no measure of progression and no good outcome measure. We have to be much more strategic in how we’re looking. We must perfect research designs and study the natural history of PD. We use the same measures over and over, despite their not producing the desired result. We need to clean up the methodology. We need a confidently determined progression measure. This will boost us most rapidly.
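
As a sanity check on powering claims like the “about 100 people” figure above, the sketch below applies the standard two-sample sample-size formula. The effect size and variability are placeholders invented for illustration, not values from the meeting.

```python
from math import ceil
from scipy.stats import norm

def n_per_arm(delta: float, sd: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Subjects per arm to detect a mean difference `delta` (SD `sd`)
    between treated and placebo arms with a two-sided test."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Placeholder assumption: untreated patients worsen by 8 points over
# 2 years (SD 10) and treatment halves that decline (delta = 4 points).
print(n_per_arm(delta=4.0, sd=10.0))  # -> 99 subjects per arm
```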


Discussion. Ian Reynolds

The mission is to identify biomarkers that would impact clinical trials of disease progression.

Dr. Reynolds asked each participant, in light of the day’s discussions, to state the most important thing NINDS could do to advance biomarker trials—what single thing will make a difference?

1. Have a database set up with a standard built on NINDS’ CETIS. A database is being built primarily from industry trials, about a quarter of whose users use it for biomarkers. The most valuable part will be the placebo or control arm of industry trials. We also need disease progression models, in which industry has an interest. In fact, one for AD was recently submitted to FDA. NINDS would be most effective if it coordinated the various efforts.
2. Set up a new cohort study to collect sufficient volumes of samples, as well as other markers, to be able to provide samples to investigators for discovery of novel biomarkers. It would provide a way to replicate the interesting ones. This is the only way to get a sufficient volume of well-characterized bodily fluids.
3. There are 3 options:  Create a bio-repository to which people can bring assays; create genetic cohorts and follow them; or initiate a cross-sectional longitudinal study to capture early, middle, and late stages. Later stage study allows you to reiterate earlier cohorts. Creating a longitudinal cohort where everything is standardized and done systematically would be very valuable.
4. The participant agrees with #3, but notes that between PPMI and lots of small investigators, we need a few people to study for a short time. The cohort could be early, middle, or late disease. Cohorts are for replicating studies that can be done on a larger population.
5. We need a cohort between discovery and PPMI. Key is that such a cohort needs to be consistent with PPMI, as do collection methods. Also, we do not yet understand all aspects of this disease, and we need to know how motor, autonomic, and other functions are affected.
6. For data management, we need to have all necessary metadata and have the necessary criteria for them; and everyone must agree on the metrics used in data collection.
7. Most important is funding for different groups to do different things in exploratory research, but allow those groups to interact. Key is data management so you don’t have poorly designed, underpowered research designs.
8. First, fund opportunities to add progression markers to PPMI; then fund a limited discovery center where a few grantees share a cohort or some sort of repository to which everyone can contribute samples; and last, the pre-symptomatic (very high-risk) cohort in whom we could study long-term, mid-term, and short-term disease. These are things not in PPMI, but which would greatly augment it.
9. Most pressing is markers to understand pathophysiology, either by a cohort, or by facilitating outside investigators to run their own studies. But, put in place standardization to measure biomarkers against. Replication should focus on the most promising biomarkers coming out of this.
10. Develop a pre-motor cohort. Information resulting from that would supplement PPMI’s cohort. We need excellent clinical data to accompany samples that are collected. This may require periodic clinic visits to characterize patients well, including non-motor testing. (PPMI has clinic visits every 3 to 6 months the first year and then annually.)
11. A longitudinal study of a limited set of biomarkers, one not too early in the exploratory phases. Understanding a potential marker’s utility cannot be done in 10 different facilities; it has to be done in 1 lab. Validation could then enable its use as a clinical diagnostic test.
12. The participant agrees with the “mutual fund approach” (#8), but asserts the need for a common language. The value of ADNI is that it is a cross-sectional approach—2 years is a short time in a 10- or 20-year disease. Cross-sectional cohorts by different investigators can be strung together, but we need a common language. Yet, there has to be an exploratory component since we don’t really know what we’re looking for in PD.
13. A repository with standardized endpoints would be a good goal, but without overlooking the possibility to collect samples from other sources.
14. PPMI is designed for data to be available on the Web:  200 people have already been recruited and have had DAT scans, MRI, etc. People are encouraged to download data and use it. We might try to do experiments from the back end first to see if we can find what we’re looking for.
15. Biomarker scientists must be more involved with biostatisticians to address the many issues in experimental design. There is also room for epidemiologists.
16. The goal is premature. It is likely that there are treatments and biomarkers for PD not even thought of yet. Maybe PPMI is investing money the wrong way. We don’t understand this disease well enough.
17. There are 2 RFA possibilities:  1 for sample collection, as has been suggested, and 1 for discovery research on new biomarkers. Many structural and imaging studies can be done, but they should not be moved forward yet. An applicant should have to include how the endeavor is related to genetics or other current understanding of the disease, and then the appropriate animal model in moving from lab to bedside. A biomarker should be linked to current understanding of disease mechanisms in a molecular way.
18. Convenience samples are frequently submitted, and we should ask for common data elements for every discovery project so they can be compared. Second, we need to fund people for adaptive designs, or FDA disease models. Third, set up pre-motor and cross-sectional cohorts for longitudinal studies—we need a place to go for a cohort, and we need to think about how that can be made more feasible. Can we leverage the work of people who already do this? We need to brainstorm how to do that so we don’t have a single giant cohort study—maybe have different expertise at different sites. And, fourth, a pre-motor study is critical (based on experience in HD), and we need to brainstorm how to do that.
19. PD is not AD or HD, but equivalent to cancer in the 19th century, and we are very premature in talking about some of these issues. We have a phenotype and a few genes that contribute to it and we should put money into discovery of all the epitopes that end in that ultimate phenotype. Second, we need trial designs for small rapid studies. Third, we need data standards. Emulating the cancer world, we could enlist public health organizations that deal with PD to get samples. Moreover, it would be better to have a live reservoir than spend lots of money maintaining a structure that stores samples.
20. There’s so much we don’t know. It is impossible to try to build a perfect framework for each unknown discovery. We should fund a lot more basic biomarker discovery work. As for large vs small cohorts, it would take less time to fund many smaller cohorts with standardized data.
21. ADNI’s panel study format is a useful paradigm. The idea is to take many smaller slices along a longer trajectory of disease progression. AD is heterogeneous, and the farther it is followed, the more heterogeneous it becomes. This underscores the need for careful consultation with the right people (statisticians, et al.) on study design. What will analyses look like? What will you gain from them? We have to plan analyses before we collect data. Having a core data set is critical. We need to look at the list of potential biomarkers we already have. It would be extremely helpful to be very specific about the biomarker questions. Each purpose has study design issues associated with it. This implies having a coordinated infrastructure for data management, study design, statistics, etc.
22. At NCI, biospecimens are collected at the tumor level for people who submit a proposal. The investigator needs an independent cohort of enough volume to verify what has been done. The analogy is a partnership with PPMI—a pool of individual investigators with potential candidates. We need to know the clinical question and then do initial discovery and have an independent cohort. NCI has developed networks—think-tank workshops—to define clinical questions. Then an RFP could be issued to get specimens, and a year later an RFA issued. It’s a target-based approach vs a discovery-based effort. Because NCI has a network, discovery volume can be at a local scale.
23. Add a metric for success of this initiative:  Put infrastructure in place, and speed up weeding out the rubbish—that’s a success, too. Definitive failure is important.


Adjournment

Dr. Bruijn adjourned the meeting at 4:36 pm, and Dr. Koroshetz thanked everyone for their contributions and discussions.


Last updated December 23, 2013