Tuesday, July 15, 2008

Can You Create Well-Designed Consistent CRFs for the Site?

It's all about perspective...
The title question was posed at the recent SCDM Data Quality webinar. Most respondents answered “yes”. My answer is “it depends”. From a given sponsor’s perspective, the answer is “yes”, but from a site’s point of view, it is definitely “no”. Sites may never do more than one study with any given sponsor, so from their perspective, the CRFs (or EDC system) is never seen again. Each set has slightly (or very) different questions, with different answers, grouped on different pages, and with different completion instructions. Visits have different names, forms for non-completed visits may or may not have to be returned to the sponsor, and entry edit checks fire for different reasons. If sites don’t keep these rules straight, we think they produce poor quality data.

So what is the solution? The CDISC CDASH project will resolve some of these dilemmas. It identifies the minimum set of data fields for most common study designs, along with CRF completion guidelines. Many fields are linked to standard terminology, ensuring that code lists are consistent. This encourages similar content, and that will help the sites, but a major obstacle remains. On the whole, each company’s data management group believes that it has the best answer to each of these challenges; it has developed the best and highest-quality solutions, practices, and procedures, and few have any interest in changing. If one assumes that most have had submissions accepted, then one of two possibilities must be true: either they are all “good enough” for the purposes of the development project, or all the QC, QA, audits, and oversight are so sloppy that they fail to detect the flaws in these processes. Granted, some study designs, indications, and drugs/biologics vs. devices do require differences, but my experience suggests that these are very minor.
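
To illustrate what terminology-linked fields buy the sites, here is a minimal sketch in Python. The field and code-list names are my own illustration (loosely based on the CDISC controlled terminology for AE severity), not an official CDASH schema; the point is simply that when every sponsor draws answer choices from the same shared list, the site sees the same options and the same checks everywhere.

```python
# Hypothetical sketch: a CDASH-style field tied to a shared
# controlled-terminology code list, so every sponsor's CRF presents
# the same choices to the site. Names here are illustrative only.

AESEV_CODELIST = ("MILD", "MODERATE", "SEVERE")  # shared AE severity terms

def validate_aesev(value: str) -> bool:
    """Return True if the submitted severity matches the shared code list."""
    return value.strip().upper() in AESEV_CODELIST

print(validate_aesev("Moderate"))  # True: maps onto the shared terminology
print(validate_aesev("Grade 2"))   # False: a sponsor-specific variant
```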

That leads to the inevitable conclusion that these variations are a matter of preference, and do not impact quality. Some will be more efficient, precise, or suited to habit, but they achieve the same result. Think about what happened the last time you were shifted to another project in your organization. Chances are that you had to learn new rules, and until you learned them, you were more likely to make mistakes. There is no reason to believe it is any different for the sites.

Why, then, can’t we agree upon common practices, rules, and approaches for these common activities? Are we so convinced of our own superiority that we refuse to change? Are we afraid that others will steal our good process ideas and get to market first? Are we just “used to doing it that way”, or have we “always done it that way”, or do we believe that “regulations or Biometrics or Clinical require that we do it that way”? In other words, if it ain’t broke, don’t fix it? Well, I argue that it is “broke”. It is “broke” for the sites, and if it ain’t fixed we’ll continue to lose investigative sites, pour ever more resources into trying to inspect quality into the data, and miss the opportunity to maximize the CDASH revolution.

So what do you think? Do you agree? Disagree? Want to throw turnips? Should life be more consistent for the sites at the expense of our processes? We’re a pretty inventive bunch – I bet we could find ways to be efficient while collaborating with the sites to improve consistency.

I’ve started a list of practices that I think could be harmonized. Do you agree with the list? What can you add? What shouldn’t be there? How do you approach these activities and why? Are your reasons based in concrete need or historical habit or the belief that someone else internally “won’t like it”? How can we create a sponsor/site forum that would be trusted by both groups? Who might have to collaborate internally and externally to make this happen? Let’s talk!

1. When a subject terminates early, do CRFs for all visits after termination have to be returned to Data Management? (applies to paper only, I assume)
2. Do sites complete new AE and Con Med forms for each subject at each visit, or are existing forms updated? i.e., is AE and Con Med information captured in a visit-based style or a log-based style?
3. How should visits be referenced? Can they be standardized to 1, 2, 3, etc., with a possible variation for course-based studies (e.g., oncology, although that could probably be Visit 1, 2, 3 etc within each course)?
4. When should we start capturing AEs and SAEs? Informed consent? Start of treatment (technically it can’t be an AE if treatment hasn’t started)? SAEs at informed consent and all AEs at treatment start? Should it depend upon whether the study requires potentially harmful screening procedures? On something else?
5. Once the CDASH data fields have been finalized, can we agree on a consistent layout? What should it be for each of the domains?
6. For Adverse Event forms, should the layout be portrait with one AE per page or screen, or landscape with multiple lines per page or screen? What are the pros and cons of each? Note that this is not about how the data are stored in the database – just how the site would see them.
7. Should monitoring guidelines be given to the sites? Put in the site’s study manual? Included in the EDC application? If not, why not?
8. Should the list of edit checks be given to the sites? If not, why not? Don’t we want the sites to understand what the data should look like? If so, how can the checks be presented so that they are accessible and understandable? (One idea is sketched just below this list.)
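
To make that last point concrete, here is a minimal sketch (in Python, with hypothetical field names such as informed_consent_date and ae_start_date – nothing here is taken from any particular EDC system) of how an edit check might be expressed in a form a site coordinator could actually read: the rule, the fields it touches, and the message it fires.

```python
from datetime import date

# Hypothetical edit check, written so a site coordinator can read it:
# an AE cannot start before the subject signed informed consent.
# Field names are illustrative, not from any real EDC system.

def check_ae_start(informed_consent_date: date, ae_start_date: date) -> str | None:
    """Return a site-facing query message, or None if the data pass."""
    if ae_start_date < informed_consent_date:
        return ("AE start date precedes the informed consent date. "
                "Please verify both dates.")
    return None

msg = check_ae_start(date(2008, 7, 1), date(2008, 6, 28))
print(msg)  # fires, because the AE start precedes consent
```

Presented this way, the check doubles as documentation of what the data should look like, rather than a surprise that fires after the fact.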

Each of these questions, and many more, could be a discussion in its own right, and the answers depend upon any number of assumptions about the underlying processes, but surely, if we really want to, we can make this work. I look forward to your comments.

Photo: Romulus, Ann Arbor, Michigan. c. 2008, Kit Howard.