Fidelity Confounds in Experimental Design

2012 EEN Forum

Led by Jonathan Tunik

Mr. Tunik began by discussing his work with a large urban school district that was testing a new literacy program. Sixty-four schools were chosen to participate in either the experimental or the comparison condition. The outcome measure was the state standardized literacy test. The analysis used an HLM (hierarchical linear model), which accounts for the nesting of students within schools so that shared school context does not bias the results. Problems the case study encountered:

1. The clients questioned whether the state tests were valid for low-scoring students.
2. Calling a group a "control" group: it is more accurately a "comparison" group.
3. Teachers and superintendents talk to each other, so information spills over into the other schools.
4. His company came in during the middle of the study; they were not able to design the study from the beginning.
5. The school district's training culture: you cannot withhold training from someone who wants it.
6. Federal guidelines say you should measure what is happening at the comparison schools, but some schools did not return surveys, so the study lacked data.
7. Some students move between treatment and comparison schools within the district.
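The HLM analysis mentioned above can be sketched as a two-level random-intercept model, with students nested in schools. The sketch below is illustrative only: the variable names, effect sizes, and synthetic data are assumptions, not figures from the actual study, and it uses the `statsmodels` MixedLM interface as one common way to fit such a model.

```python
# Hypothetical two-level HLM sketch: students (level 1) nested in schools
# (level 2), with treatment assigned at the school level. All data here are
# synthetic and illustrative, not taken from the study described above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_schools, n_students = 20, 30

school = np.repeat(np.arange(n_schools), n_students)
# School-level treatment assignment (experimental vs. comparison condition).
treatment = np.repeat(rng.integers(0, 2, n_schools), n_students)
# Shared school context, modeled as a random intercept per school.
school_effect = np.repeat(rng.normal(0, 2, n_schools), n_students)
score = 50 + 5 * treatment + school_effect + rng.normal(0, 5, school.size)

data = pd.DataFrame({"score": score, "treatment": treatment, "school": school})

# Random-intercept model: grouping by school accounts for the clustering of
# students within schools, so the treatment estimate is not biased by
# treating correlated students as independent observations.
model = smf.mixedlm("score ~ treatment", data, groups=data["school"])
result = model.fit()
print(result.params["treatment"])  # estimated school-level treatment effect
```

A plain OLS regression on the same data would understate the standard error of the treatment effect, because students in the same school are not independent; the random intercept absorbs that shared variance.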

In this case, fidelity means adherence to the program model, and not all of the schools in the study showed high fidelity. For fidelity measures, they relied on ostensibly objective measures based on teacher self-reports. However, those reports reflected the respondents' own perspectives, which is an issue when creating and analyzing such data.

There are many opportunities and methods for gaining insight into what goes on in schools. Possible future studies might include a model without predefined parameters, or a network study.


  1. Jonathan Tunik
    24 July 12, 9:04am

    The title of this session should read, “Fidelity Confounds in Experimental Design”.
