PPIG 2023 - 34th Annual Workshop
Parallel Program Comprehension: A Mental Model Approach
Leah Bidlake, Eric Aubanel, Daniel Voyer
Abstract: Empirical research on the mental model representations formed by programmers during parallel program comprehension is important for informing the development of effective tools and instructional practices, including model-based learning and visualizations. This work builds on our initial pilot study, expanding research on mental models and program comprehension to include parallel programmers. The goals of the study were to validate the stimulus set, consisting of programs written in C using OpenMP directives, and to determine the type of information included in expert parallel programmers’ mental models formed during the comprehension process. The task used to stimulate the comprehension process was determining the presence of data races. Participants’ responses to the data race task and their confidence in those responses were analyzed to determine the validity of the stimuli. Responses to questions about the programs were analyzed to determine the type of information included in participants’ mental models and the types of models (situation and execution) participants may have formed. The results of the experiment indicate that the level of difficulty of the stimuli (accuracy rate of 80.88%) was appropriate and that participants were from our target population of experts. The results also provide insight into the type of information included in expert parallel programmers’ mental models and suggest that the data structures aspect of the situation model (the identification of data structures) was not present; however, there is evidence that the data structures aspect of the execution model (the behaviour of data structures) was present. Further investigation into this topic is needed, and this study provides a stimulus set that will be useful for those wanting to expand research on mental model representations to include the parallel programming paradigm.
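To illustrate the kind of comprehension task described above, the following is a minimal sketch (not drawn from the study’s actual stimulus set) of a C program with an OpenMP directive in which unsynchronized updates to a shared variable produce a data race:

#include <stdio.h>
#include <omp.h>

int main(void) {
    int sum = 0;
    int data[100];

    for (int i = 0; i < 100; i++)
        data[i] = i;

    /* Each thread reads and writes the shared variable sum without
       synchronization (no reduction, atomic, or critical section),
       so concurrent updates can interleave: a data race. */
    #pragma omp parallel for
    for (int i = 0; i < 100; i++)
        sum += data[i];

    printf("sum = %d\n", sum);
    return 0;
}

Deciding whether such a loop races requires reasoning about which variables are shared across threads and how their values evolve during execution, which is the kind of judgment the data race task was designed to elicit.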