Abstract:
For over 15 years, visualization tools have attempted to present the complexity of concurrent programs in easily digestible formats. For example, tools that display execution-based animations of concurrent algorithms have been used extensively in educational contexts to illustrate the behavior of those algorithms to students. However, there is little documented evidence that such tools significantly improve users' comprehension of concurrent code.
This paper proposes an evaluation method for determining programmers' comprehension of concurrent systems. It is based on a review of current algorithm animation tools and on existing measures of comprehension. The resulting method provides a framework within which creators of algorithm animation tools (and of other tools that support the understanding of concurrent systems) can evaluate their products.