Number 22, July '98
Edited by Paul Mulholland P.Mulholland@open.ac.uk
The use of questionnaires in PoP research by Alan Blackwell
An analysis of the present state of Psychology of Programming in Italy by Eleonora Bilotta
Eisenstadt/Bonar $1000 end-user-programming bet!! by Marc Eisenstadt
The Psychology of Programming Interest Group (PPIG) was established in 1987 in order to bring together people from diverse communities to explore common interests in the psychological aspects of programming and/or in computational aspects of psychology. ‘Programming’, here, is interpreted in the broadest sense to include any aspect of software development. The group, which at present numbers approximately 300 world-wide, includes cognitive scientists, psychologists, computer scientists, software engineers, software developers, HCI people et al., in both Universities and industry.
PPIG aims to provide a forum for the rapid dissemination of results, ideas, and language or paradigm tool development, circumventing the long time-lag of conferences and journals. It does this by maintaining two electronic mailing lists - one for announcements and one for discussion - by publishing two newsletters a year, maintaining pages on the World Wide Web, and organising a workshop annually, together with other workshops as and when required.
The annual workshops, which always attract a high percentage of attendees from outside the United Kingdom, consist of keynote addresses by eminent practitioners in the relevant fields, discussion panels, software demonstrations and seminar-like presentations. Invited speakers have included Professors Jack Carroll, Bill Curtis, Laura Leventhal, Clayton Lewis, Gary Olson, Peter Polson, Elliot Soloway and Willemien Visser. Venues have included the Universities of Warwick (1989), Wolverhampton (1990), Huddersfield (1991), Loughborough (Jan 1992), INRIA (Paris) (Dec. 1992), the Open University (Jan. 1994), University of Edinburgh (1995), University of KaHo Sint Lieven, Ghent (1996) and Sheffield Hallam University (1997).
In 1996, for the first time, a workshop was held specifically to allow post-graduate students in the relevant disciplines to come together, give presentations and exchange ideas. It is hoped that this will be the first of a series.
There is no subscription. Financial help in the past has come from Xerox EuroPARC, the DTI and EPSRC. Further information is available from the organiser, Judith Segal.
The PPIG website can be found at http://www.ppig.org/
Questionable practices: The use of questionnaires in PoP research
Alan F. Blackwell
MRC Applied Psychology Unit
The best introduction to the research methods used in Psychology of Programming (PoP for the sake of levity) is David Gilmore’s excellent chapter in the PoP book (Gilmore 1990). One area that David did not cover in his introduction is the use of survey methods, including questionnaires. This is no reflection on his contribution, because survey methods are not a normal part of the research repertoire of experimental psychology, which is after all our theoretical context. I, however, have been sufficiently unwise to include several surveys in the work carried out for my psychology PhD (Blackwell 1996a, 1996b, Whitley & Blackwell 1997).
At the last PPIG workshop, I discovered a co-offender in Helen Sharp, who with Jacqui Griffyth reported on a survey of programmers taking a course in object-oriented programming (Sharp & Griffyth 1998). A coffee-break conversation with Helen and Marian Petre revealed that Marian had a guilty stash of unanalysed, unpublished survey data - who knows how many others there are? Since then, I have tried to find out a little more about how we can use surveys in PoP - this is the (rather informal) result. I am indebted to Jackie Scott, a research methods specialist in the Social and Political Sciences Faculty in Cambridge, who has advised me and helped to clarify these ideas.
The nature of surveys and questionnaires
David Gilmore did, in his introduction to PoP methods, discuss interview studies. These are a type of survey, although the interviews he described are informal ones, and are usually conducted with very small sample sizes - this kind of survey is not intended as a basis for drawing general conclusions. When it is necessary to draw general conclusions, the following precautions are normally taken: a large sample is used, and the interviewer follows a question script (perhaps including options based on previous responses), from which they do not deviate. More exploratory interviews can be valuable (as in the study published by Marian Petre and me, Petre & Blackwell 1997), but it is important to recognise the potential influence that the interviewer has had on individual responses.
If the interviewer is reading questions from a script, the study starts to seem pretty much the same as a written questionnaire - once one has decided to carry out a formal survey, the choice of delivery method is not as important as the initial choice between formal and informal methods. Some of the analysis methods described below are applicable to informal surveys, however - the techniques used for coding of open format responses can also be used for analysis of informal protocols. They can even be used for text collected in non-research settings. In the first opinion survey I conducted (Blackwell 1996a), the opinions about programming that I analysed were collected from published computer science research papers. Kirsten Whitley and I later evaluated these alongside questionnaire responses, using identical techniques.
As in experimental design, the most straightforward survey design compares the responses of two or more sample groups. The groups can either be identified in advance (e.g. regular Windows programmers versus subscribers to a visual programming mailing list in Whitley & Blackwell 1997), or from a response to a forced choice question (e.g. “are you a professional programmer, or do you just write computer programs as part of your job?”). Once groups have been identified, all other responses can be compared between them, assuming that the measures are equivalent for each group. The main alternative to simple comparison of groups is to identify correlations between responses. This is a little more complicated, as described below. As in experimental design, surveys should be designed with research hypotheses in mind - there are other techniques (interviews, focus groups) that are more suitable for exploratory opinion investigations.
In questionnaire design, the most fundamental choice is whether each question should be open or closed. Closed format responses are hugely easier to analyse, but they have the disadvantage that one must anticipate what the respondents want to say - and you rarely succeed in doing this. You will quite likely miss the most interesting (because unexpected) findings. One solution is to run a pilot survey with open format responses, and use those results to create the set of response options for the main survey.
Closed format questions might ask respondents to agree or disagree with a statement, to select one of several categories as corresponding most closely to their opinion, or to express their opinion as a position on an ordered scale. The design of scales is the most complex of these, particularly in the choice of whether or not the scale should have a midpoint. The standard British social attitude scale is a 5-point scale; the standard American scale has 4 points. If you use a scale without a midpoint, you may force an artefactual response - the respondent genuinely has no opinion, but they are forced to make something up on the spot (quite likely on the basis of preceding questions). If a midpoint is provided, you can’t tell the difference between someone who has never thought about the question and someone who has mixed opinions. One approach is to include qualifying questions (“do you have an opinion about …?”) before requesting opinion measures. Another is to include a specific “don’t know” option - the relative advantages and disadvantages are discussed at length by Schuman & Presser (1981). A further problem in the use of rating scales is to account for the fact that each respondent will interpret the scale differently. The standard approach is to include a neutral “anchor” question as the first item in a series, and evaluate all other responses relative to the anchor.
Where the response to different questions is to be compared, it is very important to minimise unnecessary differences between them (as in any controlled experiment). Some traps that I have nearly or actually fallen into include a) ordered scales in which the direction of the ordering changes from one question to the next and b) changes of tense between questions (implicitly asking for the respondent’s current opinion in some questions, while others ask about their opinion in the past, or at some time in the future).
A well-known trap in survey design is induced bias. One should, of course, avoid introducing the survey, or phrasing individual questions, in a way that will bias the respondent toward a particular response.
I mentioned above the alternative survey administration methods of interviews and questionnaires. A third alternative, used in exploratory research, is the focus group. This has further advantages in discovering unexpected opinions. A valuable approach to pilot studies is to ask members of a focus group to complete a questionnaire that will be used in the main study, then ask them to discuss what they meant by their responses. This can identify problems in questionnaire design, as well as providing a framework for coding open format responses.
Three advantages of interviews over questionnaires (beyond the fact that they can be carried out on the telephone) are that questions will not be missed accidentally, validity of responses can be checked, and probe questions can be added in the case that a respondent makes a particular response or combination of responses. These advantages are also available in automated surveys. In the study described by Whitley & Blackwell (1997), we created an HTML form, and asked respondents to complete the form. When they submitted the form, their response was immediately checked by a CGI Perl script. If they had failed to respond to any question, the script generated a feedback page, suggesting that they complete the missing question. This script could also have asked probe questions.
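The validation step is simple to sketch. Our version was a Perl CGI script; the following Python fragment is a hypothetical reconstruction of the same idea, with invented question names and a minimal stand-in for the feedback page:

```python
# Sketch of the missing-response check described above. The original
# was a Perl CGI script; this hypothetical Python version uses made-up
# question identifiers purely to illustrate the technique.

REQUIRED_QUESTIONS = ["experience_years", "language_used", "usability_rating"]

def find_missing(form_data):
    """Return the required questions left blank in a submitted form."""
    return [q for q in REQUIRED_QUESTIONS
            if not form_data.get(q, "").strip()]

def feedback_page(missing):
    """Build a simple HTML feedback page listing the unanswered questions."""
    items = "".join(f"<li>{q}</li>" for q in missing)
    return ("<html><body><p>Please answer the following "
            f"questions before resubmitting:</p><ul>{items}</ul></body></html>")

submission = {"experience_years": "5", "language_used": "",
              "usability_rating": "3"}
missing = find_missing(submission)
if missing:
    print(feedback_page(missing))
```

In a live survey the feedback page would be returned as the HTTP response, and the same hook could be extended to ask probe questions conditioned on particular answers or combinations of answers.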
If paper questionnaires are used, it is more difficult to guarantee a complete (or any) response. My experience is that personal contact is a great help. For the survey described in (Blackwell 1996b) I dressed in my best suit, and stood at the door of a trade show for two days. Each person who arrived was politely invited to participate, and given a questionnaire form. I had arranged with the trade show organisers that completed questionnaires could be left in a box at their prominent desk. The questionnaire was a single page, and was clearly headed with the amount of time that it would take to complete (three minutes). These precautions resulted in a response rate approaching 50%, far higher than would be expected when people receive unsolicited questionnaires, or are asked to return them by post.
Analysis of closed questions is relatively easy. Many survey analyses use a Chi-square test to assess differences in response between groups. Where ordered scales are used, they are often treated as approximations to a normally distributed range of opinion. Used with caution, t-tests may therefore be sufficient for hypothesis testing that compares different groups. Similarly, pairwise t-tests can be used to compare systematic differences between the responses to two different questions. There are, of course, more sophisticated alternatives - but I won’t report on them, because I haven’t used them. A further alternative for the analysis of ordered rating responses is the use of correlation analysis to find out whether opinions on different questions are related.
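To make the group comparison concrete, here is a small self-contained sketch of the Pearson chi-square statistic for a contingency table of closed-question responses. The counts are invented for the example; in practice one would use a statistics package to obtain the p-value as well:

```python
def chi_square(table):
    """Pearson chi-square statistic for a contingency table of counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of rows and columns
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat

# Invented data: rows are two respondent groups, columns are
# "agree" / "disagree" counts on a closed question.
observed = [[30, 20], [10, 30]]
print(chi_square(observed))
```

The resulting statistic is compared against the chi-square distribution with (rows - 1) × (columns - 1) degrees of freedom to decide whether the groups differ.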
Analysis of open questions is far more foreign to most PPIG people, although it has some similarity to protocol analysis techniques. It proceeds in two phases. The first is to create a coding frame - a set of rules for allocating responses to defined categories. The coding frame would ideally be based on the results of a pilot study, although I have always used a sample of the main survey in my own work. The coding frame is structured either to answer one or more specific questions derived from the study hypotheses, or to capture a range of response. The latter is perhaps a more likely application of open questions, and is more typical of my projects. In this case, the coding frame would be structured in a hierarchical manner, with broad response topics, each of which is usually subdivided.
Assessing whether open responses are in favour or not in favour of some position within the coding frame is complicated. One should err on the side of non-classification, and then remember that the absence of a statement on a particular topic may reflect a firm opinion just as much as making an extreme statement does. People will often not mention their most deeply held beliefs, on the basis that these are obvious and do not need to be stated. Statistical comparisons should therefore always treat respondents who didn’t make a statement on a given theme as a separate group.
Coding of open responses according to the coding frame would normally be done by a coding panel of at least three people who are not aware of the study hypotheses. Reliability between coders should be “in the high 90% range” according to Jackie Scott. If this is not achieved, the coding frame should be refined. I should note that Dr. Scott usually deals with surveys having thousands of respondents. The largest study that I have been involved with attracted 227 responses (Whitley & Blackwell 1997).
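The simplest reliability figure of this kind, mean pairwise percent agreement between coders, can be computed as follows. This is a hypothetical sketch with made-up coder names and categories; more sophisticated measures, such as Cohen's kappa, additionally correct for chance agreement:

```python
from itertools import combinations

def percent_agreement(codings):
    """Mean pairwise percent agreement between coders.

    `codings` maps each coder's name to a list of category codes,
    one code per response, in the same order for every coder.
    """
    pairs = list(combinations(codings.values(), 2))
    agreements = sum(a == b for c1, c2 in pairs for a, b in zip(c1, c2))
    comparisons = sum(len(c1) for c1, _ in pairs)
    return 100.0 * agreements / comparisons

# Made-up codings of four open responses by three coders
codings = {
    "coder_a": ["usability", "power", "usability", "other"],
    "coder_b": ["usability", "power", "power", "other"],
    "coder_c": ["usability", "power", "usability", "other"],
}
print(percent_agreement(codings))
```

If the figure falls short of the target, the usual remedy is to tighten the coding-frame rules and recode, rather than to discard the disagreeing responses.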
A difficult point for computer scientists to accept is that surveys measure opinions rather than observable behaviour (as is the case in most HCI or PoP research). This makes them seem even more “soft” than experimental psychology, and perhaps less scientific. Nevertheless, research papers are still published in which computer scientists evaluate new user interfaces or programming systems by asking a small sample of raters to compare them to whatever they were using previously (“Is my new system better, much better, or phenomenally better than the one you were using before?”). In the face of such habits, we can improve things substantially by at least taking a scientific approach to opinion analysis.
Things are slightly confused in some of my studies, where I was purposely measuring uninformed, naive, opinion (Blackwell 1996b). Discussion of these results requires a careful distinction regarding the object of study: for what purpose is it useful to know about uninformed opinions? They tell us about people’s theories of programming, but not about how people actually do programming.
Some distinctions that are likely to be important include:
- If respondents say they like some programming system, they may not actually like it. They may be students whose well-liked course lecturer has asked them to assess the result of his five year research project - how can they disappoint him by rating it badly? (This example is based on a study that I saw reported recently).
- Even if a respondent actually does like what they say they like, this may not mean that they think it is more usable. Most of the respondents in one of my studies (Whitley & Blackwell 1997) were almost evangelical in their support for the system they were assessing, but many of them just liked it, in the way that we like the interface fashions of the day: text menus, bitmap icons, shaded icons, virtual reality, web pages with big splash screens.
- Even if a respondent specifically thinks that some system is more usable than another, they may be mistaken. Payne (1995) has observed that ordinary users are not easily able to assess which interfaces will be more usable.
- If a respondent thinks they have learned something, they may not necessarily have learned it (Glenberg, Wilkinson & Epstein 1982).
- Of course, the respondents’ perception of the progress they have made in a programming task, as in many problem solving tasks, may not accurately reflect real progress (Metcalfe & Wiebe 1987).

These are only some of the cautions that have already become clear to me. I’m still learning about this - if you are interested in using survey techniques, please feel free to contact me. If I get enough interest, I would like to run a tutorial session at the next PPIG workshop. In the meantime, I must read some more introductory texts on survey methods: Jackie Scott recommends Moser & Kalton (1979).
This report is cobbled together from other people’s contributions - I knew nothing before I started. Jackie Scott was very generous in giving her time to someone so ignorant. Pat Wright at the APU helped design my first questionnaire. Kirsten Whitley suffered through my discovery process, having approached me as a collaborator because I knew more about surveys than she did! Thomas Green has been a stimulating and tolerant PhD supervisor. My research is funded by the Advanced Software Centre of Hitachi Europe.
Blackwell, A.F. (1996a). Metacognitive Theories of Visual Programming: What do we think we are doing? In Proceedings IEEE Symposium on Visual Languages, pp. 240-246.
Blackwell, A.F. (1996b). Do Programmers Agree with Computer Scientists on the Value of Visual Programming? In A. Blandford & H. Thimbleby (Eds.), Adjunct Proceedings of the 11th British Computer Society Annual Conference on Human Computer Interaction, HCI’96, pp. 44-47.
Gilmore, D.J. (1990). Methodological issues in the study of programming. In J.-M. Hoc, T.R.G. Green, R. Samurçay & D.J. Gilmore, Psychology of Programming. London: Academic Press, pp. 83-98.
Glenberg, A.M., Wilkinson, A.C. & Epstein, W. (1982). The illusion of knowing: Failure in the self-assessment of comprehension. Memory & Cognition, 10(6), 597-602.
Metcalfe, J. & Wiebe, D. (1987). Intuition in insight and noninsight problem solving. Memory and Cognition, 15(3), 238-246.
Moser, C. & Kalton, G. (1979). Survey methods in social investigation (2nd ed.). Aldershot: Gower.
Payne, S.J. (1995). Naive judgements of stimulus-response compatibility. Human Factors, 37(3), 473-494.
Petre, M. & Blackwell, A.F. (1997). A glimpse of expert programmers’ mental imagery. In S. Wiedenbeck & J. Scholtz (Eds.), Proceedings of the 7th Workshop on Empirical Studies of Programmers, pp. 109-123.
Schuman, H. & Presser, S. (1981). Questions and answers in attitude surveys: Experiments on question form, wording and content. New York: Academic Press.
Sharp, H. & Griffyth, J. (1998). Acquiring object technology concepts: the role of previous software development experience. In J. Domingue & P. Mulholland (Eds.) Proceedings of the 10th Annual Workshop of the Psychology of Programming Interest Group. pp. 134-157.
Whitley, K.N. & Blackwell, A.F. (1997). Visual programming: the outlook from academia and industry. In S. Wiedenbeck & J. Scholtz (Eds.), Proceedings of the 7th Workshop on Empirical Studies of Programmers, pp. 180-208.
An analysis of the present state of Psychology of Programming in Italy
By Eleonora Bilotta firstname.lastname@example.org
Interdepartmental Centre of Communication (CIC)
Department of Educational Sciences
University of Calabria, Rende- Cosenza, Italy
This paper is an analysis of the state of the art of the Psychology of Programming in Italy. The first step is to ascertain what Psychology of Programming means in Italy, which sectors it covers, and how this area is defined and identified in Italian research. Two kinds of problems were encountered:
a) in searching for information: first, because many disciplines could be grouped together under the title of Psychology of Programming; and second, because there is a variety of journals, books and workshops in these disciplines that could be considered interrelated with, or as having some affinity to, Psychology of Programming, but nothing that treats Psychology of Programming as a whole;
b) in organising the work: how to present contents that seem very incomplete and unfinished. It was decided to present all the disciplines contributing to Psychology of Programming, and to analyse the Italian research that could be of interest for this area.
What is the Psychology of Programming sector?
In Britain, in the USA and in other countries the label Psychology of Programming has been adopted to indicate a research area that investigates “the psychological aspects of programming and the computational aspects of psychology”, where ‘programming’ is interpreted broadly to include the whole process of software development. In particular, the psychological issues that pertain to programming, the theoretical and methodological issues of design, skill acquisition, expert programming and other fundamental problems are investigated. The main topics (drawn from the 10th Annual Workshop of the Psychology of Programming Interest Group, PPIG) are:
- programming tasks (e.g. comprehension, creation, documentation, modification, debugging, testing);
- reasoning and planning (e.g. strategies, programming plans, formal reasoning, display-based reasoning);
- cognitive models;
- programming notations and representations;
- programming paradigms;
- learning programming;
- software engineering (e.g. programming in the large, re-use, maintenance, scale, specification);
- social and organisational issues;
- collaboration (including CSCW);
- programming tools (e.g. environments, CASE, editors, navigation tools);
- differences among programmers.
In Italy, this area of research does not seem to be clearly identifiable under this label, even though there are some research groups and projects that could be grouped within this sector. Since Psychology of Programming is a multidisciplinary sector whose components attempt to address the topics I mentioned before from different perspectives, these disciplines will be presented in order to ascertain what is pertinent to Psychology of Programming. (Apologies to English PPIGgers who are already acquainted with these concepts, but I should like to circulate this paper through the Italian academy, so that other researchers can become active members or at least obtain information about this area.)
Disciplines contributing to the Psychology of Programming
Computer science and Software Engineering
Computer Science deals with the study of algorithmic processes that describe and transform information. This study covers the theory, analysis, design, efficiency, implementation, application and automation of software development.
Computer scientists have developed various kinds of techniques to support software design, development and maintenance: high-level programming languages, User Interface Management Systems, User Interface Design Environments, and debugging and prototyping tools. On the theoretical side they have worked on system architectures, abstractions and notations, software reuse, visualisation and virtual reality systems.
Software Engineering analyses design problems through a simple but non-linear model which foresees: requirements analysis and definition; system and software design; implementation and unit testing; and integration and system testing. There is also the rapid-prototyping model, which allows the realisation of software packages and foresees the collection of software requirements, rapid design, prototyping, evaluation, revision and product engineering. It has been calculated that about half of the software developed in recent years has been devoted to user interface systems. This explains why a considerable proportion of the effort and energy spent on software development has been concentrated on guaranteeing the usability of these user interfaces. Software developers are now giving greater attention to the characteristic of a system that most concerns its users: usability.
Italian research centres within these disciplines. There are 20 or more Engineering Faculties active in the Computer Science and Software Engineering fields, some of which offer doctoral programmes in Computer Science. The research activities carried on in these academic departments can be roughly grouped into the following areas: Computer Graphics, Computer Vision, Databases, Fuzzy Logic, Informatics, Artificial Intelligence, Multimedia, Neural Networks, Programming Languages and Software Development, Information Technology and Engineering, to mention just a few.
University of Pisa (http://www.di.unipi.it/ricerca/aree/aree.html) Some research areas are: Algorithms and Data Structures, Architecture of System Elaboration, Artificial Intelligence and Robotics, Databases and Information Retrieval, Computational Mathematics, Programming Languages, Software Engineering, Logic Programming, Theory of Functional Language Types, and Logical and Operational Methods.
University of Udine (http://www.dimi.uniud.it/~tasso/general.html) The Artificial Intelligence Laboratory of the University of Udine was founded in 1984 within the Department of Mathematics and Computer Science. It has been very active in several research areas of artificial intelligence, robotics and other traditional fields of Computer Science.
University of Turin (http://www.di.unito.it/home.html) Department of Computer Science. The Department hosts all the researchers of the University of Turin who are active in the Computer Science field, and is the main support of the curriculum in Computer Science offered by the Faculty of Mathematical, Physical and Natural Sciences. The research activities undertaken in this Department can be grouped into the following areas: Artificial Intelligence, Programming Languages and Tools, Image Processing, Information Systems and Databases, Information Technology, Logic Programming and Automated Reasoning, Mathematical Logic, Modelling and Analysis of Computing Systems, Models for Decision Making and Management, Education, and Theoretical Computer Science.
University of Bari (http://www.di.uniba.it/aboutdi/index.htm) The Computer Science Department of the University of Bari is active in the areas of: Computational Intelligence, Computer Science in Education, Learning, Databases and Decision Support Systems, Knowledge Acquisition and Machine Learning, Intelligent Interfaces, Intelligent Systems, and Simulation Methods and Techniques. This Department also houses a Cognitive Science laboratory.
University of Bologna (http://www-lia.deis.unibo.it/) Advanced Informatics Laboratory. LIA is a laboratory of the DEIS Department of the University of Bologna. Research areas: Computer Science, Artificial Intelligence, Distributed Systems, Programming Paradigms and Languages.
University of Milano (http://www.dsi.unimi.it/) Department of Information Science. The main themes considered in this Department are: Multimedia System Development, Development of Communication Systems, Image Elaboration, Dialogue Systems for Human Learning, Virtual Reality, Scientific Visualisation, and the Realisation of Expert Systems, particularly in Medicine.
Cognitive Psychology
Cognitive Psychology deals with processes like perception, attention, memory, learning, thinking and problem solving in humans, from the information-processing point of view. In recent years, many psychological frameworks have been adopted which more adequately characterise the way people work with each other and with many ‘cognitive artefacts’, including computers. Cognitive Psychology is relevant to the design of programming languages with the aim of understanding the human mental processes that underlie programming, including the use of models to predict human performance and the use of empirical methods for testing computer systems.
Italian research within this discipline. There are 30 or more Departments of Psychology, degree courses and Institutes of Psychology within Medicine, Arts and Humanities Faculties.
Department of Psychology, University of Bologna (http://www-psicologia.psibo.unibo.it/dip2.htm) A WWW page of Internet sites that may be of interest to the Psychology of Programming.
Intelligent Agents Research Group (IARG), Department of Psychology, University of Trieste (http://psicosun2.univ.trieste.it/) The Group is involved in a variety of research projects, most of which concern the use of simulators as important research tools.
Institute of Psychology, CNR (National Research Council), Rome (http://psicosun2.univ.trieste.it/) AI Section, Interactive Cognitive Models. Research topics: Multi-Agent Systems and Distributed AI; Artificial Intelligence Planning Systems; Knowledge Representation; Social Simulation; Cognitive Modelling; Human-Computer Interaction.
Institute of Scientific and Technological Research (IRST), Genoa (http://ecate.itc.it:1024/) Mechanised Reasoning Group.
Research Group on Artificial Life (G.R.A.L.) (http://kant.irmkant.rm.cnr.it/gral.html) Department of Neural Systems and Artificial Life, CNR (National Research Council), Rome. Research interests: Artificial Life, Genetic Algorithms, Neural Networks, Learning, Computational Biology, Adaptive Computation, Complex Dynamical Systems, Evolutionary Robotics.
R.I.E.S.CO (http://www.crs4.it/~luigi/RIESCO/homepage_riesco.html) Association for integrated research in the evolution of cognitive systems. The RIESCO association operates in the experimental psychology research domain. One of its principal functions is the diffusion of new tools and clinical processes to enhance the study of cognitive operations, mainly those associated with learning systems.
University of Pavia, Psychology Department (http://www.unipv.it/~webpsyco/welcome.html) Cognitive Psychology research areas: Intelligence, Memory, Psycholinguistics.
Interdepartmental Laboratory of Communication, University of Calabria, Cosenza (http://uni.abramo.it/server/server/Cubo20/index.html) The Interdepartmental Laboratory of Communication looks at problems related to teaching communication through the use of IT, and at the creation of environments for experimentation in multimedia and programming languages. The areas of research that PPIG might be interested in are:
- experimentation in elementary schools with didactic units built using AgentSheets, to evaluate the learning of the basics of programming by 7- and 8-year-old subjects and to see whether this software helps in the learning and building of mental models;
- experimentation with new formative paths in multimedia and programming languages (both visual and non-visual) within the Degree Course of Arts, Music and Performing Arts (DAMS). One problem that we are finding, though, is that as the students come to the course from the Humanities, they aren’t necessarily familiar with programming methods or tools;
- simulation of some of the aspects of cognitive processes through the use of visual programming languages (article presented to the 10th PPIG workshop);
- research on mental models of visually impaired people interacting with computers.
Social and organisational Psychology
Computers influence social and organisational issues (the structure and functions of organisations in terms of information flow, technology, working practice, power hierarchy, size and complexity, etc.) and the way people co-operate and behave in the workplace. The Psychology of Programming can gain insights from this research area for understanding models of organisational change.
The Italian research institutes in this discipline
University of Trieste (http://psicoserver.univ.trieste.it/salone/DIPART.html) Psychology of Work.
University of Padova (http://126.96.36.199/DidaW/DW2076.htm) Psychology of Work; Social and Organisational Psychology.
Ergonomics and Human Factors The role of Ergonomics is to define and design tools and various artefacts for different kind of human activity from work to leisure, to suite to the humans. The objective is to get information from the above sciences into the context of design and into the programming activity.
Italian research in this discipline
Italian Ergonomics Association (SIE), http://www.psych.unito.it/htdocs/ergo/sie.html
An association whose aim is to promote the development of Ergonomics and the dissemination and organisation of knowledge and experience, with attention to social and productive realities. It is affiliated to the IEA, the official international body which brings together and co-ordinates ergonomics societies around the world.
University of Turin, Laboratory of Ergonomics (http://www.psych.unito.it/htdocs/ergo/sie.html)
Research activities: Certification, Learning, Organisational and Management Planning, Man-Computer Communication, Ergonomics and Health, Ergonomics and Handicaps, Ergonomics and Safety, Ergonomics and Toxicology, Ergonomics and Lighting. In the area of Human-Computer Interaction the following topics are investigated: error-analysis methods, planning methodology, methodology and psychometric tests to evaluate discomfort, usability tests of manuals, usability tests of prototypes, evaluation scales, experimentation techniques, and video recording.
Artificial Intelligence
Artificial Intelligence is concerned with the design of intelligent computer programs which simulate many different aspects of intelligent human behaviour, especially the knowledge structures used in human problem solving. Subjects’ problem-solving activities and formal reasoning are, in particular, topics of the Psychology of Programming.
Internet sites related to Artificial Intelligence
Italian Association for Artificial Intelligence (AIIA), Torino (http://www.di.unito.it/~aiia/)
The AIIA was founded in 1988 with the aim of promoting AI research and applications.
The FACE-IT project (http://gracco.irmkant.rm.cnr.it/luigi/lupa_face.html)
The Face-it Project (FIP) package uses a Genetic Algorithm to evolve pictures of facial expressions and is meant to be used in psychological studies. The evolution of the pictures is driven by the user’s evaluation of a number of facial expressions shown on the screen.
In Italy, Artificial Intelligence is taught in the Faculties of Engineering and in the Departments of General Psychology.
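The interactive genetic algorithm described for FACE-IT, where the user's on-screen ratings stand in for an automatic fitness function, can be sketched as follows. This is a minimal illustration only: the genome layout, population size, and all function names are assumptions for the sketch, not taken from the FACE-IT code.

```python
import random

GENOME_LEN = 8   # hypothetical face parameters: eyebrow angle, mouth curvature, ...
POP_SIZE = 6     # number of faces shown on screen at once

def random_face():
    return [random.random() for _ in range(GENOME_LEN)]

def crossover(a, b):
    # single-point crossover between two parent genomes
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(face, rate=0.1):
    # perturb each gene with small Gaussian noise, with probability `rate`
    return [g + random.gauss(0, 0.05) if random.random() < rate else g
            for g in face]

def next_generation(population, ratings):
    """Breed a new population, biased toward faces the user rated highly."""
    def pick():
        # fitness-proportionate selection using the user's ratings as weights
        return random.choices(population, weights=ratings, k=1)[0]
    return [mutate(crossover(pick(), pick())) for _ in population]

population = [random_face() for _ in range(POP_SIZE)]
# In the real system the user would rate the faces displayed on screen;
# here we substitute fixed example ratings.
ratings = [5, 1, 3, 1, 4, 2]
population = next_generation(population, ratings)
```

The key design point is that no objective fitness function exists: the human judgement loop is the selection pressure, which is what makes the technique suitable for studying subjective qualities such as facial expression.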
Human Computer Interaction
Human Computer Interaction is a discipline whose principal aim is to improve the quality of interaction between humans and computer systems, creating artefacts that are simple to use, learn, work and play with. To create such systems, it is necessary to apply knowledge about human goals, cognitive processes, tasks, capabilities and limitations, together with knowledge about the capabilities and limitations of computers, and to relate these findings to an understanding of the social environments in which users work. Since different disciplines contribute to HCI, there are many shared and distinct research areas into which many of the disciplines mentioned above fall.
In Italy HCI is taught in the Engineering Faculties and in the Social Psychology departments.
Italian research within this discipline
SIGCHI Italian group (http://www.etnoteam.it/cqs/e4.html)
SIGCHI, an ACM group with a particular interest in Human-Computer Interaction, has been activated in Italy with the aim of spreading information about human-computer interaction problems. It brings together researchers in the design, evaluation and implementation of interactive systems.
Some of the principal factors that negatively affect the growth of the Italian Psychology of Programming research area
Probably, the most relevant factors affecting the Italian Psychology of Programming research sector are:
- the lack of academic institutes working in this multidisciplinary sector;
- the lack of software industries and the crisis in Informatics;
- diverse sectional and disciplinary interests;
- the lack of consistent multidisciplinary groups;
- the lack of relevant information on the Psychology of Programming research area even in the academic world.
It is necessary to bear in mind that, in the last few months, the Italian Government has been working on a new law to change and modernise the academic and school systems. We all think that this law will also change some of the old ways of thinking, allowing the Italian educational system to reach European and American standards.
Our institute, the Interdepartmental Centre of Communication (CIC) of the University of Calabria, intends to move in this direction:
- to organise, in agreement with PPIG, the Psychology of Programming workshop for the year 2000, inviting all the Italian institutes that are interested in developing such an area;
- to create an aggregation centre (under PPIG’s direction) for researchers from related disciplines who have scientific contributions to offer to the Psychology of Programming;
- to publish a WWW server on the Psychology of Programming and to disseminate information.
Eisenstadt/Bonar $1000 end-user-programming bet!!
Marc Eisenstadt M.Eisenstadt@open.ac.uk
SUMMARY: a 1985-1995 $1000 bet predicted the scarcity of end-user programming
Though the subject line may cause your spam-filters to throw out this email, I just wanted to raise an old issue.
At the 1985 Empirical Studies of Programmers conference I made a public bet with Jeffrey Bonar, witnessed by Thomas Green, Elliott Soloway and others, arguing that end-user programming would NOT be nearly as pervasive as most of the attendees seemed to believe. I argued that LESS THAN 20% of a sample of ‘professional’ citizens (doctors, lawyers, teachers, etc.) would routinely be undertaking what we would call ‘programming’ (formally defined in the bet as requiring the use of variables, conditionals, and flow of control) as part of their professional lives even TEN YEARS from the time of the bet (i.e. in 1995). Thomas Green agreed to be the judge, and the bet was agreed to be $1000.
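The bet's formal criterion, use of variables, conditionals, and flow of control, sets a fairly low bar. A fragment as small as the following (a hypothetical modern Python illustration; the bet itself predates the language) would already count as 'programming' under that definition:

```python
# Minimal fragment meeting the bet's definition of 'programming':
# a variable, a conditional, and flow of control (a loop).
overdue = 0                      # variable
for days in [3, 12, 45, 7]:     # flow of control
    if days > 10:               # conditional
        overdue += 1
print(overdue)                  # prints 2
```

Even so, routine use of all three constructs by 20% of professionals proved a demanding threshold, which is the crux of the bet.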
In my recollection, this was considered by most attendees to be somewhat rash on my part: Logo was in the ascendancy, and programming was something people (especially professionals) not only needed to know; it was also getting easier and more pervasive all the time.
In my opinion, I won easily (though I’m happy for Thomas to chime in with a comment about this). I was wondering whether PPIG-ers out there would like to comment on:
a) the bet itself (we can dig up the formal wording, if you like)
b) the whereabouts of Jeffrey Bonar (sorry, Jeff: I could have just looked you up with a search engine, but I’m actually interested in the community commentary)
c) what we should do with the money? (some kind of prize fund would be fine; I don’t personally want the dough, nor do I want to inflict a 1K fine on Jeff, but where I come from this is nevertheless a serious business!).
Any comments on the bet? Email the ppig discussion group at email@example.com.
Special Issue on Usability Engineering
Empirical Software Engineering: An International Journal Kluwer Academic Publishers
Editors in Chief: Victor R. Basili and Warren Harrison
This Special Issue on Usability Engineering will appear in early 1999 and is meant to build a bridge between the software engineering and the human-computer interaction research communities. Usability engineering is emerging as the commercial and practical discipline that incorporates participatory design, rapid prototyping, and iterative evaluation.
User-centered design and usability evaluations have become common practices in many organizations, but they are still novel and typical development cycles do not accommodate these practices. Widespread inclusion of usability engineering methods in development would be fostered by empirical studies validating these methods and case studies addressing cost/benefit issues.
We seek papers describing empirical studies, field studies or case studies of topics such as:
Design
- Frameworks and methodologies for user interface design and development
- Incorporation of usability engineering into software engineering lifecycles
- Novel methods for obtaining user requirements
- Cost-benefit analysis of usability engineering methods
- Use of existing data from other domains (sociological, demographic, market analysis; cognitive and social psychology) in product design and evaluation
- Use of field research methods in product design
- International and cross-cultural software engineering
Testing and Reviews
- Evaluation of strategies for expert reviews and usability testing
- Utility of preference vs. performance measures
- Validation of surveys and metrics
- Web-based remote usability testing
Special Issue Editors:
Jean Scholtz and Ben Shneiderman
Submit papers by July 10, 1998 to:
Jean Scholtz, Ph.D.
NIST Building 225, Room 216
Gaithersburg, MD 20899, USA
(301) 975-2520 Fax: (301) 975-5287
Notification with reviews will be by August 31 and the revised version will be due on October 30.
PPIG ‘99 Workshop
Conference website: workshops/1999-annual-workshop