Model Building. Cognitive Processes. Human Factors
Cognitive Processes. Modifications in theories of human behavior have been both the cause and the effect of research in behavioral science computing. A “cognitive” revolution in psychology occurred during the 1950s and 1960s, in which the human mind became the focus of study. A general approach called information processing, inspired by computer science, became dominant in the behavioral sciences during this period.
Attempts to model the flow of information from stimulus input to behavioral output have included considerations of human attention, perception, cognition, memory, and, more recently, emotional reactions and motivation. This general approach has become a standard model that is still in wide use.
Cognitive science’s interest in computer technologies also stems from the potential to implement models and theories of human cognition as computer systems, such as Newell and Simon’s General Problem Solver, Chomsky’s transformational grammar, and Anderson’s Atomic Components of Thought (ACT).
ACT represents many components and activities of human cognition, including procedural knowledge, declarative knowledge, propositions, spreading activation, problem solving, and learning. One benefit of implementing models of human thought on computers is that the process of developing a computer model constrains theorists to be precise about their theories, making it easier to test and then refine them.
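To make the idea of spreading activation concrete, the following Python sketch is a toy illustration only, not Anderson’s actual ACT implementation: declarative chunks hold a base activation, and retrieval depends on that base level plus activation spread from related chunks currently in the focus of attention. All names and numeric values are invented for the example.

```python
# Toy illustration (not Anderson's actual ACT implementation): declarative
# chunks are retrieved according to base-level activation plus activation
# spread from related chunks currently in the focus of attention.

class Chunk:
    def __init__(self, name, base_activation=0.0):
        self.name = name
        self.base_activation = base_activation
        self.links = {}  # related Chunk -> association strength

    def associate(self, other, strength):
        self.links[other] = strength

def total_activation(chunk, context):
    """Base-level activation plus activation spread from chunks in the current context."""
    spread = sum(strength for source, strength in chunk.links.items() if source in context)
    return chunk.base_activation + spread

def retrieve(chunks, context, threshold=0.5):
    """Return the most active chunk if it exceeds the retrieval threshold, else None."""
    best = max(chunks, key=lambda c: total_activation(c, context))
    return best if total_activation(best, context) >= threshold else None

# Hypothetical example: the context chunk "dog" primes "bark" more than "meow".
dog = Chunk("dog", 0.2)
bark = Chunk("bark", 0.1)
meow = Chunk("meow", 0.1)
bark.associate(dog, 0.6)
meow.associate(dog, 0.1)
print(retrieve([bark, meow], context={dog}).name)  # -> bark
```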
As more has been learned about the human brain’s ability to process many inputs and operations simultaneously, cognitive theorists have developed connectionist computer models: large networks of simple, densely interconnected processing units (nodes) operating in parallel.
The overall arrangement of interconnected nodes allows the system to organize concepts and relationships among them, simulating the human mental structure of knowledge in which single nodes may contain little meaning but meaning emerges in the pattern of connections. These and other implementations of psychological theories show how the interactions between computer scientists and behavioral scientists have informed understandings of human cognition.
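The sketch below, also a simplified illustration rather than any published model, shows the connectionist idea just described: activation is propagated through a matrix of connection weights, so a concept is carried not by any single unit but by the pattern of activity the connections produce across the network. The weights here are random placeholders.

```python
# Simplified connectionist sketch: activation is propagated through a matrix of
# connection weights, so a concept is encoded in the pattern of activity across
# all units rather than in any single unit. Weights are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_units = 8
weights = rng.normal(scale=0.5, size=(n_units, n_units))  # connection strengths
np.fill_diagonal(weights, 0.0)                             # no self-connections

def propagate(input_pattern, steps=10):
    """Pass activation through the connections for a fixed number of steps."""
    activation = np.array(input_pattern, dtype=float)
    for _ in range(steps):
        activation = np.tanh(weights @ activation + input_pattern)  # internal + external input
    return activation

# A partial cue produces a full distributed pattern of activity;
# no individual unit "means" anything on its own.
cue = np.array([1, 0, 0, 1, 0, 0, 0, 0], dtype=float)
print(np.round(propagate(cue), 2))
```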
Other recent theoretical developments include a focus on the social, contextual, and constructive aspects of human cognition and behavior. From this perspective, human cognition is viewed as socially situated, collaborative, and jointly constructed. Although these developments have coincided with shifts from stand-alone computers to networks and Internet-based systems that feature shared workspaces, it would be erroneous to attribute these changes in theoretical models and explanation solely to changes in available technology.
Instead, many of today’s behavioral scientists base their theories on approaches developed by early twentieth-century scholars such as Piaget and Vygotsky. Here the focus shifts from examining individual cognitive processing to evaluating how people work within a dynamic interplay of social factors, technological factors, and individual attitudes and experiences to solve problems and learn.
This perspective encourages the development of systems that provide mechanisms for people to scaffold other learners with supports that can be strengthened or faded based on the learner’s understanding.
The shift in views of human learning from knowledge transfer to knowledge co-construction is evident in the evolution of products to support learning, from early computer-assisted instruction (CAI) systems, to intelligent tutoring systems (ITS), to learning from hypertext, to computer-supported collaborative learning (CSCL). An important principle in this evolution is that individuals need the motivation and capacity to be more actively in charge of their own learning.
Human Factors. Human factors is a branch of the behavioral sciences that attempts to optimize human performance in the context of a system that has been designed to achieve an objective or purpose. A general model of human performance includes the human, the activity being performed, and the context.
In the area of human-computer interactions, human factors researchers investigate such matters as optimal workstation design (e.g., to minimize soft tissue and joint disorders); the perceptual and cognitive processes involved in using software interfaces; computer access for persons with disabilities such as visual impairments; and characteristics of textual displays that influence reading comprehension.
An important principle in human factors research is that improvements to the system are limited if considered apart from interaction with actual users. This emphasis on contextual design is compatible with the ethnographic movement in psychology that focuses on very detailed observation of behavior in real situations. A human-factors analysis of human learning from hypermedia is presented next to illustrate this general approach.
Hypermedia is a method of creating and accessing nonlinear text, images, video, and audio resources. Information in hypermedia is organized as a network of electronic documents, each a self-contained segment of text or other interlinked media. Content is elaborated by providing bridges to various source collections and libraries. Links among resources can be based on a variety of relations, such as background information, examples, graphical representations, further explanations, and related topics. Hypermedia is intended to allow users to actively explore knowledge, selecting which portions of an electronic knowledge base to examine.
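A minimal sketch, assuming nothing beyond the description above, of how such a hypermedia knowledge base can be represented: resources are self-contained nodes, and links carry typed relations (background information, examples, graphical representations, further explanations, related topics). The class and resource names are hypothetical.

```python
# Hypothetical data model for a hypermedia knowledge base: each resource is a
# self-contained node, and links carry typed relations to other resources.
from dataclasses import dataclass, field

@dataclass
class Resource:
    title: str
    media_type: str                              # "text", "image", "video", "audio"
    links: list = field(default_factory=list)    # list of (relation, target Resource)

    def link(self, relation, target):
        self.links.append((relation, target))

# The user chooses which link to follow from the current resource.
intro = Resource("Photosynthesis overview", "text")
diagram = Resource("Chloroplast diagram", "image")
detail = Resource("Light-dependent reactions", "text")
intro.link("graphical representation", diagram)
intro.link("further explanation", detail)

for relation, target in intro.links:
    print(f"{intro.title} --[{relation}]--> {target.title}")
```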
However, following links through multiple resources can pose problems when users become disoriented and anxious, not knowing where they are and where they are going. Human factors research has been applied to the hypermedia environments of digital libraries, where users search and examine large-scale databases with hypermedia tools.
Rapp et al. suggest that cognitive psychology’s understanding of human cognition should be considered during the design of digital libraries. Hypermedia structures can be fashioned with an awareness of processes and limitations in human text comprehension, mental representations, spatial cognition, learning, memory, and other aspects of cognitive functioning.
Digital libraries can in turn provide real-world environments to test and evaluate theories of human information processing. Understandings of both hypermedia and cognition can be informed through an iterative process of research and evaluation, where hypotheses about cognitive processes are developed and experiments within the hypermedia are conducted. Results are then evaluated, prompting revisions to hypermedia sources and interfaces, and generating implications for cognitive theory.
Questions that arise during the process can be used to evaluate and improve the organization and interfaces in digital library collections. For example, how might the multiple sources of audio, video, and textual information in digital libraries be organized to promote more elaborated, integrated, and better encoded mental representations? Can the goal-directed, active exploration and search behaviors implicit in hypermedia generate the multiple cues and conceptual links that cognitive science has found best enhance memory formation and later retrieval?
The Superbook hypertext project at Bellcore was an early example of how the iterative process of human-factors analysis and system revision prompted modifications to the original and subsequent designs before improvements over traditional text presentation were observed.
Dillon developed a framework of reader-document interaction that hypertext designers used to ensure usability from the learner’s perspective. The framework, intended to be an approximate representation of cognition and behavior central to reading and information processing, consists of four interactive elements: (1) a task model that deals with the user’s needs and uses for the material; (2) an information model that provides a model of the information space; (3) a set of manipulation skills and facilities that support physical use of the materials; and (4) a processor that represents the cognitive and perceptual processing involved in reading words and sentences.
This model predicts that users’ acts of reading will vary with their needs and their knowledge of the structure of the environment that contains the textual information, in addition to their general ability to “read” (i.e., to acquire a representation that approximates the author’s intention via perceptual and cognitive processes). Research comparing learning from hypertext with learning from traditional linear text has not yielded a consistent pattern of results. User-oriented models such as Dillon’s help designers increase the yield from hypertext relative to traditional text environments.
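As a rough illustration of how a design team might record Dillon’s four interacting elements when reviewing a hypertext design, the sketch below represents each element as a simple data structure. This is an invented formalization for illustration, not Dillon’s own notation, and the example entries are hypothetical.

```python
# Invented formalization for illustration (not Dillon's own notation): the four
# interacting elements of the reader-document framework, recorded as simple
# data structures that a design review could fill in for a given hypertext.
from dataclasses import dataclass

@dataclass
class TaskModel:
    user_goal: str            # the reader's needs and uses for the material

@dataclass
class InformationModel:
    structure: str            # the reader's model of the information space

@dataclass
class ManipulationSkills:
    actions: list             # physical use of the materials: scroll, follow link, search

@dataclass
class Processor:
    reading_level: str        # perceptual and cognitive processing of words and sentences

@dataclass
class ReaderDocumentInteraction:
    task: TaskModel
    info: InformationModel
    manipulation: ManipulationSkills
    processing: Processor

review = ReaderDocumentInteraction(
    TaskModel("find side effects of a medication quickly"),
    InformationModel("alphabetical index plus typed cross-links"),
    ManipulationSkills(["search", "follow link", "go back"]),
    Processor("skilled adult reader"),
)
print(review.task.user_goal)
```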
Virtual environments provide a rich setting for human-computer interaction where input and output devices are adapted to the human senses. Individuals using virtual reality systems are immersed into a virtual world that provides authentic visual, acoustic, and tactile information. The systems employ interface devices such as data gloves that track movement and recognize gestures, stereoscopic visualizers that render scenes for each eye in real time, headphones that provide all characteristics of realistic sound, and head and eye tracking technologies.
Users navigate the world by walking or even flying through it, and they can change scale so they effectively shrink to look at smaller structures in more detail. Krapichler et al. present a virtual medical imaging system that allows physicians to interactively inspect all relevant internal and external areas of a structure such as a tumor from any angle. In these and similar applications, care is taken to ensure that both movement through the virtual environment and feedback from it are natural and intuitive.
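The following sketch shows, in schematic form, the frame loop such systems run: read the head pose, render one view per eye, and let a glove gesture change the world scale. The device classes and their methods are invented stand-ins, not a real VR API.

```python
# Schematic frame loop with invented stand-in device classes (not a real VR API):
# read the head pose each frame, render a separate view per eye, and let a glove
# gesture change the world scale so the user can "shrink" to inspect detail.
import math
import time

class HeadTracker:
    def pose(self):
        # Real trackers report 6-DOF position/orientation; here we fake a slow head turn.
        return {"yaw": math.sin(time.time()) * 0.1, "position": (0.0, 1.7, 0.0)}

class DataGlove:
    def gesture(self):
        return None  # e.g., "pinch" could mean "zoom into smaller structures"

def render_eye(eye, pose, scale):
    # Placeholder for rendering the scene from one eye's viewpoint.
    print(f"render {eye} eye at yaw={pose['yaw']:.3f}, scale={scale:.2f}")

def frame_loop(frames=3):
    head, glove, scale = HeadTracker(), DataGlove(), 1.0
    for _ in range(frames):
        pose = head.pose()
        if glove.gesture() == "pinch":
            scale *= 0.5                  # shrink the user relative to the world
        for eye in ("left", "right"):     # stereoscopic: one image per eye
            render_eye(eye, pose, scale)

frame_loop()
```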
The emerging field of affective computing applies human factors research to the emotional interaction between users and their computers. As people seek meaning and patterns in their interactions, they have a tendency to respond to computers as though they were people, perceiving that they have human attributes and personalities, and experiencing appropriate emotions when flattered or ignored.
For example, when a computer’s voice is gendered, people respond according to gender-stereotypic roles, rating the female-voiced computer as more knowledgeable about love and the male-voiced computer as more knowledgeable about technical subjects, and conforming to the computer’s suggestions when they fall within its gender-specific area of expertise.
Picard and Klein lead a team of behavioral scientists who explore this willingness to ascribe personality to computers and to interact emotionally with them. They devise systems that can detect human emotions, better understand human intentions, and respond to signs of frustration and other negative emotions with expressions of comfort and support, so that users are better able to meet their needs and achieve their objectives. The development and implementation of products that make use of affective computing systems provide behavioral theorists a rich area for ongoing study.
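A minimal sketch of the detect-then-respond loop described above; the behavioral signals, weights, and threshold are invented illustrations rather than Picard and Klein’s actual detectors.

```python
# Invented illustration of a detect-then-respond loop for user frustration; the
# signals, weights, and threshold are hypothetical, not an actual published detector.

def frustration_score(signals):
    """Combine simple behavioral signals into a rough frustration estimate (0..1)."""
    score = 0.0
    score += 0.4 if signals.get("repeated_errors", 0) >= 3 else 0.0
    score += 0.3 if signals.get("typing_force_high") else 0.0
    score += 0.3 if signals.get("rapid_undo_redo") else 0.0
    return score

def respond(signals):
    """Offer comfort and support when frustration appears high; otherwise stay quiet."""
    if frustration_score(signals) >= 0.6:
        return ("It looks like this step is giving you trouble. "
                "Would you like a short walkthrough, or to save your work and take a break?")
    return None

print(respond({"repeated_errors": 4, "typing_force_high": True}))
```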