Visual methods for getting attention in web design

I summarize the most relevant and interesting ideas from Part 1 of Michael Eysenck's Fundamentals of Cognition, along with a few thoughts on how this research on visual perception and attention might apply to the web development, IA, and UX fields.

 

Chapter 2: Seeing vs. acting

We treat 2-D figures as if they were 3-D (because our world is 3-D, even though the image our eyes receive is not).

You have 2 systems for visual perception:
    1. The Acting (or Where) System

All about acting quickly, this system focuses on motion, is imprecise, and operates outside conscious control. Think of it as a combination of hand-eye coordination and a radar-like alertness to movement that might signal danger. It operates very early in the visual perception process.

   2. The Seeing (or What) System

All about recognizing objects, this system is slow, detailed and accurate.

    p. 35 = a chart contrasting the characteristics of these two visual systems

 

Chapter 3: Depth, size & distance perception

Our eyes are accustomed to using a number of different cues to judge distance, size, and depth in the world around us. When designers work with these pre-existing assumptions, it's easier to communicate a topographical map of a website's landscape.

There may be some relevance of motion-related cues (such as motion parallax and optic flow) to the creation of videos, but since most examples of motion on the web involve animated advertising images, I'm going to ignore them until I see them used well for content-related (and not ad-related) purposes.

These cues are already recognized by our visual systems, giving our 2-dimensional view of the world a better sense of its size, depth, and distances. Designers, who no doubt already know about them, can use them to provide a richer web experience for users. For instance, adding texture or shadows to a Submit button makes it look more clickable (see the sketch after the list). Here are the cues listed in this chapter:

  1. Linear perspective: parallel lines pointing directly away from us seem progressively closer together as they recede into the distance.
  2. Aerial perspective: more distant objects lose contrast and seem hazy.
  3. Texture: a visible surface texture implies depth.
  4. Texture gradient: textured surfaces whose elements become smaller and denser from front to back appear to slant away from us.
  5. Interposition: a nearer object hides part of a more distant object from view.
  6. Shadows: a gap between an object and its shadow can indicate movement.
  7. Familiar size: we remember the typical size of an object and use it to judge distance.
  8. Image blur: two textured regions, one sharp and one blurred, are perceived at different depths.
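
To make the Submit button example concrete, here is a minimal TypeScript sketch of applying a few of these cues in the browser. The selector, colors, and values are my own illustrative assumptions, not anything prescribed by the book:

    // A minimal sketch of applying some of the cues above to a Submit
    // button. Selector, colors, and values are illustrative assumptions.
    const submit = document.querySelector<HTMLButtonElement>("#submit");
    if (submit) {
      // Shadows (cue 6): an offset drop shadow makes the button read
      // as floating nearer to us than the page behind it.
      submit.style.boxShadow = "0 2px 4px rgba(0, 0, 0, 0.4)";
      // Texture (cue 3): a subtle gradient gives the flat rectangle a
      // hint of a curved, lit surface.
      submit.style.backgroundImage = "linear-gradient(#fafafa, #d0d0d0)";
      // Interposition (cue 5): stacking the button above neighboring
      // elements reinforces the sense that it sits in front of them.
      submit.style.position = "relative";
      submit.style.zIndex = "1";
    }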

But what if, as happens both in the world and on the web, you perceive more than one cue at a time? In that situation, your brain generally applies additivity: it combines information from all available depth cues rather than selecting just ONE. However, as Massaro argues, relatively unambiguous cues are weighted more heavily than ambiguous ones in shaping your final perception. And if the cues you perceive conflict, you are more likely to rely heavily on just ONE.
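
As a way to think about additivity and weighting, here is a toy TypeScript sketch (my own framing, not a model from the book): each cue contributes a depth estimate, less ambiguous cues get larger weights, and strongly conflicting cues trigger a fallback to the single most reliable one.

    // Toy model of additive cue combination. All names, numbers, and
    // the conflict threshold are illustrative assumptions.
    interface DepthCue {
      name: string;
      estimate: number;    // estimated depth, arbitrary units
      reliability: number; // 0..1; higher = less ambiguous, weighted more
    }

    const CONFLICT_THRESHOLD = 10; // arbitrary; tune for the units in use

    function combineCues(cues: DepthCue[]): number {
      const estimates = cues.map(c => c.estimate);
      const spread = Math.max(...estimates) - Math.min(...estimates);
      // Strong conflict: rely heavily on the single most reliable cue.
      if (spread > CONFLICT_THRESHOLD) {
        return cues.reduce((best, c) => (c.reliability > best.reliability ? c : best)).estimate;
      }
      // Additivity: a reliability-weighted average of all available cues.
      const totalWeight = cues.reduce((sum, c) => sum + c.reliability, 0);
      return cues.reduce((sum, c) => sum + c.estimate * c.reliability, 0) / totalWeight;
    }

    // Two roughly agreeing cues: the less ambiguous shadow cue dominates.
    combineCues([
      { name: "shadow", estimate: 5, reliability: 0.9 },
      { name: "blur", estimate: 6, reliability: 0.4 },
    ]); // ≈ 5.3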

Chapter 4: Unconscious perception/awareness

You have two ways of perceiving the world:

    1. conscious (outcome-focused, late in perception process)
    2. unconscious (early in the perception process)

Unconscious/subliminal perception occurs below the level of conscious awareness and can have relatively long-lasting effects.
One example of this type of processing is perceptual defense: we have greater difficulty perceiving threatening or taboo stimuli than neutral ones.

 

Chapter 5: Object recognition (what is this I see before me?)

Perceptual Organization (Gestalt)
    1. Perceptual Segregation = which parts belong together and so form objects
    2. Laws/Principles of Gestalt (applied in the sketch after this list)

  • foundation = Prägnanz (we perceive the simplest, most stable form)
  • proximity
  • similarity
  • good continuation
  • closure
  • figure-ground segregation
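
Here is how a couple of these principles might translate into layout code: a minimal TypeScript sketch, with the selector and values as illustrative assumptions. Tight spacing (proximity) and identical styling (similarity) make navigation links read as one group.

    // Proximity and similarity applied to a navigation block.
    const nav = document.querySelector<HTMLElement>("nav");
    if (nav) {
      nav.style.display = "flex";
      nav.style.gap = "4px";           // proximity: tight spacing binds the links into a group
      nav.style.marginBottom = "32px"; // ...while distance separates the group from the content
      nav.querySelectorAll("a").forEach(a => {
        a.style.color = "#0645ad";     // similarity: identical styling marks the links as alike
        a.style.textDecoration = "none";
      });
    }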

Object Recognition Stages Model (Riddoch & Humphreys, 2001)
[vs. the idea that later stages may influence earlier ones]

    1. Bottom-Up (early stages?)
        a. Edge grouping: edges are grouped along common lines, drawing on motion, color, form, and depth features
        b. Feature binding into shapes
        c. View normalization (this is controversial): creates a viewpoint-invariant representation
    2. Top-Down (later stages?)
        a. Structural description: use stored knowledge about overall form and shape
        b. Semantic system: use stored knowledge of semantic information relating to the object.

 

Chapter 6: Facial recognition

The Bruce & Young (1986) model of facial recognition has 8 components (see p. 83 for the list and explanations).

  • A person's name is accessed LAST of all the features.
  • The most interesting component is EXPRESSION analysis.

 

Chapter 7: Seeing with the mind's eye (visual imagination/imagery)

Visual imagery is the mind's eye: our act of visually imagining. It is different from, and generally less accurate than, our perception of visual stimuli, though we have more control over imagery. However, the image in our minds does resemble our visual perception and may use the same limited-capacity processes, as evidenced by the facts that:

  • the field of resolution is similar
  • there is a facilitation effect when the content of imagery and of perception is the same
  • there is an interference effect when the content of imagery and of perception is different.

Kosslyn's perceptual anticipation theory: very similar processes are involved in imagery and perception. He suggests that visual images are DEPICTIVE REPRESENTATIONS: like pictures or drawings, the objects and parts of objects contained in them are arranged SPATIALLY (we organize mental images SPATIALLY).


See p.96 for a chart of the structures and processes involved in perception and imagery (from Bartolomeo, 2002).
 
Perhaps this is why a "standard" web layout is easier to use? The content of my perception meshes with my mental image/expectation, preventing interference and even facilitating perception.

 

Chapter 8: In sight but out of mind (change blindness)

Change Blindness: failure to detect changes in the visual environment (taken advantage of by magicians).
 

Inattentional Blindness: failure to detect an unexpected object appearing in a visual display.

Factors that make change blindness more likely:

  1. when the observer is not informed beforehand that a change is coming
  2. when the changed object is similar to other objects in the display: a token change (the object is replaced by another from the same category) is less likely to be noticed than a change of category
  3. when the object that changes didn't receive much attention prior to the change (i.e., we weren't paying attention to it)

This change blindness can be good for web developers, since it can make gradual improvements and upgrades unnoticeable. It can be bad, however, if your marketing updates are the thing that goes unnoticed.
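
As a sketch of the "gradual improvements" idea: roll out a restyle one small change per visit instead of all at once, so each step stays below the change-blindness threshold. Everything here (the storage key, the style steps, applying it to document.body) is a hypothetical illustration in TypeScript:

    // Apply one more restyle step per visit, counting visits in localStorage.
    const steps: Array<Partial<CSSStyleDeclaration>> = [
      { fontFamily: "Georgia, serif" },
      { lineHeight: "1.6" },
      { color: "#222222" },
    ];

    function applyGradualRestyle(el: HTMLElement): void {
      const visits = Number(localStorage.getItem("visitCount") ?? "0") + 1;
      localStorage.setItem("visitCount", String(visits));
      // Each visit sees at most one unfamiliar "token change" at a time.
      steps.slice(0, visits).forEach(step => Object.assign(el.style, step));
    }

    applyGradualRestyle(document.body);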

Evidence suggests that only moderately detailed information about previously attended objects remains available for at least a few seconds after they cease to receive attention, and very little of it makes it into long-term memory. In other words, we remember MORE details when we are PAYING ATTENTION. Attention-getting is therefore important if we want to highlight the changes we're making.

 

Chapter 9: What do we attend to in vision?

Much of the time, visual attention is object-based rather than location-based, and theories don't yet explain why it's sometimes location-based. Ergo, the human visual attentional system is FLEXIBLE, which makes sense given that stimuli compete for our attention.

Restatement of Gestalt theory: The grouping processes (e.g. law of similarity, law of proximity) occurring fairly early in visual processing divide the visual environment into figure (central object) and ground.

Unattended visual stimuli receive less processing than that received by attended stimuli.

One theory holds that there are 2 attentional systems (illustrated in the sketch after this list):

  1. stimulus-driven: involuntary, bottom-up. It attends to salient/conspicuous unattended visual stimuli.
  2. goal-directed: voluntary, top-down. It involves the selection of sensory information and responses, and is influenced by goals, knowledge, and expectations.
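
Because the stimulus-driven system responds involuntarily to salient stimuli, a brief, one-time animation is one way to recruit it without nagging the user. A minimal TypeScript sketch using the Web Animations API; the element, colors, and timing are my assumptions:

    // Pulse a changed element once to catch bottom-up attention, then
    // settle so the user's goal-directed system isn't fought for long.
    function pulseOnce(el: HTMLElement): void {
      el.animate(
        [
          { transform: "scale(1)", backgroundColor: "transparent" },
          { transform: "scale(1.05)", backgroundColor: "#fff3b0" },
          { transform: "scale(1)", backgroundColor: "transparent" },
        ],
        { duration: 1200, iterations: 1 }
      );
    }

    // e.g. highlight a price that just updated (hypothetical element id)
    const price = document.querySelector<HTMLElement>("#updated-price");
    if (price) pulseOnce(price);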

Cross-modal attention involves more than one sense: attention is multi-sensory, not just visual. These cross-modal effects need further study before they can be predicted reliably. Studies show that attending to a stimulus in one modality at a given location typically draws attention to stimuli in other modalities at the same location.

Zoom lens theory: we choose the size of the focal area of attention.
Split attention: the allocation of attention to 2 non-adjacent regions of visual space.


So, the question I have is: what are white hat/ethical ways to get attention?

 

Chapter 10: Multitasking

 

Multi-tasking: performing 2 or more tasks at the same time by switching rapidly between tasks.

More intelligent people can multitask more effectively than less intelligent ones. Even so, people are about 50% slower when multitasking than when doing a single task: two 10-minute tasks take roughly 30 minutes interleaved instead of 20 minutes done in sequence, which is a HIGH opportunity cost.

Two kinds of models have been proposed to explain this finding: a central capacity model (one big pool of capacity for all processing) and a multiple-resource model.

The Multiple-Resource Model (Wickens, 1984) argues that the processing system consists of independent processing mechanisms in the form of multiple resources, each with a limited capacity.

There are 3 stages in this process:

  1. encoding
  2. central processing
  3. responding

There are 2 key assumptions of this model:

  1. There are several pools of resources based on distinctions among:
    1. stages of processing
    2. modalities (auditory or visual)
    3. codes (spatial or verbal)
    4. responses (manual or vocal)
  2. If 2 tasks draw on different pools of resources, then people should be able to perform both without disruption (see the sketch after this list).
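
A toy TypeScript encoding of these two assumptions (my own framing, and simplified: it tags tasks with modality, code, and response pools, omitting the stages-of-processing distinction). Two tasks are predicted to interfere only where their pools overlap.

    type Modality = "auditory" | "visual";
    type Code = "spatial" | "verbal";
    type Response = "manual" | "vocal";

    interface Task {
      name: string;
      modality: Modality;
      code: Code;
      response: Response;
    }

    // Returns the pools two tasks share; an empty result means the model
    // predicts they can be performed together without disruption.
    function sharedPools(a: Task, b: Task): string[] {
      const clashes: string[] = [];
      if (a.modality === b.modality) clashes.push(`modality: ${a.modality}`);
      if (a.code === b.code) clashes.push(`code: ${a.code}`);
      if (a.response === b.response) clashes.push(`response: ${a.response}`);
      return clashes;
    }

    // Driving (visual/spatial/manual) vs. conversing (auditory/verbal/vocal)
    // share no pools, which is why that combination is usually manageable.
    sharedPools(
      { name: "driving", modality: "visual", code: "spatial", response: "manual" },
      { name: "conversing", modality: "auditory", code: "verbal", response: "vocal" }
    ); // => []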

If one task is automatic and practiced, then the other is less likely to cause interference. However, automatic processes are inflexible when the situation changes.
Criteria for automatic processes:

  1. they are fast
  2. they don't require attention, and so don't reduce the capacity to perform other tasks at the same time
  3. they are unavailable to consciousness
  4. they are unavoidable, meaning they always occur when an appropriate stimulus is presented (involuntary).

Errors of automatic attention (attentional slips) from Sternberg (p.136)

  1. Capture errors
  2. Omissions
  3. Perseverations
  4. Inappropriate sequencing errors
  5. Description errors
  6. Data-driven errors
  7. Associative-activation errors
  8. Loss-of-activation errors

In addition, if both tasks involve direct stimulus-response relationships, interference is lessened, since direct mappings are easier than indirect stimulus-response mappings.


Ways tasks can be similar:

  1. stimulus modality: e.g. both involving visual or both involving auditory presentation
  2. central processing: e.g. both involving spatial processing
  3. responding: e.g. both requiring manual responses or both requiring vocal responses.

Studies have obtained evidence of dual-task interference via the PRP effect. Psychological Refractory Period (PRP) Effect = the slowing of the response to the second of two stimuli when the two are presented close together in time.

Dual-task interference findings can be explained by central capacity theories: the resources of some central capacity have been exceeded.

 

Bibliography

Eysenck, Michael W. (2006). Fundamentals of cognition. New York: Psychology Press.

Sternberg, Robert J. (2004). Cognitive psychology. Wadsworth Publishing.