Evaluation of published research

Williamson’s chapter is a clear, pragmatic exemplar of the high-quality reporting it encourages. In helpful bullet points, it explains not only how to evaluate research reports but also how to create them.

I particularly enjoyed Box 18.1, which distills this already concise chapter into a checklist of research evaluation criteria. The criteria for research findings struck me as especially good guidelines for reporting and presenting usability studies.

Online dissemination of research will likely spark a need for metadata element additions to these seven report components. To increase the findability of their articles, authors and editors will likely need to consider the following additional fields:

  • keywords (perhaps drawn from controlled vocabularies such as the LCSH);
  • an official title;
  • a marketing title – a headline that teases and entices browsers to view the actual article;
  • a lead – a short description or introduction that serves the same enticing purpose;
  • an outline or table of contents for our bullet-list-oriented web readers;
  • an explanation of the article’s utility or originality.

There may also be a need for other metadata fields for online content. We definitely need clear, shared standards for metadata harvesting and exchange so that the user’s experience is seamless, regardless of the interface.
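As a minimal sketch of what such shared metadata might look like, the fields proposed above could be encoded as a Dublin Core-style XML record, the kind of record typically exchanged between repositories. The standard `dc:` elements are real Dublin Core; the element names for the non-standard fields (`marketingTitle`, `lead`, `outline`, `significance`) are hypothetical, invented here for illustration only.

```python
import xml.etree.ElementTree as ET

# Dublin Core element-set namespace (a real, widely used vocabulary).
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

def build_record(fields: dict) -> str:
    """Serialize a metadata dict into a simple XML record string."""
    record = ET.Element("record")
    # Standard Dublin Core elements for title and keywords.
    ET.SubElement(record, f"{{{DC}}}title").text = fields["title"]
    for kw in fields.get("keywords", []):
        ET.SubElement(record, f"{{{DC}}}subject").text = kw
    # Hypothetical extensions for the extra fields suggested above;
    # these names are not part of any standard.
    for name in ("marketingTitle", "lead", "outline", "significance"):
        if name in fields:
            ET.SubElement(record, name).text = fields[name]
    return ET.tostring(record, encoding="unicode")

xml = build_record({
    "title": "Evaluation of published research",
    "keywords": ["Research evaluation", "Information science"],
    "marketingTitle": "Can you trust that study?",
    "lead": "A checklist for judging research reports.",
})
print(xml)
```

A shared record shape like this is what would let different interfaces harvest and present the same article consistently, whatever the local display conventions.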

Williamson, Kirsty. 2002. Evaluation of published research. In Research methods for students, academics, and professionals: Information management and systems (2nd ed., pp. 305-322). Wagga Wagga, New South Wales, AU: Centre for Information Studies, Charles Sturt University.