Information Technology and Libraries vol. 18, no. 4

Visualization of Metadata

Donald Beagle


Visualization research has transformed the operating system environment of Web browsers and OPACs, but has not yet changed the way we manipulate content. The potential for visualization of metadata and metadata-based surrogates is discussed, including a command interface for metadata viewing, site mapping and data aggregation tools, dynamic derivation of surrogates, and a reintroduction of transient hypergraphs from the tradition of cocitation networking. Digital library research into query-specific instantiation through agents accessing a central metadata repository is also discussed in the context of potential synergies between querying, browsing, and group information sharing.


A new generation of Web-based OPACs is now reaching the library marketplace, at the same time that libraries are experimenting with the inclusion of URLs and related surrogates for an expanded range of electronic information resources.1 These OPACs make use of the graphical user interface (GUI) standards that originated with Xerox PARC and entered the marketplace through the Apple Macintosh Operating System (OS), Microsoft Windows, and Web browsers from MOSAIC to Netscape.

But while GUIs generate a visual representation of the operating system environment, they remain primarily list-based on the level of record retrieval and access for both print and electronic resources. The user can interact with such lists to view very basic permutations, such as by date of creation or last modification, by application, by file name or type, and so on. This situation repeats itself with typical Web search engines. The user interacts with the search engines via the graphical environment of the browser, but is answered by search results in the form of hundreds or thousands of items in multipage sequential lists.

GUIs displaced older interfaces such as MS-DOS by offering a better way to visualize the OS environment through a symbolic syntax of nouns (icons representing files and applications) and action verbs (pointers and menu commands). Over the past decade, a parallel line of visualization research at Xerox PARC and similar laboratories has turned to displaying intrinsic or extrinsic relationships among various containers of content and their contextual domains of discourse. Whereas a GUI gives the user a graphical representation of an environment and toolset designed for the retrieval of information, knowledge visualization tools generate graphical representations of meaningful relationships among the retrieved files or objects themselves.

The potential benefits of such research may be suggested by the example of a typical city map. One side normally features an alphabetical list of street names paired with location codes. The other side features a grid aligned to the location codes showing icons representing streets, buildings, or sites of special interest. Each side of the map is a useful starting point for certain types of queries. The list side excels when the user knows the name of a street and formulates a quick directed search. But this fails to help the user who wishes to gain a general or in-depth overview of the relationships between commercial districts, residential neighborhoods, and recreational facilities. To extend the analogy: list-generating search engines and OPACs are useful to those entering the Web to retrieve specific predetermined chunks of information, but less useful to those who need an overview of Web resources related both to one another and to given domains of discourse.

The GUI browser environment certainly offers the potential for integration of content visualization as well as a navigational schema equivalent to the map's coded locator grid, but there is still a dearth of applications offering graphical overviews of content. From the user's perspective, one side of the potential knowledge map remains perpetually blank. This is surprising, considering that browsers are designed for visually oriented browsing, as the nomenclature implies (e.g., Netscape Navigator, Microsoft Internet Explorer), and an extensive set of graphical tools now frames the browser interface. But without equivalent content visualization to extend the browsing activity to meaningful semantic navigation and exploration, Web browsing all too often remains a superficial, even trivial, exercise.

Why has visualization research advanced further and faster on the interface side than on the content side? Rorvig and Wilcox have pointed out that the most common avenue of visualization research has entailed full-text document processing.2 In these technologies, rather than presenting users with a ranked list, the visualization system presents a two-dimensional (and in some cases a three-dimensional) display of the relationships among retrieved items. Such systems use statistical techniques based on lexical analysis to cause highly similar items to be displayed proximately within a derived visual space. A user probing at the densest areas of the dataset display would normally be rewarded by the presentation of the most highly relevant items. Thus, instead of being confronted by a list of hundreds of document titles, user attention is focused by aggregation of the most relevant ones within the predetermined visual arena.

As applied to current Web retrieval technologies, such visualization schemas will probably remain second-level access tools, to be applied only after brute force comparison of search terms against indexing. Preliminary lines of second-level research have included examples by Mukherja at Georgia Institute of Technology and by Lamping, Rao, and Pirolli at Xerox PARC.3 But this tradition of research also suggests why content-analysis visualization tools have not successfully penetrated either the Web search engine or library OPAC marketplaces: they are too computationally demanding to be economically viable. Rorvig and Wilcox describe one visual access tool for special collections as offering promising results, but point out that the dataset displayed by the visualization requires nearly 1.5 CPU hours on a SunSparc Explorer 5000 to organize.4 Couple the full-text processing requirement with the additional time needed for rendering visualization displays, typically involving hyperbolic geometry, and one sees that the bar for widespread commercial or end-user application remains frustratingly high.

Clearly, an alternative approach to visualization would be useful for economic viability and end-user accessibility. This article will argue that one promising avenue for content-based visualization research will be the development of tools designed for the analysis and manipulation of surrogates and their constituent metadata elements. The advantages of processing metadata or metadata-based surrogates in place of original content go well beyond issues of visualization, of course, and have been described in this context by Lagoze.5

The practical value of surrogate processing or preprocessing for Web retrieval has already been tested to a limited degree. OCLC's Scorpion project, designed to build tools for automatic subject recognition based on standard schemes such as the Dewey Decimal Classification (DDC), included a surrogate preprocessing experiment by Jean Godby. Godby assembled a set of articles featuring the term "AT&T" into a single document and fed this to Scorpion for full-text analysis. Scorpion demonstrated acceptable to good subject recognition. In a second run, the raw text was first run through a set of natural language processing tools to create a surrogate, and this surrogate was then given to Scorpion. Scorpion provided better, more specific subject recognition from the surrogate than from the original raw text.6 The key point is that Scorpion's subject recognition algorithms did not change, only the input to Scorpion. Since subject recognition is the initial component of most content-based visualization schemes, the Godby experiment suggests that preprocessed surrogates may offer improved visualization quality and retrieval accuracy in addition to streamlined computation.

The explosion of Web resources has posed a peculiar challenge for Web-based OPAC design, since OPACs are surrogate-dependent while the Web remains largely surrogate-free. It is technically possible for a Web-based OPAC to query its normal assemblage of captive surrogates for all non-Web resources while "handing off" the same query for nonsurrogate full-text Web searching in parallel. This hand-off scenario, however, fails to bring to Web searching the benefits of surrogates described by Lagoze. Secondly, it does not address the needs of libraries wishing to selectively bring Web resources into the OPAC domain through qualitative filters and criteria. Lastly, it may also effectively prevent content visualization tools from entering the library OPAC marketplace due to the computational demands inherent in processing raw Web content.

But while standard Web page surrogates have not yet appeared, there currently exists a widening variety of scenarios for the inclusion of Web metadata based on schemes such as the Text Encoding Initiative (TEI), the Dublin Core (DC), and the Gateway to Educational Materials (GEM). These are not mutually exclusive domains, but overlap and incorporate certain common elements. Frameworks for the coexistence of metadata schemas have also become important, examples being the Warwick Framework (WF) and the XML-based Resource Description Framework (RDF). Finally, various developers are also creating systems for handling metadata in repositories. While such repositories are currently designed for data warehousing and application development needs, they will likely spin off conventions and practices that will find their way into more general Web metadata schemas.

With the emergence of metadata frameworks, various interest groups are expected to assemble different types of surrogates serving many specific needs. But the frameworks introduce yet another level of abstraction and complexity, representing a potential barrier to usability. The course of Web development would thus appear to have reached a stage where visualization and metadata can offer one another important advantages. Visualization can offer users a set of retrieval tools that place abstract and complex metadata elements and frameworks in a consistent and more easily understood context. This context can make use of icons, toolbars, pointers, and other proven GUI elements to allow users to manipulate metadata elements in effective and creative ways. From the other direction, the use of metadata can bring important advantages to visualization research and development. Analysis of surrogates and their metadata elements will almost always be more cost-effective and less computationally demanding than analysis of source text. Accepted metadata frameworks will potentially allow various visualization tools to converge on a set of accepted operational standards. And because library OPACs are already based upon the manipulation of surrogates, they can become a potential market for a new generation of metadata visualization applications. This article will explore some pathways of current and potential future development in the application of visualization techniques to metadata and metadata-based surrogates.


"View Metadata"

Weibel has pointed out that the easiest way to deploy metadata on the Web is to embed it in HTML documents using the META tag.7 Given sufficiently widespread adoption, it may also be that the simplest way to place embedded Web metadata in a visualization framework is to add a "metadata" option to the browser's View menu, similar to the "view source" command that allows the user to view the HTML code level of a given page. One can easily imagine a tri-level view option: the top default level showing the Web page as it is designed to appear, the second level displaying the body of HTML tags and source text, and the third level displaying the embedded metadata for that page or site, in this case perhaps the bibliographic header tags of the TEI or the fifteen elements of the DC set. Scripts and programs designed to extract metadata from source code documents in this manner have been under development for some time, such as the Metadata Harvest Program developed to extract GEM metadata from HTML-tagged documents.8
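
To make the idea concrete, the following sketch (written in Python, with an invented sample page; it is not the Metadata Harvest Program or any other cited tool) shows how Dublin Core elements embedded through META tags might be pulled out of an HTML source for a third-level "view metadata" display.

# A minimal sketch of extracting embedded Dublin Core META tags from an HTML page,
# roughly what a hypothetical "view metadata" browser option would need to do.
from html.parser import HTMLParser

SAMPLE_PAGE = """
<html><head>
  <title>Charlotte Streetcar History</title>
  <meta name="DC.Creator" content="D. Beagle">
  <meta name="DC.Subject" content="Street-railroads -- North Carolina">
  <meta name="DC.Date" content="1999-10-01">
</head><body>...</body></html>
"""

class DCMetaExtractor(HTMLParser):
    """Collect META tags whose NAME attribute begins with the 'DC.' prefix."""
    def __init__(self):
        super().__init__()
        self.elements = {}
    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        name = attrs.get("name", "")
        if name.startswith("DC."):
            self.elements.setdefault(name, []).append(attrs.get("content", ""))

extractor = DCMetaExtractor()
extractor.feed(SAMPLE_PAGE)
for element, values in extractor.elements.items():
    print(element, "=", "; ".join(values))   # the third-level "view metadata" listing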

The effective use of a "view metadata" command would require browser recognition of an environment that permits the coexistence of metadata packages serving various independent functions, such as terms and conditions, resource discovery, and archival management. In an extended metadata architecture such as the RDF, the browser might need to display a metadata wizard with dialog boxes and radio buttons designed to walk the user through a range of possible retrieval and display options.9 Depending on user needs, the browser could respond with a breakout display of any metadata elements held within a captive surrogate, go to the source document itself to retrieve appropriate elements from an embedded header, or trigger any and all metatags found within RDF wrappers to "drop through" from the source to the "view metadata" display, regardless of where they might appear within the document. These might then be remapped for easier manipulation through a visual flowchart or tree diagram.

A related application of a "View Metadata" command would be to facilitate access to third party label bureaus, i.e., entities that collect and manage metadata records referring to resources but not embedded in those resources. Such entities will presumably include a variety of private sector e-commerce players as well as public sector institutions such as libraries and museums.10 Some of the potential of retrieval and display integration of third-party surrogates will be discussed later in this article.

A "view metadata" menu option immediately offers interesting possibilities for integrated and subsidiary search capabilities. The command could also display a sublevel menu or toolbar with commands such as "more like this." When a searcher has found a particularly interesting site, he or she could highlight one or more elements such as CREATOR, CONTRIBUTOR, and SUBJECT and then click "more like this" to initiate retrieval of all other Web sites having matching metadata elements.
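
A minimal sketch of this "more like this" operation, using invented surrogate records and treating each element as a simple list of values, might look as follows.

# A hedged illustration of "more like this": given the surrogate the user has
# highlighted, return other surrogates sharing a value in the chosen elements.
def more_like_this(selected, collection, elements=("CREATOR", "SUBJECT")):
    """Return surrogates sharing at least one value with `selected` in `elements`."""
    matches = []
    for record in collection:
        if record is selected:
            continue
        for element in elements:
            if set(selected.get(element, [])) & set(record.get(element, [])):
                matches.append(record)
                break
    return matches

catalog = [
    {"TITLE": ["Site A"], "CREATOR": ["Smith, J."], "SUBJECT": ["Metadata"]},
    {"TITLE": ["Site B"], "CREATOR": ["Jones, K."], "SUBJECT": ["Metadata", "RDF"]},
    {"TITLE": ["Site C"], "CREATOR": ["Lee, P."],   "SUBJECT": ["Hypertext"]},
]
print([r["TITLE"][0] for r in more_like_this(catalog[0], catalog)])  # ['Site B']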

Other DC elements offer expanded visualization possibilities. Most obvious would be the creation of timeline extensions using the DATE element. After retrieval of multiple resources with matching SUBJECT or CREATOR elements, for example, one could choose "array by date" to project a visual timeline with nodes representing each retrieved resource. More sophisticated visualization techniques could be employed around the Dublin Core experimental elements RELATION and COVERAGE. These possibilities will be discussed below in the context of transient hypergraphs.
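
The "array by date" option reduces, in its simplest form, to sorting the retrieved surrogates on their DATE element; the fragment below (again with invented records) projects such a timeline as plain text.

# A minimal illustration of "array by date": retrieved resources are sorted on
# their DC DATE element and printed as a crude timeline, one node per resource.
retrieved = [
    {"TITLE": "Report on RDF",   "DATE": "1998-03"},
    {"TITLE": "Dublin Core FAQ", "DATE": "1997-11"},
    {"TITLE": "GEM Overview",    "DATE": "1999-02"},
]
for record in sorted(retrieved, key=lambda r: r["DATE"]):
    print(f'{record["DATE"]}  o--  {record["TITLE"]}')   # timeline nodes in date order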


Sitemaps and Aggregation

Sitemaps were perhaps the earliest graphical or visualization tool developed for the Web. Most early sitemaps consisted of tree diagrams or flowcharts drawn by Webmasters and inserted as static GIF or JPEG files. Later products such as WebCutter (now being researched by IBM under the name Mapuccino) automated visual sitemapping based on proprietary analytical formulas.11 Now, the emergence of metadata and RDF conventions opens the door to systematic generation and manipulation of sitemaps (including clickable and dynamic sitemaps) by browsers equipped to recognize pertinent embedded metatags.

Boutin has described one line of development beginning with the integration of RDF sitemap recognition in Netscape's open source browser code "Mozilla."12 Mozilla recognizes sitemaps written in RDF by looking for <LINK REL=sitemap> tags in Web pages, such as:

<LINK REL=sitemap SRC="/rdf/sitemap.rdf#root" NAME="Xpublication" TYPE="text/rdf">

This tag tells Mozilla to open the sitemap at www.xpublication.com/rdf/sitemap.rdf and render the site tree starting at the item tagged with id="root". The sitemap's graphical hierarchical description of a site is more intelligible to users than the corresponding RDF code could ever be, as is easily demonstrated by comparing the above <LINK REL=...> statement to the sitemap graphic included by Boutin as an example.13 Perhaps the most significant feature of the Mozilla approach is that it generates the sitemap dynamically from the RDF code. This opens the possibility for future dynamic sitemap renderings with query-specific features based upon a particular user's access needs. For example, the sitemapper could highlight those nodes corresponding to pages satisfying certain search criteria such as the presence of graphics or JavaScript, or it could annotate nodes corresponding to pages identified as "gateway" pages to other sites. Or the sitemapper could be integrated with a secondary search engine to highlight nodes corresponding to pages where certain keywords are found. Further possibilities for query-specific and dynamically derived surrogates are discussed later in this article.
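
The sketch below suggests, in schematic form, what query-specific rendering might involve: a site tree (hard-coded here rather than parsed from sitemap.rdf) is walked and printed as an outline, with nodes satisfying the user's criterion highlighted. It illustrates the idea only, not Mozilla's implementation.

# A speculative sketch of query-specific sitemap rendering over an invented tree.
SITE_TREE = ("root", [
    ("about",    []),
    ("research", [("metadata", []), ("visualization", [])]),
    ("contact",  []),
])

def render(node, criterion, depth=0):
    """Print the site tree as an indented outline, flagging nodes that match."""
    name, children = node
    marker = " <== match" if criterion(name) else ""
    print("  " * depth + name + marker)
    for child in children:
        render(child, criterion, depth + 1)

# Highlight pages whose names contain the user's search term.
render(SITE_TREE, criterion=lambda name: "visual" in name)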

A similar approach has been taken with the XML Tree Viewer, a Java applet that utilizes the Pax Syntactica parser to review a Channel Definition File (CDF) and present its contents as a hierarchical view of a collection of documents. A user can then navigate through the collection and access the specific documents in a separate window. The parser builds the logical structure from the CDF and then makes it available to the applet, which uses this structure to display a "tree view" of the collection of documents specified in the file.14
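
A rough analogue of this tree-view construction, written in Python rather than as a Java applet and using a much simplified, illustrative channel file (not a complete CDF), is sketched below.

# Build a "tree view" of a collection from a channel-style XML description.
import xml.etree.ElementTree as ET

CDF_SAMPLE = """
<CHANNEL HREF="http://example.org/index.html">
  <TITLE>Example Collection</TITLE>
  <ITEM HREF="http://example.org/papers.html"><TITLE>Papers</TITLE></ITEM>
  <ITEM HREF="http://example.org/data.html"><TITLE>Data Sets</TITLE></ITEM>
</CHANNEL>
"""

def tree_view(element, depth=0):
    """Print each channel or item as an indented node with its title and address."""
    title = element.findtext("TITLE", default="(untitled)")
    print("  " * depth + f'{title}  [{element.get("HREF", "")}]')
    for item in element.findall("ITEM"):
        tree_view(item, depth + 1)

tree_view(ET.fromstring(CDF_SAMPLE))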

Beyond such XML-based applications, other private sector developers are exploiting the potential for sitemap rendering integrated with metadata recognition and management. Tetranet, for example, is producing affiliated products called Metabot and Wisebot. Metabot Pro is marketed to Web authors as a tool for the generation and management of metadata for HTML documents. The tool allows Web authors to view the metatags that exist in their files, to insert metatags into many files at once, and to test and manage for metatag standards compliance.15 Tetranet claims that Metabot currently supports over fifty metatags and standards such as DC and the Government Information Locator Service (GILS). It also allows authors to create their own metatags for nonstandard implementations. The affiliated product Wisebot Pro automatically creates a map of a site and publishes it in HTML, Applet, or XML format. Wisebot also provides editing features for customizing the maps' contents and appearance.16 An experimental application of Metabot/Wisebot-generated sitemaps may be found at the Web site for CUIP, the Chicago Public Schools and the University of Chicago Internet Project.17

A key point to be stressed for understanding future research and development is that sitemaps rendered from RDF and metadata tags themselves become a type of graphical surrogate that can be stored in repositories, manipulated, and interconnected in an interesting variety of ways. The potential of sitemap repositories can be glimpsed at the Atlas of Cyberspaces maintained by Martin Dodge of the Centre for Advanced Spatial Analysis at University College London.18 Note, however, that one significant drawback of the Dodge atlas is that each map is based on its own unique analytical approach, so the maps cannot be made to interoperate. Most are also directly based on Web source text or code. A truly effective repository or cyber-atlas would require derivation of all examples from a standard element set, which is yet another advantage of linking visualization research and metadata development.

A collection of sitemaps pertaining to a given subject area and interlinked by various logical qualifiers can come to represent a type of conceptual cartography of Web space pertaining to a domain of discourse, such as the ET Map demonstrated by Hsinchun Chen at the University of Arizona.19 Sitemaps can also be serialized and linked in order to monitor and preserve search history.20 Multiple sitemapping also offers potential for content retrieval, as Hyatt points out, because RDF can pull data from different places (like bookmarks and history or another Web site) and combine them through a feature called aggregation, the ability to put completely different kinds of data into the same place. For example, the traditional tree view could contain anything from mail messages to local files to maps of other sites.21
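
The following fragment illustrates the aggregation idea in miniature: records from three invented sources (bookmarks, browsing history, and a remote site) are merged into a single subject-keyed tree. It is a sketch of the concept, not of Mozilla's RDF machinery.

# Merge metadata records from heterogeneous sources into one subject-keyed tree.
from collections import defaultdict

sources = {
    "bookmarks": [{"TITLE": "DC Home",       "SUBJECT": "Metadata"}],
    "history":   [{"TITLE": "RDF Primer",    "SUBJECT": "Metadata"},
                  {"TITLE": "CHI '95 Paper", "SUBJECT": "Visualization"}],
    "remote":    [{"TITLE": "ET Map",        "SUBJECT": "Visualization"}],
}

tree = defaultdict(list)
for source, records in sources.items():
    for record in records:
        tree[record["SUBJECT"]].append((record["TITLE"], source))

for subject, leaves in sorted(tree.items()):
    print(subject)
    for title, source in leaves:
        print(f"  {title}  (from {source})")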

Some very limited experimental applications integrating sitemapping with content aggregation can be found in the DESIRE project (Development of a European Service for Information on Research and Education).22 Centered at the Institute for Learning and Research Technology at the University of Bristol, DESIRE has produced a Resource Discovery Toolkit named RUDOLF, the RDF JTree hierarchical metadata browser, and an associated "Visual Navigation System." This experimental applet creates a pop-up window with a typical "tree" interface to metadata repositories. It aggregates metadata from local and remote sites, grouping sites within the visual navigation system via subject recognition through RDF metadata. The demo pop-up window contains a top-level folder called "Metadata Tree Browser" with information drawn from the Institute for Learning and Research Technology, the Social Science Information Gateway, and the World Wide Web Virtual Library.23

Finally, the fluid character of the Web suggests future possibilities for animated site mapping and virtual reality modeling. An interesting example of animated tree diagrams related to semantic content can be found in the Plumb Design Visual Thesaurus, which is built on Princeton University's WordNet database.24 The same Thinkmap application underlying the thesaurus has been used for the content retrieval module of the Smithsonian's online exhibition "Revealing Things."25 Additional potential Web site and library applications of dynamic mapping and modeling are described at the Thinkmap home page.26

Dynamic Surrogates and Transient Hypergraphs

If the use of metadata can release visualization tools from the demands of full-text processing, greater focus can then be placed upon allocation of computational resources to users' actual queries. Query-specific graphical displays dynamically derived from metadata elements will likely parallel the important trend toward dynamically derived surrogates in digital library research. Lynch et al. have emphasized the potential for dynamic derivation of a wider variety of surrogates from networked objects than librarians have been accustomed to using in the print environment.27

The potential application of dynamically derived surrogates to visualization tools recapitulates to a certain extent an earlier line of research into the use of transient hypergraphs for citation networks. Citations between documents have been used for many years "to study the structure and development of a discipline, to determine the importance of an individual author or document, and as an aid to researchers in determining potentially relevant documents."28

Because lists or tables representing citation and cocitation relationships are very difficult to work with, early hypertext research identified the possibilities associated with implementing a citation network in a visual hypergraph system. The use of static or fixed hypergraphs proved problematic, however, involving the duplication of nodes, a large overhead in maintaining the database, and a very complex procedure for updating the cocitation structure. From the user's perspective it also quickly became apparent that fixed hypergraphs were unwieldy to use and required high cognitive overhead unless their definition and structure were very specifically identified. And the most useful way to specify such definition and structure, of course, is in relation to a user's actual query. Research consequently moved away from a model of retrieving a stored predrawn hypergraph from a static set. Instead, it moved toward a model of dynamically drawing a custom transient hypergraph based on matching the user's query to document citation metadata already stored.

Transient hypergraphs instantiated by the user's specific query resolved the problems of both construction of and access to citation networks, while also allowing new ways to view and manipulate information.

In addition to solving certain problems of update and node duplication . . . transient hypergraphs increase the utility of hypertext by permitting the user different views of the database and expanded query types, such as cocitation queries, without requiring new permanent links to be defined and the database reloaded. This allows the user to ask questions that may not have been answerable or only answerable with a high cognitive overhead cost to the user on hypertext systems with static hypergraphs.29

Citation networks with transient hypergraphs were based on relational database management systems (DBMS) not substantially different from those underlying current metadata repositories. The DBMS approach allowed the following: separation of database structure and database content so that queries could be formulated in terms of the schema and not necessarily in terms of database instances; different views of the database, including support for alternate graphical representations; the structure necessary for browsing through sets of nodes rather than one node at a time; and attribute capabilities for atomic objects (text, graphics, audio).30
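
The essential move can be sketched briefly: given a stored table of citing/cited pairs, a cocitation subgraph is instantiated on the fly for whatever document the user's query selects. The data and function below are illustrative only, not the Shepherd, Watters, and Cai system.

# A simplified sketch of a query-instantiated transient hypergraph drawn from
# stored citation metadata. Documents are co-cited when some document cites both.
from collections import defaultdict
from itertools import combinations

citations = [            # (citing document, cited document); invented data
    ("D1", "A"), ("D1", "B"),
    ("D2", "A"), ("D2", "C"),
    ("D3", "B"), ("D3", "C"),
]

def transient_cocitation_graph(seed):
    """Edges linking `seed` to every document co-cited with it, weighted by count."""
    cited_by = defaultdict(set)
    for citing, cited in citations:
        cited_by[citing].add(cited)
    edges = defaultdict(int)
    for refs in cited_by.values():
        for x, y in combinations(sorted(refs), 2):
            if seed in (x, y):
                edges[(x, y)] += 1
    return dict(edges)

print(transient_cocitation_graph("A"))   # {('A', 'B'): 1, ('A', 'C'): 1}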

The reapplication of transient hypergraphs in the current landscape of Web development raises interesting possibilities for library OPACs. This is because (1) many libraries are including URLs in the 856 field of the MARC record, creating hotlinked surrogates that can be immediately followed to their respective linked pages, and (2) libraries are creating MARC records for Web resources based on a qualitative selection filter similar to filters used for selecting print resources. One criterion for allowing a Web site into the OPAC domain may be its utility as a gateway site collecting links to other important content sites in the given field, rather than the significance of its own scholarly content. Transient hypergraphs displaying the relationships of linked and colinked Web sites may thus become important for studying the structure and development of a discipline's resources on the Web, for determining the importance of an individual Web site or creator, and for predicting potentially useful pathways for further browsing.

One can imagine three Web sites, A, B, and C, already brought into the OPAC domain through a selection filter. An OPAC updating agent would draw a simple hypergraph to alert the webmaster to newly created links from these gateway sites to new content sites D and E. These sites now become candidates for selection to the OPAC domain, with the site linked from the largest number of gateways potentially having the stronger case. This is not an attempt to argue the case for gateway selection criteria, but to simply demonstrate the potential of transient hypergraphs carried forward from their roots in cocitation networking to the new domain of Web visualization.
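
A toy version of this updating agent (with invented link data, and without the graphical rendering itself) simply tallies how many selected gateways point to each newly discovered site.

# Count inbound links from selected gateway sites to newly discovered content sites.
gateway_links = {          # gateway site -> newly created outbound links
    "A": ["D", "E"],
    "B": ["D"],
    "C": ["E", "D"],
}

candidates = {}
for gateway, targets in gateway_links.items():
    for target in targets:
        candidates.setdefault(target, set()).add(gateway)

for site, gateways in sorted(candidates.items(), key=lambda kv: -len(kv[1])):
    print(f"{site}: linked from {len(gateways)} gateways {sorted(gateways)}")
# D: linked from 3 gateways ['A', 'B', 'C']
# E: linked from 2 gateways ['A', 'C']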

The graphing of linked sites can simply be based on the extraction of URLs from gateway pages, but it can also become much more sophisticated and complex when based on the deeper structure of metadata. An earlier note was made about expanded visualization possibilities for the Dublin Core element RELATION. The basic graphing conventions of boxes and linking lines would seem an obvious visualization tool for the RELATION element, but a complex array of variations on type of RELATION has been proposed for GEM metadata, including isOverviewOf, isContentRatingFor, isDataFor, isSponsoredBy, isStandardsMappingOf, isQualityScore, isPeerReview, isSiteCriteria, and isAgencyReview.31 In such an extended schema, the RELATION element will become a vital tool for qualitative assessment of educational materials on the Web. The inherent complexity makes it likely that visualization tools will be needed to assist users, but also suggests that simple lines drawn between boxes will not suffice. Interface aids for users will probably require an assemblage of the visualization tools described above: query wizards, site maps, aggregation trees, link-node hypergraphs, and dynamic derivation based on the user's own interest profile.
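
As a small illustration, typed RELATION qualifiers can drive a labeled link-node outline; the records and triples below are invented, though the qualifier names follow the GEM proposals just cited.

# Render incoming typed RELATION links for a chosen resource as a labeled outline.
relations = [            # (source resource, RELATION qualifier, target resource)
    ("Lesson Plan X", "isOverviewOf",   "Curriculum Unit Y"),
    ("Review Z",      "isPeerReview",   "Lesson Plan X"),
    ("Agency Report", "isAgencyReview", "Lesson Plan X"),
]

def outline(node, depth=0):
    """Print the node, then recursively print every resource related to it."""
    print("  " * depth + node)
    for source, qualifier, target in relations:
        if target == node:
            print("  " * (depth + 1) + f"<--[{qualifier}]--")
            outline(source, depth + 2)

outline("Curriculum Unit Y")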

Such applications will also require further research and development into user interest profiles, which themselves can become an important source of metadata. Lagoze has pointed out that dynamic surrogate derivation requires mechanisms for

tracking and modeling the current user requirements, presenting those requirements to a resource discovery tool, and then matching them to the appropriate surrogate template.32
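
One way to read this requirement is sketched below: a tracked user profile selects among surrogate "templates," and the dynamically derived surrogate is the projection of a full metadata record onto the chosen template. The profile roles and templates are hypothetical, not Lagoze's implementation.

# Match a user profile to a surrogate template and derive a surrogate accordingly.
TEMPLATES = {
    "citation": ["TITLE", "CREATOR", "DATE"],
    "teaching": ["TITLE", "SUBJECT", "RELATION", "COVERAGE"],
    "archival": ["TITLE", "DATE", "FORMAT", "RIGHTS"],
}

def derive_surrogate(record, profile):
    """Project a full metadata record onto the template matching the user profile."""
    template = TEMPLATES.get(profile.get("role"), ["TITLE"])
    return {element: record[element] for element in template if element in record}

record = {"TITLE": "ET Map", "CREATOR": "Chen, H.", "SUBJECT": "Web cartography",
          "DATE": "1998", "FORMAT": "text/html"}
print(derive_surrogate(record, {"role": "teaching"}))
# {'TITLE': 'ET Map', 'SUBJECT': 'Web cartography'}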

This type of model has been implemented using an agent-based approach by the University of Michigan Digital Library (UMDL). UMDL has created an open distributed system architecture where software agents interact in a sort of "information marketplace." Three general types of agents populate the UMDL system (a simplified sketch of their interaction follows the list):

  1. User interface agents (UIAs) express user queries in a form interpretable by appropriate search agents, maintain user profiles based on specified, default, and inferred user characteristics, customize presentation of query results, and manage the user's available resources with respect to fee-for-service.
  2. Mediator agents deal exclusively with other software agents, rather than end-users or collections. They perform a variety of functions: for example, directing a query from a UIA to a collection, monitoring query progress, transmitting results, format translation, and bookkeeping.
  3. Collection Interface Agents (CIAs) manage the UMDL interface for collections, which are defined bodies of library content. Among other communication tasks, the CIA also publishes the contents and capabilities of a collection in the registry. The registry contains metadata for both collections and agents.33
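
The sketch below is a deliberately simplified, hypothetical rendering of this message flow, not UMDL code: every name and record is invented. A user interface agent expresses the query, a mediator consults collection metadata in a registry, and matching collection interface agents answer from their own collections.

# Toy registry-mediated agent flow, loosely modeled on the three agent types above.
REGISTRY = {                       # metadata published to the registry by each CIA
    "physics-preprints": {"subjects": {"physics"}, "fee": False},
    "local-theses":      {"subjects": {"physics", "chemistry"}, "fee": True},
}

COLLECTIONS = {
    "physics-preprints": [{"TITLE": "Quark Review 1998", "SUBJECT": "physics"}],
    "local-theses":      [{"TITLE": "Spectroscopy Thesis", "SUBJECT": "chemistry"},
                          {"TITLE": "Plasma Notes", "SUBJECT": "physics"}],
}

def collection_agent(collection_id, query):
    """A CIA answers only for its own collection's content."""
    return [r["TITLE"] for r in COLLECTIONS[collection_id]
            if r["SUBJECT"] == query["subject"]]

def mediator(query, allow_fee):
    """Route the query to every registered collection whose metadata matches."""
    results = []
    for cid, meta in REGISTRY.items():
        if query["subject"] in meta["subjects"] and (allow_fee or not meta["fee"]):
            results.extend(collection_agent(cid, query))
    return results

def user_interface_agent(text, profile):
    query = {"subject": text.lower()}          # express the query for search agents
    return mediator(query, allow_fee=profile.get("allow_fee", False))

print(user_interface_agent("Physics", {"allow_fee": False}))   # ['Quark Review 1998']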

This distributed architecture with interoperable communicating agents drawing upon a metadata registry has nurtured the development of innovative visualization techniques for the UMDL Advanced User Interface (AUI). The AUI

investigates the synergy between querying and navigation. This is demonstrated by allowing any object to be used to query any set of objects anywhere in the workplace, by retaining the structure while displaying query results to allow navigation, and by allowing a smooth transition between the two tasks.34

Exploration of the UMDL visualization interface demonstrates important synergies between querying and browsing, e.g.: query-initiated browsing, when a query leads the searcher to an interesting part of the space which can then be studied in depth; browse-initiated querying, when the searcher browses over a structure until finding some interesting parts, then uses those as a query directed toward another piece of structure ("find more like these"); and query-directed browsing, when the searcher uses the highlighting from a query to guide further browsing.35 These possibilities address the underlying issue touched on at the start of this paper: browsers are designed for browsing, as the nomenclature implies, and an extensive set of graphical tools now frames the browser interface. But without equivalent content visualization to connect the browsing activity to meaningful semantic exploration, browsing is in danger of remaining a superficial, even trivial, exercise.

A second application of UMDL visualization is the prototype for Collaborative Visual Information Gathering. "This research prototype interface explores social aspects of information gathering. In particular, it explores an environment where [a] geographically dispersed group can move smoothly between synchronous group or asynchronous sub-group or individual interactions. The subgroups can be formed dynamically. Shareable artifacts can be created by the individual, subgroups, or the group as a whole. The emphasis here is on visual information."36

Conclusion

Visualization research has already revolutionized the way humans interact with personal computers by replacing list-based command-line interfaces with a navigable graphical interface and a toolset of icons and pull-down menus. But content retrieval remains primarily list-based on the semantic level. While researchers have attacked the content problem through knowledge visualization, their tradition of full-text document analysis has created computational demands too great for successful penetration of the library OPAC or Web search engine markets.

Now, the emergence of metadata frameworks offers a new pathway of research. Visualization can place abstract and complex metadata elements in more usable contexts. Metadata can bring computational shortcuts to the analysis and representation of Web resources, and standard frameworks will permit interoperability of visualization tools and techniques. And because library OPACs are already based upon the manipulation of surrogates, they can become a potential market for a new generation of metadata visualization applications.

Current and potential lines of research into metadata-based visualization include a command interface for metadata viewing, site-mapping and data aggregation tools, dynamic derivation of surrogates, and a reintroduction of transient hypergraphs from the tradition of cocitation networking. Digital library research into query-specific instantiation through agents "reading and feeding" a central metadata repository is already producing visual interfaces that create promising new synergies between querying, browsing, and group information sharing. These developments by no means exhaust the possibilities, but only represent the general outline of future research into new ways for humans to utilize their digital information environment through the visualization of metadata.


References and Notes

  1. "Internet Resources Survey: Survey Data." Accessed May 20, 1999, www.nevada.edu/~wardd/survey/data.html.
  2. Mark E. Rorvig and Mark E. Wilcox, "Visual Access Tools for Special Collections," Information Technology and Libraries 16, no. 3 (Nov. 1997): 99-107.
  3. Sougata Mukherja, "Visualizing the Information Space of Hypermedia Systems." Accessed May 20, 1999, www.cc.gatech.edu/gvu/people/Phd/sougata/Nvb.html; John Lamping, Ramana Rao, and Peter Pirolli, "A Focus+Context Technique Based on Hyperbolic Geometry for Visualizing Large Hierarchies," in Human Factors in Computing Systems: Mosaic of Creativity: CHI'95 Conference Proceedings, May 7-11, 1995, Denver, Colorado. Accessed May 20, 1999, www.acm.org/sigchi/chi95/Electronic/documnts/papers/jl_bdy.htm.
  4. Rorvig and Wilcox, "Visual Access Tools," 100.
  5. Carl Lagoze, "From Static to Dynamic Surrogates: Resource Discovery in the Digital Age," D-Lib Magazine. Accessed May 20, 1999, www.dlib.org/dlib/june97/06lagoze.html.
  6. Keith Shafer, "Scorpion Helps Catalog the Web," Bulletin of the American Society for Information Science 24, no. 1 (Oct./Nov. 1997): 29.
  7. Stuart Weibel, "The Dublin Core: A Simple Content Description Model for Electronic Resources," Bulletin of the American Society for Information Science 24, no. 1 (Oct./Nov. 1997): 10.
  8. "GEM Developer's WorkBench: Metadata Harvest (beta version)." Accessed May 20, 1999, http://gem.syr.edu/Workbench/Workbench_harvest.html.
  9. Eric Miller, "An Introduction to the Resource Description Framework," Bulletin of the American Society for Information Science 25, no. 1 (Oct./Nov. 1998): 15-19.
  10. Weibel, "The Dublin Core," 10.
  11. "Developers: Java Overview: Mappucino." Accessed May 20, 1999, www.ibm.com/java/mapuccino/index.html.
  12. Paul Boutin, "Netscape's Open Source Browser Revealed," Hotwired. Accessed May 20, 1999, www.hotwired.com/webmonkey/98/18/index3a.html.
  13. Paul Boutin, "Page 2: XML, RDF, and Sitemaps," Hotwired. Accessed May 20, 1999, www.hotwired.com/webmonkey/98/18/index3a_page2.html?tw=browsers.
  14. Lisa Rein, "Use an XML Tree Viewer to Organize Your Web," Webreview. Accessed May 20, 1999, http://webreview.com/wr/pub/97/12/05/feature/newview.html.
  15. "Metabot Pro: Metatags Made Simple." Accessed May 20, 1999, www.tetranetsoftware.com/products/metabot.htm.
  16. "Wisebot Pro: Web Navigation Made Simple." Accessed May 20, 1999, www.tetranetsoftware.com/products/wisebot.htm.
  17. "CUIP Site Map and Keyword Index." Accessed May 20, 1999, http://astro.uchicago.edu/outreach/cuip/sitemap/index.html.
  18. Martin Dodge, "An Atlas of Cyberspaces." Accessed May 20, 1999, www.cybergeography.org/atlas/atlas.html.
  19. Hsinchun Chen, "ET-Map." Accessed May 20, 1999, http://ai2.bpa.arizona.edu/ent/.
  20. "Surf Maps: Visualising Web Browsing." Accessed May 20, 1999, www.cybergeography.org/atlas/surf.html.
  21. Dave Hyatt, "XUL and RDF: The Implementation of the Application Object Model." Accessed May 20, 1999, www.mozilla.org/xpfe/xulrdf.htm.
  22. "Welcome to the DESIRE Project." Accessed May 20, 1999, www.desire.org.
  23. "RDF JTree: Hierarchical Metadata Browser." Accessed May 20, 1999, http://snowball.ilrt.bris.ac.uk/RUDOLF/jtree/.
  24. "The Plumb Design Visual Thesaurus." Accessed May 20, 1999, www.plumbdesign.com/thesaurus/.
  25. "Smithsonian Without Walls: Revealing Things." Accessed May 20, 1999, www.si.edu/revealingthings/.
  26. "Thinkmap Potential Uses." Accessed May 20, 1999, www.thinkmap.com/.
  27. Clifford Lynch and others, "CNI White Paper on Networked Information Discovery and Retrieval." Accessed May 20, 1999, www.cni.org/projects/nidr/nidr.html.
  28. Michael A. Shepherd, C. R. Watters, and Yao Cai, "Transient Hypergraphs for Citation Networks," Information Processing & Management 26, no. 3 (1990): 397.
  29. Ibid., 408.
  30. Ibid., 401.
  31. "Relation: Type: Description." Accessed May 20, 1999, http://gem.syr.edu/Workbench/training/olD/t20-relationtable.html.
  32. Lagoze, "From Static to Dynamic Surrogates."
  33. William P. Birmingham, "An Agent-Based Architecture for Digital Libraries," D-Lib Magazine. Accessed May 20, 1999, www.cnri.reston.va.us/home/dlib/July95/07birmingham.html.
  34. George W. Furnas, "UMDL Technologies: Advanced User Interface." Accessed May 20, 1999, www.si.umich.edu/UMDL/aui.html.
  35. Stephen Markel and others, "Visual Querying of Digital Library Information." Accessed May 20, 1999, http://madison.si.umich.edu/AUI.Fall97/vqdli.html.
  36. Furnas, "UMDL Technologies."


Donald Beagle (drbeagle@att.net) is Associate Director of Library Services and Head of the Information Commons, University of North Carolina at Charlotte.


Copyright 1999, American Library Association