
Can Navigational Assistance Improve Search Experience? A User Study by Mazlita Mat-Hassan and Mark Levene

Abstract
Providing navigational aids to assist users in finding information in hypertext systems has been an ongoing research problem for well over a decade. Despite this, the incorporation of navigational aids into Web search tools has been slow. While search engines have become very efficient in producing high quality rankings, support for the navigational process is still far from satisfactory. To deal with this shortcoming of search tools, we have developed a site specific search and navigation engine that incorporates several recommended navigational aids into its novel user interface, based on the concept of a user trail. Herein, we report on a usability study whose aim was to ascertain whether adding semi-automated navigational aids to a search tool improves users' experience when "surfing" the Web. The results we obtained from the study revealed that users of the navigation engine performed better in solving the question set posed than users of a conventional search engine. Moreover, users of the navigation engine provided more accurate answers in less time and with fewer clicks. Our results indicate that adding navigational aids to search tools will enhance Web usability and take us a step further towards resolving the problem of "getting lost in hyperspace".

Contents

Introduction
Navigation and Searching
Usability Study
Results and Discussion
Concluding Remarks

 

++++++++++

Introduction

An increasing number of people are using the World Wide Web (WWW) both for finding and for disseminating information. A recent estimate (by Global Reach) is that the number of online users is close to 476 million; it is projected that this number will double by 2005. The Web is a decentralised, dynamic, and diverse information space, making it very difficult to locate information purely by navigation (colloquially known as "surfing"), often resulting in the "lost in hyperspace" syndrome (Levene and Loizou, forthcoming), where users become disoriented during browsing. Web search engines (Lynch, 1997; Smeaton, 1997) emerged to address some of the problems facing users in the process of locating information. Still, users are heavily dependent on several factors when they search for relevant information: the right choice of keywords, knowledge of the behaviour of the search engine, extensive use of available browser tools, and some luck. Users of search engines are also confounded by the fact that obtaining high-quality results depends on the particular search and ranking algorithms used, these normally being "heavily-guarded" secrets.

Although global search engines such as AltaVista, Google, and Yahoo are successful at directing users to the appropriate Web sites, finding detailed information on the actual sites is often quite problematic. Current Web technology supports high quality search services, but does very little to help users in their navigational activity. Most search engines lack any form of navigational assistance that would guide users through their information seeking process. Current navigational practice is to select links through a combination of inspecting highlighted link text, clicking on the back and forward buttons, and scanning the history list. Due to the limited horizon that these tools present, users often guess which link to follow next without any certainty of whether they are heading in the "right direction". Despite the fact that the navigational problem has been an ongoing research issue for well over a decade, the incorporation of navigational aids into Web search tools has been surprisingly slow. To deal with this shortcoming of search tools we are developing a search and navigation engine (Levene and Wheeldon, 2001) that incorporates several navigational aids into its novel user interface, based on the concept of a user trail.

Herein we report on a usability study whose aim was to ascertain whether adding semi-automated navigational aids to a search tool improves users' experience when "surfing" within a Web site. Users were given two sets of information-seeking tasks to complete using two different search tools, one of them being the search and navigation engine we are developing. We measured users' completion time, number of clicks employed, number of correct answers found, and confidence and satisfaction levels. The results we obtained from the study reveal that, overall, users of the search and navigation engine performed better in solving the question set posed than users of a conventional search engine. Moreover, users of the search and navigation engine provided more accurate answers in less time and with fewer clicks. We also observed that users of the search and navigation engine expressed greater satisfaction and confidence levels in their searching tasks. Our results indicate that adding navigational aids to search tools will enhance Web usability and help resolve the notorious problem of "getting lost in hyperspace".

The rest of the paper is organised as follows. In the Navigation and Searching section, we give a brief overview of navigational and searching issues that have been investigated by the hypertext community. In the section on the Usability Study, we detail our methodology and describe the research questions we set out to answer. In the Results and Discussion section, we discuss the detailed results of the study, and finally we give our concluding remarks in the last section.

 

++++++++++

Navigation and Searching

Conklin (1987) identified two problems that are most prevalent in limiting the usefulness of hypertext: disorientation and cognitive overhead. According to Elm and Woods (1985), disorientation or "getting lost in hyperspace" is defined as

"the user not having a clear conception of the relationships within the system or knowing his present location in the system relative to the display structure and finding it difficult to decide where to look next within the system."

Cognitive overhead is defined as the extra effort required to maintain, at any given moment, routing information for several trails. As the Web continues to grow in volume, exploring its structure is becoming increasingly difficult and frustrating. Many Web users opt to use search engines to aid them in finding the information they require, but struggle to comprehend the displayed result list and, in addition, have difficulty navigating the Web page structure while trying to remain focused on the goals of their original query.

Many researchers in the hypertext and information visualisation communities suggest that navigating effectively and efficiently without getting lost requires readers to be aware of their location in the information space, to be able to pick up the "scent" (Pirolli, 1997) of what their next destination might be, and then to follow the right trail leading to this destination. Furthermore, users need contextual information to establish a sense of location: in particular, spatial context to help them decide which trail to follow next, and temporal context that gives an indication of their navigation history (Utting and Yankelovich, 1989). The most commonly used techniques recommend the provision of sufficient navigational aids to orientate users within the information space (Bieber and Kacmar, 1995; Hearst, 1995; Navarro-Prieto, Scaife, and Rogers, 1999). Although these ideas relate to the general process of navigation within a hypertext system, the same tool set could be utilised for the more specific activity of search and navigation within a search service. We believe that visual displays that show relationships between terms and documents, and reveal the underlying structures of the document space, will ease the demands placed on the performance of search tools (Lin, 1997).

The idea of using trails as navigational cues does not appear to have been extensively utilised by hypertext researchers. To our knowledge, there is no commercial search engine that uses the concept of trails as an integral part of its system. We believe that presenting relevant trails to users is potentially very useful in helping them justify decisions made during their "surfing" activity. Zellweger (1989) and Furuta et al. (1997) have suggested that following an ordered set of relevant links narrows users' choice of routes down to the ones that will lead them to the information they are looking for.

To tackle these problems we are currently developing a site specific search and navigation engine called NavZone that incorporates the concept of a trail both at the system and user interface levels. A detailed overview of NavZone is available in Levene and Wheeldon (2001); see http://www.navigationzone.com for a demonstration of the current version of NavZone.

Figure 1: NavZone Interface

As can be seen in Figure 1, the NavZone user interface is divided into three main frames:

  1. The top frame, also known as the navigation tool bar. This frame includes the query area and provides a history mechanism that records previously visited links and a recommendation mechanism that suggests to users the next link to be followed.
  2. The left frame, also known as the navigation tree window. This frame displays the search results as a list of preferred trails, organised in the form of a tree structure with the trails ranked from the most preferred.
  3. The main window, also known as the browser window. This frame displays the Web page corresponding to the link clicked in the navigation tree window.

This interface is in complete contrast to a conventional search engine such as Compass (see Figure 2), a site specific search engine for University College London (UCL). Compass, like many other search engines, employs a single window interface with a linear representation of the displayed results.

Figure 2: Compass Interface

In NavZone, all of the links in the navigation tree window and navigation tool bar are clickable and their displays are synchronised. Putting the cursor over a hyperlink causes a popup window to appear, which displays the title of the Web page, its URL, a summary of the contents of the page, and other useful information pertaining to the page. Matched keywords or input queries entered by users are highlighted in the summary of the popup window, allowing users to inspect the relevance of each retrieved document to their query; this strategy is known as "scan-browse" (Carmel, Crawford, and Chen, 1992).
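To make the trail-based result structure more concrete, the sketch below shows one possible way of representing and ranking trails in Python. It is a minimal illustration only: the names (Page, Trail, rank_trails) and the averaging heuristic are illustrative assumptions made for exposition, not details of NavZone's actual implementation (see Levene and Wheeldon, 2001, for the latter).

from dataclasses import dataclass
from typing import List

@dataclass
class Page:
    url: str
    title: str
    summary: str       # shown in the popup window when the cursor hovers over a link
    relevance: float   # hypothetical per-page relevance score for the query

@dataclass
class Trail:
    pages: List[Page]  # an ordered sequence of linked pages within the site

    def score(self) -> float:
        # Illustrative heuristic only: average the page relevances along the trail.
        return sum(p.relevance for p in self.pages) / len(self.pages)

def rank_trails(trails: List[Trail]) -> List[Trail]:
    """Order trails from most to least preferred for the navigation tree window."""
    return sorted(trails, key=lambda t: t.score(), reverse=True)

Under such a representation, the navigation tree window could render the ranked trails as nested lists of pages, with the popup window drawing its title, URL, and summary from the corresponding Page record.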

 

++++++++++

Usability Study

We have conducted a usability study to gauge users' perception, acceptance, and satisfaction or dissatisfaction with NavZone compared to a conventional site specific search engine. More specifically, the objective of the user study was to investigate the usefulness of NavZone from the users' point of view. We were especially interested in how well or poorly the system and user interface performed for users, and how confident and satisfied users were in completing the given tasks. We also aimed to provide an answer to the following question:

"Does adding a semi-automated navigation component to a site specific search engine enhance the user's experience in searching for information on the Web and navigating within its information structure?"

Our hypothesis was that a trail-based search and navigation engine improves users' navigation efficiency in terms of speed and accuracy of the answers found, and in terms of the level of confidence and satisfaction they have in comparison to a traditional search engine. Results were collected using both quantitative and qualitative methods ranging from manually capturing users' navigation behaviour to measuring user satisfaction using a post-test questionnaire and an informal interview session.

In the following subsections we discuss the methodology and procedures we have chosen for the usability study. In the subsection on stimuli, we discuss the search tools used in the study. In the methodology subsection, we review the techniques followed throughout the testing, including sample questions and measurement tools. In the subsection on training we discuss the NavZone experience of the subjects and, finally, in the last subsection we discuss the distribution of subjects used in the experiment.

Stimuli

For user testing, the site domain chosen was the UCL official Web site at http://www.ucl.ac.uk. Three different search engines were used in the testing: (1) NavZone, http://www.navigationzone.com; (2) Compass, the official UCL site search engine, at http://www.ucl.ac.uk/search/compass; and, (3) Google, restricted to the UCL site domain, at http://www.google.com/univ/ucl. Google was chosen as it has been hailed as one of the best global search engines, and because it provides site specific searching within a particular domain. Both Google and Compass employ a single window interface with a linear representation of the results retrieved.

Methodology

Subjects were asked to answer two sets of questions (Set 1 and Set 2) using either (1) NavZone and Google or (2) NavZone and Compass. We devised the question sets to be, as far as possible, at the same level of difficulty in order not to bias or skew the overall results. The results from the experiment revealed that there was no significant difference between the performance of users answering the two question sets. Prior to using either a search engine or the search and navigation engine, subjects were required to formulate the query term(s) that they thought were appropriate for the given question. While carrying out the task, subjects were asked to pick the Web pages that they perceived as most relevant to the query given. Users were allowed to reformulate the query terms and reiterate the navigational process in as many trials as they felt necessary in order to reach a satisfactory answer. Subjects were informed that it was possible to find all the answers within the Web site structure.

Before performing the information seeking tasks, subjects were required to plan their search and navigation strategy. We call this stage the initial planning stage. This involved stating the query terms to be used, how many search iterations they presumed they would have to go through, and their level of confidence in finding the right answer and hence in completing the task given. This was done to scrutinise users' expectations of their own information seeking ability, and also their expectations of the search engine's ability to direct them to the appropriate Web pages.

Each question set consisted of five separate questions. Each question was carefully selected for this experiment, with extra care taken not to bias the questions towards NavZone. Each question was formulated within one of five types of information seeking activities:

  1. Simple fact finding questions that have only one simple answer, e.g. "Find the opening hours of the Windeyer Building computer cluster."
  2. Judgment questions, where users are required to determine whether they have found the correct answer based on the information they have collected, e.g. "You would like to build a Web site on the UCL server. Find the guidelines outlining details on design effectiveness and copyright issues."
  3. Comparison of fact questions, where users must investigate and compare two or more facts to derive the correct answers, e.g. "You are a home student and wish to apply for on-site accommodation. Find out whether home students are allowed to stay in their accommodation during the Christmas or Easter break."
  4. Comparison of judgment questions, where users are required to make comparisons and judgments in order to arrive at a satisfactory answer, e.g. "You are a non-EU postgraduate student at UCL and would like to apply for research funding. Find out the funds available."
  5. General navigational questions, where users are required to substantially explore the information structure in order to find the correct information, e.g. "You have been accepted to study Podiatry at UCL and need to find on-campus accommodation. Find the nearest possible halls of residence to your school."

The first four types of questions were adapted from the well-known study conducted by Spool et al. (1999).

The two independent variables in our experiment were the type of search engine used (i.e. whether the displayed results were linear or trail-based) and the question set tackled. The order in which the system combinations were used was randomised across subjects. The four dependent variables of interest were: (1) the total time to complete all five questions; (2) the total number of clicks employed to complete all five questions; (3) the accuracy or correctness of responses, defined as the number of correct answers found; and, (4) user satisfaction ratings.

User satisfaction ratings were measured using a post-test questionnaire to capture users' perception of (1) their overall reactions to the site search tool used; (2) their overall confidence with regards to the completion of tasks; (3) ease of learning of the site search tool used; (4) the display mechanism of search results; (5) navigation within search results; and, (6) completion of the tasks assigned.

Several questions in the "overall reactions to the site search tool" and the "learning" categories were adapted from the Questionnaire for User Interaction Satisfaction (QUIS) developed at the University of Maryland (Shneiderman, 1998). Other questions were specifically designed to capture the above mentioned categories. The questionnaire employed a semantic differential attitude scale, using bi-polar adjectives (e.g. difficult-easy or rigid-flexible) at the end points of the scales (Coleman, Williges, and Wixon, 1985). Subjects were required to rate each question within each category on a seven-point scale between these paired adjectives.
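As an illustration of how such ratings might be aggregated, the short Python sketch below averages seven-point semantic differential responses by search engine and category. The data layout and the values shown are hypothetical placeholders, not the study's data or the analysis procedure actually used.

from collections import defaultdict
from statistics import mean

# Hypothetical rows: (subject, engine, category, rating), where the rating is a point
# on the seven-point scale between a pair of adjectives (e.g. 1 = "difficult", 7 = "easy").
responses = [
    ("s01", "NavZone", "display of results", 6),
    ("s01", "Compass", "display of results", 4),
    ("s02", "NavZone", "navigation", 5),
    ("s02", "Compass", "navigation", 3),
]

def category_means(rows):
    """Average rating for each (engine, category) pair across subjects and items."""
    buckets = defaultdict(list)
    for _subject, engine, category, rating in rows:
        buckets[(engine, category)].append(rating)
    return {key: mean(values) for key, values in buckets.items()}

print(category_means(responses))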

Training

None of the subjects had any previous experience with NavZone and no proper training was provided to prepare them for the use of the system. This was deliberately done to determine the overall usability and ease of use of the system, especially for first-time users of NavZone. However, subjects were encouraged to read the introduction or help page that is available on NavZone. Subjects were also given a time limit of two minutes to explore the interface, and were allowed to ask questions regarding the interface within the time limit.

Subjects

Twenty-four subjects voluntarily participated in the usability study. Of these, twelve were novices and twelve were expert users. Subjects ranged in age from 23 to 36 years old, with a mean age of 27. Half of the subjects were female. Table 1 and Table 2 summarise the distribution of subjects and their level of Web experience. Approximately one half of the subjects (54%) reported using the Compass search engine frequently, 38% of the subjects had experience with NavZone (these subjects were recruited for the pilot study) and 58% had used the Google site specific search engine (88% had used the global Google search engine).

 

Table 1: Distribution of Subjects

Status: Student 75%; Full time employment 25%
Level of study (students): Postgraduate 89%; Undergraduate 11%
Course of study (students): Computer science 89%; Accountancy 11%

 

Table 2: Level of Web experience

Daily User 71%
Weekly User (less than four times a week) 17%
Occasional User (less than five times a month) 12%

 

++++++++++

Results and Discussion

In this section we discuss the detailed results and findings of the usability study. We discuss the findings of a pilot study conducted prior to the main study and we provide detailed statistics and observations we gathered from the main study.

Pilot Study

Prior to the main usability study, we conducted a pilot study to assess the experimental method being proposed for the main experiment. The pilot study served as a test bed that allowed us to improve our experimental methodology, that is, the test questions and the user satisfaction questionnaire. It also helped us capture initial user feedback on the NavZone user interface design. In the pilot study we used an early version of NavZone with a relatively small coverage of the UCL Web site. Sixteen users voluntarily participated in this study. Users' responses from this pilot study were very encouraging. Overall, with NavZone, subjects managed to complete their tasks significantly faster and with fewer clicks. However, we did not observe any significant differences in the level of users' satisfaction when using NavZone except in one category, the display of results. We suspected that this was due to the limited coverage of the Web site and the relative instability of this early version of NavZone. A large number of the Web pages generated in response to a submitted query contained dead links, and pages and images were missing as a result of the limited coverage. For the main experiment, a new version of NavZone was used which was more stable and had a significantly larger coverage of the UCL Web site.

We also received encouraging comments and useful feedback regarding NavZone, especially with respect to the user interface design. Most subjects particularly appreciated the idea of having the result list permanently displayed along with a small pop-up window showing summary information for the highlighted link. One feature the subjects mentioned they would have found desirable was the ability to distinguish between previously visited trails and the next trail to be inspected. As there are many trails with similar headings, subjects commented that distinguishing links that had already been inspected would decrease their level of confusion and also prevent them from accidentally reselecting them. Taking this into consideration, we introduced a different colour scheme to distinguish visited Web pages on trails (the colour purple was chosen). Subjects in the main usability study used this updated version of NavZone for their experiment.

Main usability study

Overall, subjects performed better using NavZone in terms of the total completion time and, in addition, subjects using NavZone employed fewer clicks in completing their tasks.

Figure 3: Performance (time)

Figure 3 illustrates that users took, on average, 13.11 minutes to complete their tasks with NavZone. Google came second with an overall average of 16.57 minutes, while subjects using Compass took the longest, averaging 17.6 minutes.

Figure 4: Performance (clicks)

Figure 4 indicates that subjects using NavZone employed far less clicking effort in completing their tasks, an average of 27.21 clicks overall. Subjects using Compass employed an average of 40.25 clicks, while subjects using Google employed an average of 44.08 clicks in completing their tasks.

We used a nonparametric statistical test to ascertain whether the differences in behaviour were significant, as nonparametric tests do not require stringent assumptions regarding the underlying statistical distribution (Conover, 1999). As illustrated in Table 3, the Wilcoxon nonparametric test revealed a statistically significant difference in the total completion time and in the number of clicks employed by subjects in completing their tasks. The extra clicking effort for subjects using Google and Compass was mainly due to users backtracking to the result list, either to reselect another link or to reformulate the query.

 

Table 3: Wilcoxon's test on user overall completion time and total number of clicks employed
*Significant at 10% level
**Significant at 5% level

          NavZone and Compass                  NavZone and Google
Time      z = -2.353, two-tailed p = 0.019**   z = -1.726, two-tailed p = 0.084*
Clicks    z = -2.984, two-tailed p = 0.003**   z = -2.903, two-tailed p = 0.004**
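For readers who wish to run this kind of comparison themselves, the sketch below shows how a paired Wilcoxon signed-rank test could be carried out in Python with scipy. The completion times are invented placeholders rather than the study's raw data, and scipy reports the W statistic with a two-tailed p-value rather than the z values listed in Table 3.

from scipy.stats import wilcoxon

# Placeholder per-subject totals (minutes to complete all five questions);
# these are illustrative values only, not the data collected in the study.
navzone_times = [12.5, 14.0, 11.8, 13.9, 12.7, 15.1, 13.3, 12.9]
compass_times = [16.2, 18.5, 15.9, 17.3, 19.0, 16.8, 17.7, 16.1]

statistic, p_value = wilcoxon(navzone_times, compass_times)
print(f"Wilcoxon W = {statistic:.3f}, two-tailed p = {p_value:.3f}")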

Figure 5: Completion time

Figure 5 and Figure 6 reveal that expert users generally performed better than novice users, both in completion time and the number of clicks employed.

Figure 6: Number of clicks

We are intrigued by the fact that novice users of NavZone managed to complete their tasks within a similar time frame, and with a similar level of clicking effort, to experts using NavZone. Although the sample size might be too small for us to arrive at a general conclusion regarding the usability of NavZone, the results convinced us that the proposed user interface does indeed provide effective information retrieval assistance, especially for novice users.

We also observed that overall, subjects using NavZone managed to obtain more accurate answers during their information seeking tasks, followed by Compass and Google.

Figure 7: NavZone

Figure 8: Google

Figure 9: Compass

Figure 7, Figure 8 and Figure 9 illustrate the percentage of subjects that managed to complete the given tasks accurately. 54% of subjects using NavZone were able to give accurate answers to all five questions (5/5), compared to 33% and 8% for Compass and Google, respectively. Moreover, 29% of subjects using NavZone answered four questions accurately, compared to 51% and 42% of subjects using Google and Compass, respectively. Overall, subjects using NavZone managed, on average, to answer 4.37 questions correctly. Subjects using Google and Compass performed comparably, with averages of 3.59 and 4.00 correct answers, respectively.

During the testing, we observed that subjects were more likely to give up or abandon their current information seeking tasks when using Google (three subjects gave up) or Compass (two subjects gave up) than when using NavZone (one subject gave up). This is especially true for "comparison of judgment questions", i.e. questions where two or more separate searches were required. As users navigate deeper into the Web site, the probability of losing sight of their original goal is also higher. Without easy access to navigational aids that provide some guidance in deciding on the next step or strategy to follow, this can be very frustrating for users, prompting them to abandon their current task.

Generally, subjects expressed a much higher confidence level in completing a task after the actual information seeking task. Almost 92% of the subjects indicated, during the initial planning stage, a lower confidence level in their perceived success in completing the tasks and finding the correct answers. One logical explanation is that most subjects had some reservations regarding the search engine's ability to direct them to the appropriate Web pages, and also regarding their own information seeking skills. We also observed during the study that subjects who had had a previous "bad experience" or low success rate with a search engine demonstrated lower confidence and motivation during the initial planning stage and also when engaged in the information seeking tasks. The reverse is also true. For instance, one subject, when told that he had to use Compass for the study, expressed his disappointment and said, "I'm sure I'm not going to find anything with this search engine". On the other hand, most of the subjects assigned to use Google expressed their excitement and were more optimistic about their predicted success rate. Subjects were initially more reserved and pessimistic about NavZone. We suspect this was due to unfamiliarity with the search and navigation engine.

During the initial planning stage, expert users reported much higher expectations of finding the correct answers than did the novices. On average, 83% of the expert users reported that they expected to find the answer to each question in just one attempt (i.e. with no query reformulation), compared to only 33% of the novice users. On average, 67% of the novices thought that they would need at least two attempts to get to the right answer.

Figure 10

Figure 10 illustrates subjects' average questionnaire scores for each search engine. Overall, subjects expressed higher satisfaction levels with NavZone in all categories except "overall reactions to the user interface" and "learning of the system". For these two categories, subjects rated Google higher than NavZone and Compass; however, the Wilcoxon test revealed that these differences are not statistically significant.

 

Table 4: Wilcoxon's test on users' questionnaire responses for each search engine
*Significant at 1% level
**Significant at 5% level

Category: NavZone/Google (Z, p<); NavZone/Compass (Z, p<)
Overall reaction to the site search with regards to the completion of tasks: Z = -0.764, p < 0.445; Z = -2.908, p < 0.004*
Overall reaction to the site search with regards to the user interface: Z = -0.536, p < 0.592; Z = -1.071, p < 0.284
Overall confidence with regards to completion of tasks: Z = -0.289, p < 0.773; Z = -2.701, p < 0.007*
Learning of the system: Z = -1.025, p < 0.305; Z = -0.315, p < 0.753
Display of results: Z = -2.049, p < 0.04**; Z = -3.062, p < 0.002*
Navigation: Z = -1.604, p < 0.109; Z = -2.937, p < 0.003*
Completing the tasks: Z = -1.609, p < 0.108; Z = -2.764, p < 0.006*

Table 4 gives a more detailed analysis of the questionnaire scores. For subjects using NavZone and Google, there were no significant differences in responses except in the "display of results" category. On the other hand, there were several significant differences in responses for subjects using NavZone and Compass: the differences were statistically significant in all categories except "overall reaction to the user interface" and "learning of the system".

Overall, there were no significant differences in subjects' responses in the "overall reactions to the site search with regards to the user interface" category. Analysis of each subcategory revealed that users agreed that the linear interface display of the Google search engine is simpler and easier to use. However, when asked, users said that "familiarity" with the conventional search engine user interface was the main factor in their judgment. As most of the search engines available on the Web today use the linear interface model, it is hardly surprising that users' "overall reactions" did not significantly favour NavZone's novel user interface. It is interesting to note that while no significant difference was observed for the "overall reactions to the site search", the opposite was observed for the "display of results" category. We were intrigued by this result favouring NavZone's tree structure display mechanism.

From the verbal and written comments about the NavZone interface, we found that users, especially novices, expressed concern about the relative complexity of the NavZone interface. Most users found it difficult to grasp the concept of a trail-based search and navigation engine. Moreover, some found the interface quite intimidating at first, as it is a radical shift from their "usual" search engine interface paradigm. However, after we had explained the overall concept of NavZone and the navigational aids available, and after several iterations of using NavZone, users began to appreciate the new interface. It is also encouraging that, overall, users found the NavZone interface more stimulating (p<0.05) to use than Google and Compass.

It is also very encouraging to note that, overall, users were comfortable with the proposed user interface of NavZone. Users especially liked the fact that the result list is constantly available within the left frame. This is extremely useful when users need to reselect a different link after discovering that they have followed the wrong one. We also observed that some users, especially expert users, when using Google or Compass, opened a new window to view the Web page of the selected link. When asked, these users commented that they did not want to lose sight of the result list in case they needed to reselect another link to investigate. Furthermore, most users found it extremely inconvenient to have to use the back button, especially for the purpose of backtracking to the result list. With the result list permanently displayed in the left frame, users were able to perform their information seeking tasks more quickly and with less clicking effort. In this usability study, this is evident from the number of clicks employed and the time spent completing the given tasks.

For navigation within the search results and the information structure, further analysis of the subcategories revealed significant differences in the items concerning identifying the current location in the information structure, keeping track of the pages that have been visited, and having enough information to help make a decision about the next step. Subjects were in general agreement that with NavZone it is much easier to identify their current location in the information structure and to keep track of all the pages visited previously, using the navigational aids within NavZone. To be more specific, trail-based results help users identify the context of Web pages. Additionally, NavZone indicates which Web pages have already been visited by changing the link colour. Finally, the navigation tool bar provides users with a means of identifying their navigation history and also serves as a recommendation system for Web pages that might be inspected next.

These observations were further supported by comments made by the subjects. Examples of user comments on NavZone were:

"Immediately gives you the best match page without having to click on it. Can see how its got to the page so easier to identify "dead-end" trails early on. "History" kept at top bar of screen is nice"

"Useful trails at side of the screen (once I got used to it) and an indication of the pages already looked at, and the pages that might be useful to look at."

"Good screen organisations. Good pop-up and down menu structure"

"If you want to reformulate your query, you don't have to return to a main menu as a keyword box is provided on every page you visit. There are helpful trails that can assist one with their query. You can type in long phrases which can take you immediately to your site"

"Showing link relationships helps to some extent, to put pages in context, enabling more informed assumptions about the content"

Some examples of users' criticism of NavZone were:

"Looks intimidating at first"

"Apparently no string search of boolean operators. Both of these made it difficult to narrow search"

"Too crowded with information. I guess this is a trade-off between how helpful the search site is, the amount of information and simplicity"

Finally, 96% of the subjects chose NavZone over Google and Compass as their preferred search engine during the usability study. This is quite surprising considering that most users were not familiar with NavZone and its novel user interface design. As much as we would like to believe that the strong user preference for NavZone is due to its overall usability and performance, there may be other factors that influenced the subjects, such as the excitement of using a novel piece of software and the fact that they were taking part in a usability test. Taking all the evidence into account, the results are very encouraging, especially users' positive reactions to and acceptance of NavZone.

 

++++++++++

Concluding Remarks

While current Web search services are slow at incorporating navigational aids into their user interfaces, we are in the midst of developing a search and navigation engine, NavZone, which employs several navigational aids within one coherent user interface. In particular, it employs trails as well as history and recommendation mechanisms. The results of the usability study indicate that users performed better and expressed higher satisfaction levels in their searching experience when presented with these navigational aids. This also answers our initial research question, whether "adding semi-automated navigational aids into a site specific search engine enhances the user's experience in searching for information on the Web and navigating within its information structure". As the results of the experiment reveal, users, especially novices, benefited significantly from having access to several navigational aids within the search tool. Presenting results as preferred trails, organised in the form of a tree structure, and providing a navigational tool bar that acts as a history mechanism and recommender system, helps users to systematically review the results, shows relationships between Web pages, and helps users understand the context of a Web page within the site. As well as assisting users to "surf" more effectively, navigational aids are very useful in providing users with some guidance on what lies ahead, thus supporting them in making navigational decisions during the information seeking process. Navigational aids can also support users in traversing the information structure by compensating for deficiencies that result from the user's query (such as an ill-defined query or a typing error) or deficiencies that result from the system itself (such as less relevant or inaccurate results).

However, it is also important to strike a balance between the need to provide as much information as possible to assist users in searching and navigating the information structure, and the need to simplify the user interface so that users are not overloaded with too many details. It is evident from the study that users had some difficulty digesting all the information that the user interface tries to convey while remaining focused on their information seeking goals. We aim to find the answer to "how much is too much" with the intention of further improving the user interface of NavZone. Finding the right balance is essential, as it has direct implications for the amount of time users need to learn and understand the user interface, and also for the efficiency of users' search and navigation process. End of article

 

About the Authors

Mazlita Mat-Hassan is a PhD student at the School of Computer Science and Information Systems at Birkbeck College, University of London.
Web: http://www.dcs.bbk.ac.uk/~azy
E-mail: azy@dcs.bbk.ac.uk

Mark Levene is Professor of Computer Science in the School of Computer Science and Information Systems at Birkbeck College, University of London.
Web: http://www.dcs.bbk.ac.uk/~mark
E-mail: m.levene@dcs.bbk.ac.uk

 

References

Michael Bieber and Charles Kacmar, 1995. "Designing Hypertext Support for Computational Applications," Communications of the ACM, volume 38, number 8 (August), pp. 99-107.

E. Carmel, S. Crawford, and H. Chen, 1992. "Browsing in Hypertext: A Cognitive Study," IEEE Transactions on Systems, Man, and Cybernetics, volume 22, number 5 (September 1992), pp. 865-884.

W.D. Coleman, R.C. Williges, and D.R. Wixon, 1985. "Collecting Detailed User Evaluations of Software Interfaces," Proceedings, Human Factors Society, 29th Annual Meeting. Santa Monica, Calif.: Human Factors Society, pp. 240-244.

Jeff Conklin, 1987. "Hypertext: An Introduction and Survey," IEEE Computer, volume 20, number 9 (September), pp. 17-41.

W.J. Conover, 1999. Practical Nonparametric Statistics. Third edition. New York: Wiley.

W.C. Elm and D.D. Woods, 1985. "Getting Lost: A Case Study in Interface Design," Proceedings, Human Factors Society, 29th Annual Meeting. Santa Monica, Calif.: Human Factors Society, pp. 927-931.

Richard Furuta, Frank M. Shipman, Catherine C. Marshall, Donald Brenner, and Hao-wei Hsieh, 1997. "Hypertext Paths and the World-Wide Web: Experiences with Walden's Paths," Proceedings, 8th ACM Conference on Hypertext, pp. 167-176.

Marti A. Hearst, 1995. "Tilebars: Visualization of Term Distribution Information in Full Text Information Access," Proceedings, ACM SIGCHI Conference on Human Factors in Computing Systems, pp. 59-66.

Mark Levene and George Loizou, forthcoming. "Web Interaction and the Navigation Problem in Hypertext," To appear in the Encyclopedia of Microcomputers published by Marcel Dekker.

Mark Levene and Richard Wheeldon, 2001. "A Web Site Navigation Engine," Proceedings, 10th World Wide Web Conference, Hong Kong, at http://www.www10.org/cdrom/posters/1014.pdf

Xia Lin, 1997. "Map Display of Information Retrieval," Journal of the American Society for Information Science, volume 48, number 1, pp. 40-54.

Clifford Lynch, 1997. "Searching the Internet," Scientific American, volume 276 (March), pp. 50-56, at http://www.sciam.com/0397issue/0397lynch.html

Raquel Navarro-Prieto, Mike Scaife, and Yvonne Rogers, 1999. "Cognitive Strategies in Web Searching," Proceedings, 5th Conference on Human Factors and the Web, at http://zing.ncsl.nist.gov/hfweb/proceedings/navarro-prieto/

P. Pirolli, 1997. "Computational Models of Information Scent-Following in a Very Large Browsable Text Collection," Proceedings, CHI 97, ACM Special Interest Group on Computer-Human Interaction (ACM/SIGCHI) Conference on Human Factors in Computing Systems, at http://www.acm.org/sigchi/chi97/proceedings/paper/plp.htm

Ben Shneiderman, 1998. Designing the User Interface: Strategies for Effective Human-Computer Interaction. Third edition. Reading, Mass.: Addison Wesley Longman.

A. Smeaton, 1997. "Information Retrieval: Still Butting Heads with Natural Language Processing?" In: Maria Teresa Pazienza (editor). Information Extraction: A Multidisciplinary Approach to an Emerging Information Technology, Lecture Notes in Computer Science, volume 1299, pp. 115-138.

Jared M. Spool, Tara Scanlon, Will Schroeder, Carolyn Snyder, and Terri DeAngelo, 1999. Web Site Usability: A Designer's Guide. San Francisco: Morgan Kaufmann.

Kenneth Utting and Nicole Yankelovich, 1989. "Context and Orientation in Hypermedia Networks," ACM Transactions on Information Systems, volume 7, number 1 (January), pp. 58-84.

P.T. Zellweger, 1989. "Scripted Documents: A Hypermedia Path Mechanism," Proceedings, Hypertext '89, pp. 1-14.


Editorial history

Paper received 31 July 2001; accepted 6 August 2001.



Copyright ©2001, First Monday

Can Navigational Assistance Improve Search Experience? A User Study by Mazlita Mat-Hassan and Mark Levene
First Monday, volume 6, number 9 (September 2001),
URL: http://firstmonday.org/issues/issue6_9/mat/index.html