Browsing by Author "Witten, Ian H."
Now showing 1 - 20 of 113
Item Open Access
ACQUIRING GRAPHICAL KNOW-HOW: AN APPRENTICESHIP MODEL (1988-03-01)
Maulsby, David L.; Witten, Ian H.
This paper studies the acquisition of procedural knowledge, or "know-how", from end-users in the domain of interactive graphics. In order to develop an open-ended system that is not restricted to any particular class of drawings, heavy emphasis is placed on the user interface. Experts (we call them simply "teachers") express procedures constructively, using any of the tools available in the interactive drawing environment. Well-structured procedures, including branches and loops, are inferred using a variety of weak generalization heuristics. The teacher's attention is concentrated on the system's perceptual and inferential shortcomings through a metaphorical apprentice called "Meta-Mouse". Its sensors are predominantly tactile, which forces teachers to make their constructions explicit. Meta-Mouse generalizes action sequences on the fly and eagerly carries out any actions it can predict. Theoretical support for the design comes from two sources: geometric phenomenology, which confirms that powerful problem-solving methods are associated with commonplace spatial reasoning; and the fact that Meta-Mouse automatically imposes important "felicity conditions" on the teacher's demonstrations.

Item Metadata only
ADAPTIVE PERSONALIZED INTERFACES - A QUESTION OF VIABILITY (1984-04-01)
Greenberg, Saul; Witten, Ian H.
It is widely accepted that interfaces between computers and users should differ to accommodate individual, or group, needs. One method of "personalizing" an interface is to have the system form a limited model of the user and employ it to fashion the dialogue to his needs. Unfortunately, little is known about the effect of adaptation on the man/machine interface. Although obvious advantages accrue from "personalized" interfaces, there are also obvious disadvantages to presenting users with a changing, adapting, and perhaps apparently inconsistent interface. The goal of this work is to determine the viability of an adaptive interface through a human factors pilot study of a simple, specially-designed, interactive computer system. The system uses menu-driven selection to retrieve entries from a large ordered telephone directory. This simple task has several advantages: it is a realistic application area for interactive computers; plausible adaptive modeling methods exist and have been studied theoretically; and previous work has determined the best way to display the menus to users. The results of this empirical study support the use of adaptive user modeling. In the (admittedly highly constrained) example system, a computer interface can indeed adapt successfully to every user. Although it does not necessarily generalize to other user interfaces, the result supplies evidence to refute published objections to adaptive user modeling in general.

Item Open Access
Adaptive predictive text generation and the reactive keyboard (1988)
Darragh, John Joseph; Witten, Ian H.

Item Open Access
ADAPTIVE PREDICTIVE TEXT GENERATION AND THE REACTIVE KEYBOARD (1989-05-01)
Darragh, John J.; Witten, Ian H.
This paper explores the application of predictive text generation to the human-computer interface. Predictive techniques exploit the statistical redundancy of language to accelerate and amplify user inputs. Acceleration is achieved by making more likely language elements faster to select, while amplification is accomplished by selection of concatenated elements. The language models used are created adaptively, decoupling the prediction mechanism from the application domain and user's vocabulary, and conforming automatically to whatever kind of text is entered. A device called the Reactive Keyboard is described along with two user interface implementations, one for keyboard entry and the other for a mouse/window environment. A clear separation is made between the system's user interface and the underlying model it employs, and the two versions share the same prediction technique and adaptive modeling mechanism. The basic idea is to order context-conditioned candidate strings, which are predicted by the model, according to popularity and display them for selection.
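The closing idea of that abstract, ranking context-conditioned candidate strings by how often they have followed the current context, can be sketched in a few lines. This is a minimal illustration under assumed simplifications (a fixed-length character context and plain frequency counts), not the Reactive Keyboard implementation; the class and method names are invented for the example.

    from collections import defaultdict, Counter

    class ToyPredictor:
        """Rank candidate continuations of the current context by how often
        they have followed that context in the text seen so far."""

        def __init__(self, context_len=4):
            self.context_len = context_len
            self.freq = defaultdict(Counter)   # context -> Counter of next characters

        def learn(self, text):
            # Count which character follows each context of up to context_len characters.
            for i in range(len(text)):
                ctx = text[max(0, i - self.context_len):i]
                self.freq[ctx][text[i]] += 1

        def predict(self, context, k=5):
            # Offer the k most popular continuations, backing off to shorter
            # contexts when the longest one has never been seen.
            ctx = context[-self.context_len:]
            while ctx and ctx not in self.freq:
                ctx = ctx[1:]
            return [c for c, _ in self.freq[ctx].most_common(k)] if ctx in self.freq else []

    p = ToyPredictor()
    p.learn("the reactive keyboard predicts likely continuations of the text typed so far")
    print(p.predict("predicts like"))   # most popular next characters for this context

A model built this way needs no prior dictionary: everything it offers is derived from text already entered, which is the adaptive behaviour the abstract describes.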
Item Metadata only
ADAPTIVE TEXT COMPRESSION TO ENHANCE A MODEM (1983-10-01)
Darragh, John J.; Witten, Ian H.; Cleary, John G.
The design of a coding system is described and evaluated in the context of a computer-to-terminal modem connection. Unlike other compression problems, it is hard to characterise the kinds of information that may require processing. For this reason the system uses an adaptive Markov model, in conjunction with arithmetic coding. The compression performance improves with available memory. Two techniques for storing the Markov model are described. Experimental results are reported for a variety of sample texts. It is shown that effective line speeds can be at least doubled, and in some cases tripled, using less than 64 Kbytes of memory.

Item Metadata only
ARITHMETIC, ENUMERATIVE AND ADAPTIVE CODING (1982-07-01)
Cleary, John G.; Witten, Ian H.
Using the close relationship between arithmetic and enumerative codes, expressions are developed for the performance of various non-adaptive codes. It is then shown that there exist adaptive codes whose performance can be guaranteed to be better than or close to these non-adaptive codes. On some actual examples the adaptive codes are significantly better than the non-adaptive ones.

Item Open Access
AUTONOMY, INTELLIGENCE, AND INSTRUCTABILITY (1988-10-01)
Witten, Ian H.; MacDonald, Bruce A.
Instructable systems constitute an important, useful, and practically realizable step towards fully autonomous ones. In many applications people will not want machines to be self-motivated, but they will want to teach them new jobs. The user interface must permit the teacher to guide the system through tasks. The system employs samples of behavior so gathered to drive an inductive process of concept learning. Learning becomes intractable unless the teacher fulfils certain felicity conditions. The real world frequently constitutes a competitive learning environment, and instructable systems may have to guard against their knowledge and skills being corrupted by incorrect or deliberately misleading teachers. Experimental prototypes of two instructable systems are presented, one for verbally editing robot movements, the other for automating office tasks. These examples show the potential utility of approaching autonomy via instructability; the next steps are to extend the power of their learning mechanisms, and to render them robust.

Item Metadata only
BILEVEL DISPLAY OF CONTINUOUS-TONE IMAGES USING PEANO CURVES (1981-12-01)
Witten, Ian H.; Neal, Radford M.
A new method is presented for showing grey-scale images on black-and-white displays. An incremental representation is used to determine points along a line with controlled cumulative error. Undesirable interactions between adjacent lines are avoided by scanning along a recursive space-filling curve. Peano curves are chosen because they limit the cumulative error in binary subdivisions of the image. The method is an improvement over earlier ones because it can easily compensate for dot-overlap in ink-and-paper displays. It also appears to be suitable for colour halftones.
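The space-filling-scan idea in that abstract can be roughly illustrated: visit pixels along a recursive curve and carry the running quantization error from one pixel to the next, so thresholding errors do not accumulate along a single raster line. The sketch below substitutes a Hilbert curve for the Peano curve (a simpler curve to index) and a plain error-carry threshold for the paper's incremental line-drawing representation, so it shows the flavour of the technique rather than the authors' algorithm.

    def hilbert_point(order, d):
        """Map index d along a Hilbert curve filling a 2**order x 2**order grid to (x, y)."""
        x = y = 0
        s, t = 1, d
        while s < (1 << order):
            rx = 1 & (t // 2)
            ry = 1 & (t ^ rx)
            if ry == 0:                        # rotate/flip the quadrant as required
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            x, y = x + s * rx, y + s * ry
            t //= 4
            s *= 2
        return x, y

    def halftone(grey, order):
        """Bilevel rendering of a 2**order square greyscale image (values 0..255):
        threshold each pixel in curve order, carrying the error to the next pixel."""
        side = 1 << order
        out = [[0] * side for _ in range(side)]
        err = 0.0
        for d in range(side * side):
            x, y = hilbert_point(order, d)
            v = grey[y][x] + err
            out[y][x] = 255 if v >= 128 else 0
            err = v - out[y][x]                # carried along the curve, not along a raster line
        return out

    ramp = [[0, 85, 170, 255] for _ in range(4)]   # 4x4 ramp from black to white
    for row in halftone(ramp, 2):
        print(row)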
Item Open Access
BONSAI: A COMPACT REPRESENTATION OF TREES (1991-10-01)
Witten, Ian H.; Cleary, John G.; Darragh, John J.
This paper shows how trees can be stored in a very compact form, called "Bonsai", using hash tables. A method is described that is suitable for large trees that grow monotonically within a predefined maximum size limit. Using it, pointers in any tree can be represented within $6 + \lceil \log_2 n \rceil$ bits per node, where $n$ is the maximum number of children a node can have. We first describe a general way of storing trees in hash tables, and then introduce the idea of compact hashing which underlies the Bonsai structure. These two techniques are combined to give a compact representation of trees, and a practical methodology is set out to permit the design of these structures. The new representation is compared with two conventional tree implementations in terms of the storage required per node. Examples of programs that must store large trees within a strict maximum size include those that operate on trie structures derived from natural language text. We describe how the Bonsai technique has been applied to the trees that arise in text compression and adaptive prediction, and include a discussion of the design parameters that work well in practice.
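The "trees in hash tables" idea that Bonsai builds on can be shown with an ordinary dictionary: a node is just a number, and its child for a given symbol is found by looking up the pair (node, symbol). The compact-hashing step that yields the $6 + \lceil \log_2 n \rceil$ bit bound is omitted here, and the names below are illustrative only.

    class HashTrie:
        """Trie held in a single hash table keyed by (parent_id, symbol).
        A simplified illustration of storing trees in hash tables; it lacks
        the compact-hashing encoding that makes Bonsai nodes so small."""

        ROOT = 0

        def __init__(self):
            self.table = {}      # (parent_id, symbol) -> child_id
            self.next_id = 1

        def child(self, parent, symbol, create=False):
            key = (parent, symbol)
            node = self.table.get(key)
            if node is None and create:
                node = self.next_id
                self.table[key] = node
                self.next_id += 1
            return node

        def insert(self, word):
            node = self.ROOT
            for ch in word:
                node = self.child(node, ch, create=True)
            return node

        def contains(self, word):
            node = self.ROOT
            for ch in word:
                node = self.child(node, ch)
                if node is None:
                    return False
            return True

    trie = HashTrie()
    for w in ("bonsai", "bone", "bound"):
        trie.insert(w)
    print(trie.contains("bone"), trie.contains("bonfire"))   # True False

Because every edge lives in one table, no per-node child pointers are stored at all; Bonsai's contribution is encoding each table entry in very few bits.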
Item Open Access
CANDIDACY EXAMINATIONS 1991 (1991-12-01)
Witten, Ian H.
The Department of Computer Science at the University of Calgary holds both written and oral Candidacy Examinations for its Ph.D. students that are intended to ensure that every Ph.D. candidate has adequate background knowledge of the area of specialization in which he or she will pursue research. The written component is set individually for each student, and questions are generally set by members of the candidate's Supervisory Committee. Its scope is delimited by a reading list, prepared by the supervisor in consultation with the Supervisory Committee and other members of the department interested in the area of specialization, which is given to the candidate some months prior to the examination. In order to give examples of the scope of the candidacy examination, here are six reading lists and question papers that were set during 1991. They are all for students working in areas loosely related to artificial intelligence, and this should help to give some indication of the extent to which reading lists, and examinations, are tailored to individual students' interests. The written examination has both a closed-book and a take-home component. The former often contains more than one paper and the latter typically lasts for several days. The oral component follows within a few weeks, after the examination has been marked. It tends to focus on the student's answers to the written questions, and also on his or her research proposal.

Item Open Access
COMPARING HUMAN AND COMPUTATIONAL MODELS OF MUSIC PREDICTION (1992-05-01)
Witten, Ian H.; Manzara, Leonard C.; Conklin, Darrell
The information content of each successive note in a piece of music is not an intrinsic musical property but depends on the listener's own model of a genre of music. Human listeners' models can be elicited by having them guess successive notes and assign probabilities to their guesses by gambling. Computational models can be constructed by developing a structural framework for prediction, and "training" the system by having it assimilate a corpus of sample compositions and adjust its internal probability estimates accordingly. These two modeling techniques turn out to yield remarkably similar values for the information content, or "entropy," of the Bach chorale melodies. While previous research has concentrated on the overall information content of whole pieces of music, the present study evaluates and compares the two kinds of model in fine detail. Their predictions for two particular chorale melodies are analyzed on a note-by-note basis, and the smoothed information profiles of the chorales are examined and compared. Apart from the intrinsic interest of comparing human with computational models of music, several conclusions are drawn for the improvement of computational models.

Item Open Access
COMPLEXITY-BASED INDUCTION (1991-07-01)
Witten, Ian H.; Conklin, Darrell
A central problem in machine learning research is the evaluation of proposed theories from a hypothesis space. Without some sort of preference criterion, any two theories that "explain" a set of examples are equally acceptable. This paper presents complexity-based induction, a well-founded objective preference criterion. Complexity measures are described in two inductive inference settings: logical, where the observable statements entailed by a theory form a set; and probabilistic, where this set is governed by a probability distribution. With complexity-based induction, the goals of logical and probabilistic induction can be expressed identically: to extract maximal redundancy from the world, in other words, to produce strings that are random. A major strength of the method is its application to domains where negative examples of concepts are scarce or an oracle is unavailable. Examples of logic program induction are given to illustrate the technique.

Item Open Access
COMPRESSING THE DIGITAL LIBRARY (1994-03-01)
Bell, Timothy C.; Moffat, Alistair; Witten, Ian H.
The prospect of digital libraries presents the challenge of storing vast amounts of information efficiently and in a way that facilitates rapid search and retrieval. Storage space can be reduced by appropriate compression techniques, and searching can be enabled by constructing a full-text index. But these two requirements are in conflict: the need for decompression increases access time, and the need for an index increases space requirements. This paper resolves the conflict by showing how (a) large bodies of text can be compressed and indexed into less than half the space required by the original text alone, (b) full-text queries (Boolean or ranked) can be answered in small fractions of a second, and (c) documents can be decoded at the rate of approximately one megabyte a second. Moreover, a document database can be compressed and indexed at the rate of several hundred megabytes an hour.
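A common way to make a full-text index small, and an assumption here about the spirit of the approach rather than the paper's particular scheme, is to store each term's postings list as gaps between successive document numbers and to code those gaps with a variable-length code. The fragment below uses Elias gamma codes over the gaps as a stand-in.

    def gamma_encode(n):
        """Elias gamma code for n >= 1: a unary length prefix, then the number in binary."""
        b = bin(n)[2:]
        return "0" * (len(b) - 1) + b

    def encode_postings(doc_ids):
        """Code a sorted postings list as gamma-coded gaps between document numbers."""
        bits, prev = "", 0
        for d in doc_ids:
            bits += gamma_encode(d - prev)
            prev = d
        return bits

    def decode_postings(bits):
        docs, prev, i = [], 0, 0
        while i < len(bits):
            zeros = 0
            while bits[i] == "0":              # unary prefix gives the code length
                zeros += 1
                i += 1
            prev += int(bits[i:i + zeros + 1], 2)
            i += zeros + 1
            docs.append(prev)
        return docs

    postings = [3, 7, 8, 21]
    coded = encode_postings(postings)
    print(coded, decode_postings(coded))       # round-trips to [3, 7, 8, 21]

Small gaps, which dominate for common terms, cost only a few bits each, which is how a compressed index plus compressed text can end up smaller than the uncompressed text alone.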
Item Open Access
CONCEPT LEARNING: A PRACTICAL TOOL FOR KNOWLEDGE ACQUISITION? (1987-01-01)
Witten, Ian H.; MacDonald, Bruce A.
Knowledge acquisition has emerged as a key technology underpinning the creation of expert systems. A number of schemes have appeared recently which learn structural concepts from examples, thereby offering the potential to interact directly with domain experts to acquire knowledge. This paper examines the fundamental distinctions which underlie current research, relates them to plain questions about the capabilities of learning systems, and critically reviews representative systems from the perspective of practical knowledge acquisition.

Item Embargo
Conceptual analysis in prolog (1985)
Masrani, Roy W.; Witten, Ian H.

Item Metadata only
CONNECTING COMPUTERS TO MODEMS AND TERMINALS: FINDING YOUR WAY THROUGH THE STANDARDS JUNGLE (1982-04-01)
Witten, Ian H.
No Abstract

Item Open Access
CONSTRAINT-SOLVING IN INTERACTIVE GRAPHICS: A USER-FRIENDLY APPROACH (1988-11-01)
Maulsby, David L.; Kittlitz, Kenneth A.; Witten, Ian H.
Solving constraints is an important part of interactive graphics, and a number of constraint solvers that operate in this domain have been designed and implemented. However, most of these systems are deficient in two respects: the method of specifying constraints is counter-intuitive, and only a restricted class of constraints is representable. After describing the problems inherent in current systems, we propose a simple constraint solver and its user interface, Metamouse. The user of this system specifies constraints by giving examples in the form of execution traces; the system induces a generalized procedure. Thus constraint specification is natural (the user simply performs his task as usual), and the class of representable constraints includes anything the user could accomplish manually with the graphics editor.

Item Open Access
A COURSE ON "EXPERT SYSTEMS" FOR ELECTRICAL ENGINEERING STUDENTS (1986-08-01)
Witten, Ian H.
A final year undergraduate course on Expert Systems, designed for Electrical Engineering students, is described. To cater for this audience the course has a highly practical nature, despite the students' lack of relevant prerequisites in Computer Science. This is achieved by emphasizing logic programming throughout to illustrate all concepts taught; weekly, scheduled laboratory sessions; and a carefully-graded series of assignments. We have demonstrated that bright engineering students can get to grips with practical issues in applied artificial intelligence through a short, intensive course, starting from ground level. PROLOG was found invaluable as a pedagogical tool, as was the highly-structured engineering-style laboratory. Informal feedback indicates that the course has achieved its objectives and indeed exceeded expectations.

Item Metadata only
DATA COMPRESSION USING ADAPTIVE CODING AND PARTIAL STRING MATCHING (1982-09-01)
Cleary, John G.; Witten, Ian H.
The recently-developed technique of arithmetic coding, in conjunction with a Markov model of the source, is a powerful method of data compression in situations where a linear treatment is inappropriate. Adaptive coding allows the model to be constructed dynamically by both encoder and decoder during the course of the transmission, and has been shown to incur a smaller coding overhead than explicit transmission of the model's statistics. But there is a basic conflict between the desire to use high-order Markov models and the need to have them formed quickly as the initial part of the message is sent. This paper describes how the conflict can be resolved with partial string matching, and reports experimental results which show that English text can be coded in as little as 2.2 bits/character with no prior knowledge of the source.
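The partial string matching idea named in the final entry blends Markov models of several orders: a prediction is attempted with the longest available context, and an "escape" to a shorter context is taken when that context has not yet predicted the symbol. A miniature version is sketched below; the escape estimate used here (roughly PPM's "method A", without exclusions) is an assumption made for the example, not necessarily the variant reported in the paper.

    import math
    from collections import defaultdict, Counter

    class TinyPPM:
        """Estimate P(next char | context) by trying the longest context first
        and escaping to progressively shorter ones."""

        def __init__(self, max_order=3, alphabet_size=256):
            self.max_order = max_order
            self.alphabet_size = alphabet_size
            self.counts = defaultdict(Counter)   # context -> Counter of following characters

        def train(self, text):
            for i in range(len(text)):
                for k in range(self.max_order + 1):
                    if i - k >= 0:
                        self.counts[text[i - k:i]][text[i]] += 1

        def prob(self, context, ch):
            p_escape = 1.0
            for k in range(min(self.max_order, len(context)), -1, -1):
                ctx = context[len(context) - k:]
                c = self.counts.get(ctx)
                if not c:
                    continue
                total = sum(c.values())
                if ch in c:
                    return p_escape * c[ch] / (total + 1)
                p_escape *= 1.0 / (total + 1)     # escape to the next shorter context
            return p_escape / self.alphabet_size  # order -1: uniform over the alphabet

    m = TinyPPM()
    m.train("the quick brown fox jumps over the lazy dog. the quick brown fox.")
    p = m.prob("the quick b", "r")
    print(p, math.log2(1 / p), "bits")            # a familiar continuation costs well under a bit

Fed into an arithmetic coder, probabilities like these translate directly into code lengths of log2(1/p) bits per symbol, which is how adaptive models of this kind reach figures such as the 2.2 bits/character reported in the abstract.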