
Optical system auxiliary input calibration arrangement and method of using same

Info

Publication number
WO1993015496A1
Authority
WO
Grant status
Application
Patent type
Prior art keywords
light
signal
instruction
image
program
Prior art date
Application number
PCT/US1993/000874
Other languages
French (fr)
Inventor
Roger Marschall
Jeffrey W. Busch
Leonid Shapiro
Richard M. Lizon
Lane T. Hauck
Original Assignee
Proxima Corporation
Priority date
Filing date
Publication date

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRICAL DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 - Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/038 - Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
    • G06F3/0386 - Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry for light pen
    • G06F3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0416 - Control and interface arrangements for touch screen
    • G06F3/0418 - Control and interface arrangements for touch screen for error correction or compensation, e.g. parallax, calibration, alignment

Abstract

A method and optical system auxiliary input calibration arrangement (10A) accurately and reliably discriminate between a user generated image and a video source generated image, such as one produced by a computer. The calibration arrangement includes amplifier devices (176a) for increasing the strength of a video information signal (178A) indicative of the video generated image, and a user generated auxiliary input signal (179A) indicative of an auxiliary input light image, without increasing spurious ambient light signals. A discrimination circuit (46A) generates a detection signal whenever the amplified information signals are greater than a predetermined reference level signal. The microprocessor (42A) controls the exposure time of the light sensing device and selects an appropriate level of amplification for the output signal from the sensing device to increase light sensitivity between the video source generated light images as compared to the user generated auxiliary light images. The optical auxiliary input arrangement (11B) accurately and reliably discriminates between user generated double click information without the need of the user being so steady of hand as to cause the auxiliary light information beam to illuminate the same precise location on the viewing surface during the double click operation.

Description

OPTICAL SYSTEM AUXILIARY INPUT CALIBRATION ARRANGEMENT AND METHOD OF USING SAME

Cross-Reference to Related Applications

This application is a continuation-in-part of U.S. patent application filed February 14, 1991 entitled "METHOD AND APPARATUS FOR CALIBRATING GEOMETRICALLY AN OPTICAL COMPUTER INPUT SYSTEM," and a continuation-in-part application of U.S. patent application Serial No. 07/955,831 filed October 2, 1992, entitled "METHOD AND APPARATUS FOR CALIBRATING AN OPTICAL COMPUTER INPUT SYSTEM," which is a division of U.S. patent application Serial No. 07/611,416, filed November 11, 1990 entitled "METHOD AND APPARATUS FOR CALIBRATING AN OPTICAL COMPUTER INPUT SYSTEM," both of which are continuation-in-part applications of U.S. patent application Serial No. 07/433,029 filed November 7, 1989, entitled "COMPUTER INPUT SYSTEM AND METHOD OF USING SAME," now abandoned, all being incorporated herein by reference.

Technical Field

This invention relates to the general field of an optical input arrangement and a method of using such an arrangement. More particularly, the present invention relates to an optical calibration technique for use with an optical system auxiliary input, for facilitating accurate communication of user generated optical information utilized for display purposes. The present invention also relates to an auxiliary optical computer input system to facilitate recognition of an auxiliary optical input in a more precise and accurate manner. The present invention further relates to an optical auxiliary input technique for a system which projects a computer generated image onto a viewing surface.

Background Art

In one type of optical information system, computer generated images are projected onto a screen for viewing by a large number of people simultaneously. An important aspect of such a system is to enable a user to enter information interactively into the system to modify images, or to generate additional images, during a presentation.

In one successful arrangement, a user points a light generating device, such as a flashlight or laser pointer, at a projected image to provide auxiliary information for the system. In this regard, such a system generally includes a video information source, such as a computer, and a display projection arrangement, such as an overhead projector, for projecting images of the video information onto a viewing surface. An image processing arrangement detects and processes the displayed image reflecting from the viewing surface. Such a system detects the high intensity light images produced by the hand-held light generating device, and discriminates them from background ambient light as well as the light produced by the video information source. In this manner, light signals from the hand-held light generating device can be detected on the viewing surface, and then used by the system for modifying subsequently the projected video information. Such an optical auxiliary input system is described in greater detail in the above-mentioned U.S. patent application Serial No. 07/433,029.

While such an optical system and method of using it has proven highly satisfactory, such a system must be calibrated to assure the accurate communication of the user generated high intensity light information. Such calibration includes using a calibration arrangement to align properly an optical sensing device associated with the image processing arrangement relative to the viewing surface and the projected images. Such a calibration arrangement and method of using it, are described in greater detail in the above-mentioned copending U.S. patent application Serial No. 07/611,416. While such a calibration arrangement and calibration method has proven highly satisfactory under low ambient lighting conditions, such as in a darkened room, it would be desirable to facilitate calibration of such an optical system under a wide variety of ambient lighting conditions, even bright ambient lighting conditions.

Moreover, such a calibration technique should be able to be employed with many different types and kinds of optical systems generating images with substantially different luminance levels, as well as contrast levels between bright and dark images.

Such a calibration technique includes the proper alignment of the system, so that the viewing area of the system light sensing device is positioned properly to capture the entire computer generated projected image. Such alignment is desirable, because the viewing surface or screen of the system may be positioned at various distances and angular positions relative to the system light sensing device.

Also, the calibration of such a system entails sensitivity adjustments. Such adjustments are frequently necessary to accommodate various projector light source intensities, different optical arrangements employed in conventional overhead projectors, and different optical characteristics exhibited by the various liquid crystal display units employed in such systems. In this regard, calibration adjustments must be made to distinguish between the luminance levels of the various images reflecting from the viewing surface. Such adjustments, however, are dependent upon several factors: the optical characteristics of the overhead projector, including the power rating of its projector lamp; the optical characteristics of the liquid crystal display unit employed; the distance the overhead projector is positioned from the viewing surface; and the intensity level of the user generated auxiliary images reflecting from the viewing surface.

Each of the above-mentioned factors directly affects the ability of the light sensing device to receive properly a reflected image, whether produced via the light generating pointing device or the projection display arrangement. In this regard, for example, if the overhead projector utilized in the projection display arrangement is positioned a substantial distance from the viewing surface, the resulting image is large in size, but its overall luminance level is substantially reduced. Similarly, if an overhead projector employs a low intensity bulb, the projected image produced by the projector exhibits only a low luminance level.

Therefore, it would be highly desirable to have a new and improved calibration arrangement and method to calibrate the alignment, and improve the light sensitivity, of an optical information system. Such an arrangement and method should enable a user to align conveniently the system optical sensing device to capture substantially the entire viewing area of a projected image. The arrangement and method should also enable the light sensitivity of the system to be adjusted so that it can be utilized with different types and kinds of liquid crystal display projection systems, employing different liquid crystal display panels and projection systems.

Another form of light sensitivity calibration necessary for such an optical input information system is a calibration adjustment to distinguish between ambient background light, light from the high intensity user controlled light generating device, and the light produced from the video image reflecting from the viewing surface. In this regard, because of variations in ambient background lighting, as well as various different intensity levels of both the high intensity auxiliary control light image and the light produced by the projection system, it is, of course, desirable to distinguish properly the auxiliary light image on the viewing surface or screen from the other light being reflected therefrom. While the system has operated highly successfully for many applications, it has been difficult, in some situations, to distinguish properly between the various light sources. For example, a light sensing device, such as a charge-coupled camera, must be positioned not only in alignment to capture substantially the entire image reflecting from the viewing surface, but also in relatively close proximity to the viewing surface to produce a signal of sufficient potential to be processed for information extraction purposes.
Therefore, it would be highly desirable to have a new and improved calibration arrangement and method to calibrate the alignment and light sensitivity of an optical auxiliary input information system so that an adjustment can be made conveniently so the system produces a sufficient amount of light for information processing purposes.

Conventional charge-coupled cameras, by the nature of their sensitivity to different levels of light intensity, typically produce a "haystack" shaped waveform signal in response to sensing an image produced by a projection system, such as an overhead projector. The haystack signal results because the scattered light of the overhead projector typically emanates from a light bulb centrally disposed beneath the stage of the projector, so the reflected image is brightest at its center and dimmest at its edges. Such a response thus makes it difficult to detect accurately auxiliary light information reflecting at or near the boundaries of a reflected image.
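One way to visualize the problem is to normalize each scan line against a stored baseline of the haystack profile, captured with no auxiliary light present, so a fixed threshold works equally well at the center and the edges. This is a hypothetical sketch, not the patent's method; all names and values are illustrative.

```python
def normalize_scan_line(samples, baseline, floor=8):
    """Divide out the projector's center-bright "haystack" profile so a
    spot of auxiliary light stands out equally well anywhere on the line."""
    normalized = []
    for s, b in zip(samples, baseline):
        # Guard against dividing by near-zero values at the dark borders.
        normalized.append(s / max(b, floor))
    return normalized

# A haystack-shaped baseline: dim at the edges, bright in the middle.
baseline = [10, 40, 90, 120, 90, 40, 10]
# The same profile with an auxiliary spot near the left edge (+40).
frame = [10, 80, 90, 120, 90, 40, 10]

ratios = normalize_scan_line(frame, baseline)
# After normalization the spot stands out (ratio 2.0) while the rest
# of the line sits near 1.0, regardless of position on the line.
```

The spot near the boundary, which a raw-intensity threshold tuned for the bright center would miss, becomes the clear maximum of the normalized line.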

Therefore, it would be highly desirable to have a new and improved calibration arrangement and method for discriminating accurately and reliably between various intensities of light reflecting from a viewing surface under a wide variety of conditions. Such arrangement and method should also discriminate between different beam intensities produced by an auxiliary input light generating device over a wide variety of distances.

While such an optical system and method of using it as disclosed in U.S. patent application Serial No. 07/433,029 has proven highly satisfactory, the light generating device of such a system must be held in a very steady manner to assure the accurate communication of the user generated optical information. To help facilitate the accurate communication of such information, the light generating device generally includes a dual beam mode of operation. In this regard, the light generating device is activated by the user manually to generate a low intensity light beam to help the user position the auxiliary light beam on a desired location on the viewing screen without being sensed by the auxiliary input system. Once the low intensity beam is properly positioned, in response to the actuation of a switch the light generating device is then activated manually by the user to produce a high intensity light beam indicative of the auxiliary light information to interact with the computer. In this manner, the high intensity light signal from the hand held light generating device can generate auxiliary information for emulating a mouse. Such a dual beam light generating device and method of using it is described in greater detail in the above-mentioned U.S. patent application Serial No. 07/433,029. While such a light generating input device and input method has proven highly satisfactory for many applications, it would be desirable to provide a new and improved optical input arrangement and method that would more closely emulate both the single click and double click features of a mouse device in a more convenient manner. More particularly, while the dual beam feature greatly facilitates the single click feature, it has been difficult for a user to use such a device for the double click feature.

The optical auxiliary input system can perform various different control functions, including those performed by a conventional computer mouse input device. In this regard, the optical input system can perform such operations as a "single click," a "double click," and a tracking operation, as is well known in the art. It is very important that the optical input device be able to function in a manner similar to a conventional computer mouse, since many application computer programs used today are able to interface with a conventional mouse device for control purposes. In this manner, the optical input system should be compatible, more completely, with conventional application computer programs.

In this regard, the user must hold the light generating input device in such a steady manner, that the same location on the viewing screen is illuminated while the user turns the auxiliary light beam on and off in a repeated sequence. Thus, if the beam is not held at the same location on the viewing surface during the double click operation, the signal processing unit of the optical system can under certain circumstances misinterpret the auxiliary light information. For example, such a double actuation of the light could be interpreted as two, separate single click operations at two different locations on the screen. One manner of overcoming this problem is to have a much larger area on the screen to be hit by the high intensity light beam so that if the hand of the user should move inadvertently, the double click would still be interpreted correctly. However, this would require undue and unwanted restrictions on application computer programs. It would be far more desirable to have the optical auxiliary input device be more fully compatible with the existing computer program formats. Therefore, it would be highly desirable to have a new and improved optical auxiliary input arrangement and input method to more closely emulate a computer mouse type input device for use with an optical system. Such an arrangement and method should enable a user to emulate the mouse without the need of holding the auxiliary light image so steady that the auxiliary light information is projected precisely on the same location on the viewing screen during a double click operation.

The arrangement and method should also respond to the user in a fast and reliable manner to more completely emulate the functional features of a mouse input device. In this regard, such an arrangement should enable either a conventional computer mouse or the optical auxiliary input device, a light generating device, to communicate with the same video information source, whenever desired by the user, for modifying or changing displayed images in a simple and reliable manner.

In the past, projecting images onto a projection screen or other surface for viewing by a large number of people simultaneously, such as with transparencies and an overhead projector, provided a method for disseminating information in an efficient manner. However, because such transparencies were a fixed media, the user making the presentation was extremely limited in changing the form of the presentation except by using a large number of additional transparencies.

The ability of a user to change the form of a given presentation has been expanded significantly. In this regard, with the advancements in liquid crystal technology fixed media transparencies have evolved into dynamic images which are produced under the control of a computer or other video signal producing device. Thus, liquid crystal display panels have replaced the fixed transparencies to permit images, such as computer generated or video images, to be projected onto a screen or other surface for viewing purposes.

The capability of the presentation was expanded again when the user was given the ability to enter information interactively into the system in order to modify images, or generate additional images during the presentation, by simply directing a user controlled auxiliary beam of light onto specific areas of the projected image. In this manner, the user could interact with the computer or other device generating the projected image, in a manner similar to using a computer mouse control.

One such successful optical auxiliary input system is described in greater detail in the above-mentioned U.S. patent application Serial No. 07/901,253. The optical auxiliary input system described therein includes an arrangement wherein a user directs a high intensity light from a light generating device, such as a flashlight or a laser pointer, onto a relatively lower intensity projected image on a viewing area, such as a screen to provide auxiliary information for the system.

The system includes a video information source, such as a computer, and a display projection arrangement, such as an overhead projector, for projecting images of the video information onto the viewing surface. An image processing arrangement, including an optical sensing device, detects and processes the displayed image reflecting from the viewing surface. Such a system detects the high intensity light images produced by the hand-held light generating device, and discriminates the high intensity light images from background ambient light and light produced by the video information source. In this manner, light signals from the hand-held light generating device can be detected on the viewing surface, and then used by the system for modifying subsequently the projected video information.

The optical input light directed onto the viewing surface is detected by determining that the light intensity reflecting from the viewing surface has exceeded a predetermined reference level. In this regard, the high intensity auxiliary light source produces a brighter light than the intensity of the projected image. While such a technique is satisfactory for most applications, under certain conditions the high intensity input light shining on the viewing surface can go undetected. In this regard, if the input light is directed onto a portion of the projected image which is of a low intensity, the total light being reflected from the viewing surface will not exceed the predetermined reference, and thus the input light will not be detected. Thus, it would be highly desirable to have an even more precise and accurate detection technique for discriminating the auxiliary input signal from the projected image and the ambient light.
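The failure mode above, and the kind of difference-based detection the disclosure develops later, can be contrasted in a minimal sketch: a bright pointer spot on a dark part of the projected image can stay below a global reference level, yet it still produces a sharp pixel-to-pixel rise and fall. All threshold values here are illustrative assumptions.

```python
REFERENCE_LEVEL = 150        # assumed global threshold (arbitrary units)
POS_STEP, NEG_STEP = 40, 40  # assumed pixel-difference thresholds

def fixed_threshold_hits(scan):
    """Flag pixels whose absolute intensity exceeds the fixed reference."""
    return [i for i, v in enumerate(scan) if v > REFERENCE_LEVEL]

def difference_hits(scan):
    """Flag pixels that rise sharply above the previous pixel and fall
    sharply to the next one, independent of absolute brightness."""
    return [i for i in range(1, len(scan) - 1)
            if scan[i] - scan[i - 1] > POS_STEP
            and scan[i] - scan[i + 1] > NEG_STEP]

# Pointer spot (+60) landing on a dark region of the projected image (40):
scan = [40, 40, 40, 100, 40, 40, 40]
# The fixed reference misses the spot entirely; the difference
# detector finds it at index 3.
```

The difference detector responds to the shape of the spot rather than its absolute level, which is what lets it work on both bright and dark regions of the projected image.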

In order to function properly, such an auxiliary optical input system includes an optical sensing device, in the form of a video camera, associated with the image processing arrangement which must be properly aligned with the projected image. In this regard, the image must be completely within the area sensed by the optical sensing device.

Therefore, it would be desirable to have a new and improved technique which would enable a user to quickly and easily align the sensing device, such as the video camera, with the projected image on the viewing surface. In this regard, it would be highly desirable to have a technique whereby the user can align the sensing device in a matter of seconds, with little or no effort.

Disclosure of Invention

Therefore, it is the principal object of the present invention to provide a new and improved arrangement and method for calibrating an optical system auxiliary input arrangement for proper alignment and light sensitivity for a wide variety of conditions.

Another object of the present invention is to provide such a new and improved optical system auxiliary input calibration arrangement and method to adjust the alignment and light sensitivity of an optical auxiliary input arrangement in a convenient manner so that the arrangement receives a sufficient amount of light for information processing purposes.

A further object of the present invention is to provide such a new and improved optical system auxiliary input calibration arrangement and method for discriminating accurately and reliably between various types of light sources associated with optical information systems.

Briefly, the above and further objects of the present invention are realized by providing a new and improved auxiliary input calibration arrangement and method for improved alignment and light sensitivity.

A method and optical system auxiliary input calibration arrangement accurately and reliably discriminate between a user generated image and a video source generated image, such as one produced by a computer. The calibration arrangement includes amplifier devices for increasing the strength of a video information signal indicative of the video generated image, and a user generated auxiliary input signal indicative of an auxiliary input light image, without increasing spurious ambient light signals. A discrimination circuit generates a detection signal whenever the amplified information signals are greater than a predetermined reference level signal. A microprocessor calculates the appropriate predetermined reference level signal based upon ambient lighting conditions, the strength of that portion of the information signal indicative of the video image, the type of optical system and the distance the optical system is disposed away from a viewing surface.
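The reference-level calculation described above can be sketched as follows. The patent only states which inputs are considered (ambient light, video-image signal strength, panel type, and projector distance); the weighting of each factor here is purely an assumption for illustration.

```python
# Assumed per-panel contrast factors (arbitrary units, hypothetical).
PANEL_CONTRAST = {"monochrome": 1.0, "active_color": 1.4}

def reference_level(ambient, video_peak, panel_type, distance_ft):
    """Set the detection threshold above both the ambient floor and the
    projected video peak, with a margin that shrinks as the projector
    moves away and the reflected image dims."""
    contrast = PANEL_CONTRAST[panel_type]
    # Margin above the video peak, reduced with distance (illustrative).
    margin = 40.0 * contrast / max(distance_ft / 6.0, 1.0)
    return max(ambient, video_peak) + margin

# A close projector yields a generous margin above the video peak;
# the same setup farther away yields a tighter one.
near = reference_level(20, 120, "monochrome", 6)
far = reference_level(20, 120, "monochrome", 12)
```

This mirrors the behavior shown in the patent's FIG. 33A, where the reference level voltage varies with display type and distance from the viewing screen.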

The microprocessor controls the exposure time of the light sensing device and selects an appropriate level of amplification for the output signal from the sensing device to increase light sensitivity between the video source generated light images as compared to the user generated auxiliary light images.
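A hedged sketch of the gain-selection step described above: the controller steps through amplifier levels until the video-image peak sits in a usable range while leaving headroom for the brighter auxiliary spot. The gain table and target window are assumptions, not values from the patent.

```python
GAIN_LEVELS = (1, 2, 4, 8)          # hypothetical amplifier settings
TARGET_LOW, TARGET_HIGH = 100, 180  # desired peak range on an 8-bit scale

def select_gain(raw_peak):
    """Pick the smallest gain that lifts the video-image peak into the
    target window without clipping the 8-bit range; fall back to the
    highest gain for very dim scenes."""
    for gain in GAIN_LEVELS:
        amplified = min(raw_peak * gain, 255)
        if TARGET_LOW <= amplified <= TARGET_HIGH:
            return gain
    return GAIN_LEVELS[-1]

# A dim projected image (peak 30) needs 4x gain to reach the window,
# while a bright one (peak 110) is already usable at unity gain.
```

Keeping the amplified video peak below the top of the range is what preserves the contrast between the projected image and the still-brighter user generated spot.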

A dual beam light generating device produces both a low intensity laser beam for helping a user locate a desired portion of the video generated image, and a high intensity laser beam for providing auxiliary input light at the desired position of the image illuminated by the low intensity beam.

Therefore, it is the principal object of the present invention to provide a new and improved optical auxiliary input arrangement and method for more closely emulating a mouse input device.

Another object of the present invention is to provide such a new and improved optical auxiliary input arrangement for emulating more closely a mouse double click feature, without requiring the user to hold the auxiliary light beam so steady that it must be projected precisely on the same position of a viewing screen during the double click operation.

A further object of the present invention is to provide such a new and improved optical auxiliary input arrangement and method, which enables either a conventional mouse or the inventive light generating device to communicate with the video information source for modifying or changing displayed images, whenever desired by the user, in a simple and reliable manner.

Briefly, the above and further objects of the present invention are realized by providing a new and improved optical input arrangement and input method for emulating the functional features of a mouse input device in a more accurate and facile manner.

The optical auxiliary input arrangement for an optical system projecting computer generated images includes an image processing unit and communication interface for detecting the speed at which two high intensity auxiliary light images flash onto the projected computer image, to interpret the images as a mouse double click feature.

The optical auxiliary input arrangement accurately and reliably discriminates between user generated double click information without the need of the user being so steady of hand as to cause the auxiliary light information beam to illuminate the same precise location on the viewing surface during the double click operation. The image processing unit and communication interface cooperate together to permit both a low speed mouse and the high speed light generating device to communicate with the system.
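The double-click discrimination described above can be illustrated with a small sketch: two auxiliary-light flashes count as one double click when they occur quickly enough in succession, even if hand drift lands the spot at slightly different screen positions. The timing window and position tolerance below are assumed values, not figures from the patent.

```python
import math

DOUBLE_CLICK_WINDOW = 0.5   # seconds between flashes (assumption)
POSITION_TOLERANCE = 25.0   # pixels of allowable hand drift (assumption)

def classify_flashes(t1, pos1, t2, pos2):
    """Return 'double_click' for two rapid, nearby flashes; otherwise
    treat them as two independent single clicks."""
    drift = math.dist(pos1, pos2)
    if (t2 - t1) <= DOUBLE_CLICK_WINDOW and drift <= POSITION_TOLERANCE:
        return "double_click"
    return "single_clicks"

# Hand drifts 15 px between flashes 0.3 s apart: still one double click.
fast_nearby = classify_flashes(0.0, (100, 100), 0.3, (109, 112))
# The same spot a full second later reads as two separate single clicks.
slow_repeat = classify_flashes(0.0, (100, 100), 1.0, (100, 100))
```

Judging the interval between flashes, rather than demanding pixel-exact repetition, is what frees the user from having to hold the beam perfectly still.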

Therefore, it is the principal object of the present invention to provide a new and improved arrangement and method for detecting an optical input signal projected onto a projected image.

Another object of the present invention is to provide such a new and improved optical input arrangement and method for alignment adjustment in an even more convenient manner.

Briefly, the above and further objects of the present invention are realized by providing a new and improved optical input arrangement and method for improved detection of a high intensity auxiliary optical input signal image reflecting from a viewing surface.

An optical input arrangement and method includes an optical device for sensing a projected image and for detecting the presence of a high intensity optical input signal light by discriminating it from the entire projected image and the ambient light reflecting from a viewing surface. A determination is made as to when the difference in intensity of sequentially measured pixel intensity values of the light reflected from the viewing surface exceeds a positive threshold amount and substantially immediately thereafter decreases more than a negative threshold amount, to facilitate an even more precise discrimination between the input signal image and the overall projected image. An alignment device generates an optical signal for facilitating the alignment of the arrangement to capture the entire image reflecting from the viewing surface.

Brief Description of Drawings

The above mentioned and other objects and features of this invention, and the manner of attaining them, will become apparent, and the invention itself will be best understood, by reference to the following description of the embodiment of the invention in conjunction with the accompanying drawings, wherein:

FIG. 1A is a pictorial view of a calibration arrangement, which is constructed according to the present invention, illustrating its use with an optical auxiliary input system;

FIG. 1B is a pictorial view of an optical input arrangement which is constructed according to the present invention, illustrating its use with an optical system;

FIG. 1C is a diagrammatic view of an optical input arrangement, which is constructed according to the present invention;

FIG. 2A is a symbolic block diagram of the calibration arrangement of FIG. 1A illustrating it coupled to an image processing apparatus forming part of the optical auxiliary input system of FIG. 1A;

FIG. 2B is a symbolic block diagram of an image processing arrangement forming part of the optical input arrangement of FIG. 1B;

FIG. 2C is a front elevational view of an optical sensing device of the optical input arrangement of FIG. 1C;

FIG. 3A is a symbolic block diagram of an amplifier device of the calibration arrangement of FIG. 2A;

FIGS. 3B to 10B are flow diagrams of the program of a microprocessor forming part of the image processing arrangement of FIG. 2B;

FIG. 3C is a schematic diagram of an alignment generating device of the optical input arrangement of FIG. 1C;

FIG. 4A is a symbolic block diagram of another calibration arrangement, which is constructed in accordance with the present invention;

FIGS. 4C-5C are firmware flow chart diagrams for a signal processing unit of the arrangement of FIG. 1C;

FIG. 5A is a symbolic block diagram of still yet another calibration arrangement, which is constructed in accordance with the present invention;

FIG. 6A is a symbolic block diagram of still yet another calibration arrangement, which is constructed in accordance with the present invention;

FIGS. 6C-7C are intensity level versus time graphs depicting a typical detecting operation of the signal processing unit of the arrangement of FIG. 1C;

FIG. 7A is a graphical representation of the reflected light information signal generated by the light sensing device of FIG. 1A, illustrating the ambient background noise;

FIG. 8A is a graphical representation of the reflected light information signal of FIG. 1A, illustrating an insufficient black level signal voltage setting;

FIG. 9A is a graphical representation of the reflected light information signal of FIG. 7A, illustrating a properly adjusted black level signal voltage setting;

FIG. 10A is a graphical representation of the reflected light information signal generated by a light sensing device of FIG. 1A illustrating primary video information image;

FIG. 11A is a graphical representation of the reflected light information signal generated by the light sensing device of FIG. 1A illustrating both primary video image information and auxiliary image information;

FIG. 11B is a symbolic block diagram of a communication interface of FIG. 1B;

FIG. 12A is a graphical representation of the reflected light information signal of FIG. 11A, illustrating a discriminating reference level voltage;

FIGS. 13A to 32A are flow diagrams of a program for a signal processing unit of FIG. 2A;

FIG. 33A is a graphical representation of reference level voltages for different contrast levels relative to given types of display device as a function of distance from a viewing screen of FIG. 1A;

FIG. 34A is a graphical representation of the reflected light information signal generated by a light sensing device of FIG. 1A; and

FIG. 35A is a schematic diagram of the dual beam light generating device of FIG. 1A.

Best Mode for Carrying Out the Invention

Referring now to the drawings, and more particularly to FIGS. 1A and 2A, there is illustrated a calibration arrangement, generally indicated at 9A, for calibrating an optical auxiliary input system, generally indicated at 10A, which is constructed in accordance with the present invention.

The optical auxiliary input system 10A is more fully described in the above mentioned U.S. patent application Serial No. 07/433,029 and includes a video information source, such as a personal computer 12A, and a liquid crystal display unit 13A for displaying a primary image 24A indicative of the primary image information generated by the computer 12A. The liquid crystal display unit 13A is positioned on the stage of a projector, such as an overhead projector 20A, for enabling the displayed primary image information to be projected onto a viewing surface, such as a screen 22A. The optical auxiliary input system 10A also includes an image processing apparatus 14A and a dual beam light generating device 26A for generating auxiliary light information, such as a spot of reflected light 27A, for facilitating the modifying or changing of the primary image information displayed by the liquid crystal display unit 13A.

The image processing apparatus 14A generally includes a light sensing device, such as a raster scan charge coupled device or camera 34A, for generating a reflected light information signal 35A indicative of the luminance levels of the video images and other light reflecting from the surface of the screen 22A, and a signal processing unit 28A (FIG. 2A) coupled between the light sensing device 34A and the computer 12A by means (not shown) for converting the auxiliary light information generated by the device 26A into coordinate information to modify or change the displayed primary image information.

The light sensing device 34A, as best seen in FIG. 1A, has a field of view, indicated generally at 25A, that is substantially larger than the primary image 24A. In this regard, the calibration arrangement 9A helps a user 32A align the light sensing device 34A relative to the viewing screen 22A, so that the field of view 25A of the device 34A is able to capture all of the displayed primary image 24A reflecting from the screen 22A. The calibration arrangement 9A also helps facilitate adjusting the light sensitivity of the image processing apparatus 14A, so that the signal processing unit 28A can accurately and reliably process the auxiliary light information for use by the computer 12A.

As best seen in FIG. 2A, the calibration arrangement 9A generally includes a signal amplifier circuit, generally indicated at 39A, for increasing the strength of the reflected light information signal 35A generated by the light sensing device 34A and a signal discrimination arrangement, generally indicated at 40A, for discriminating auxiliary light information from the other information components in the reflected light information signal 35A.

The signal discrimination arrangement 40A includes a comparator 46A, for facilitating discrimination between signals indicative of the various sources of light reflecting from the viewing surface 22A, and a microprocessor 42A (FIG. 2A) for controlling a reference level signal 48A utilized by the comparator 46A for discrimination purposes. In this regard, it should be understood that the light reflecting from the viewing surface 22A has a plurality of luminance levels, generally including background ambient light; primary image light, such as the image 24A, indicative of primary image information; and user 32A generated auxiliary image light, such as the spot of light 27A, indicative of auxiliary light information.

The microprocessor 42A also controls the exposure time of the light sensing device 34A, gain selection for the amplifier arrangement 39A, and an offset black level signal 43A that will be described hereinafter in greater detail.

The calibration arrangement 9A further includes an interactive position device 44A having a set of light emitting diodes 70A-73A for helping a user 32A to align the device 34A so that its field of view 25A captures the entire image 24A reflecting from the viewing surface 22A. The positioning device 44A is more fully described in copending U.S. patent application Serial No. 07/611,416 and will not be described in further detail.

For the purpose of calibration and alignment, the firmware of the microprocessor 42A includes a set of calibration algorithms to facilitate the alignment of the light sensing device 34A relative to the optical auxiliary input system 10A. The calibration algorithms include a field of view alignment algorithm 100A for user interactive alignment of the light sensing device 34A under normal ambient and harsh ambient lighting conditions, and a light sensitivity algorithm 300A for adjusting the light sensitivity of the signal discrimination arrangement 40A for facilitating detection and tracking of auxiliary light images. Each of the above mentioned algorithms will be described hereinafter in greater detail.

Considering now the operation of the calibration arrangement 9A, when the computer 12A commences generating video information, the liquid crystal display unit 13A generates an initiation signal that is coupled to the calibration arrangement 9A via a control cable 37A. The calibration arrangement 9A, in response to the initiation signal, generates an audible sound by means not shown to notify the user 32A that he or she may now initiate the calibration process.

To start the calibration process, the user 32A depresses a calibration button 45A located on the positioning device 44A. When the user 32A depresses the button 45A, the calibration apparatus 9A, via its program, automatically instructs the user 32A through visual prompts via the light emitting diodes 70A-73A, how to position the device 44A so that the field of view 25A of the charge coupled device 34A captures the entire image 24A reflecting from the viewing surface 22A. In this regard, the field of view alignment algorithm 100A includes a normal alignment subroutine 150A that utilizes the edge portions of the reflected video image to align the device 34A, and an alternative alignment subroutine 200A for use if the background ambient lighting conditions are sufficiently harsh, or if the luminance level of the reflected video image is sufficiently attenuated, to prevent the normal alignment subroutine 150A from effectively aligning the device 34A.

In the event the device 34A cannot be aligned via the normal alignment subroutine 150A, the calibration arrangement 9A generates a distinguishable audible sound to notify the user 32A that he or she must use the alternative method of alignment. In this regard, the user 32A must depress the button 45A again and then activate the light generating device 26A to cause a high intensity auxiliary light image, such as the light spot 27A, to be reflected from the center of the projected image. The calibration arrangement 9A responds to the user via the alternative field of view alignment subroutine 200A by using the auxiliary light image 27A for aligning the light sensing device 34A.

Regardless of which of the subroutines 150A, 200A is utilized, both subroutines 150A and 200A cause the light emitting diodes 70A-73A to turn on and off in various configurations to provide the user 32A with visual prompts for aligning the light sensing device 34A via the positioning device 44A. Once the field of view 25A of the device 34A captures the center portion of the image 24A, all of the diodes 70A-73A are de-energized to notify the user 32A that the device 34A is properly aligned.

Once the device 34A has been properly aligned to capture the entire video image 24A, the program initiates the light sensitivity procedures to set up the internal light sensitivity factors for the arrangement 40A. Such internal light sensitivity factors include a black level factor determined by the voltage potential of the black level signal 43A, a reference level factor determined by the voltage potential of the reference level signal 48A, and a gain factor determined by the voltage potential of a gain select signal 47A (FIG. 3A). Each of these factors will be described in greater detail hereinafter.

Once the sensitivity factors have been set up, the user 32A causes a spot of light to be reflected on and off at each respective corner of the image 24A, so that the optical auxiliary input system 10A will be able to generate accurate and reliable coordinate information in response to detecting a spot of light produced by the device 26A. This latter process is more fully described in copending U.S. patent application Serial No. 07/611,416 and will not be described in further detail.

It should be understood however, that the above described technique enables the microprocessor 42A to be informed of the raster scan coordinate locations of the charge coupled device 34A that correspond to the corner coordinate locations of the projected image. The microprocessor 42A then utilizes this acquired information to compute the conversion of the charge coupled coordinate location information into displayed image coordinate information that corresponds to pixel locations in the projected image 24A. The method of computation is more fully described in copending U.S. patent application Serial No. 07/656,803.
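A minimal sketch of such a conversion is given below, under the simplifying assumption that the captured image occupies an axis-aligned rectangle on the sensor. The function name, the corner-coordinate arguments, and the 640 x 480 image size are illustrative only; the full method of application Serial No. 07/656,803 is not reproduced here.

```python
def ccd_to_image(x, y, top_left, bottom_right, width=640, height=480):
    """Convert a raster scan location (x, y) on the charge coupled
    device into a pixel location in the projected image, given the
    sensor coordinates of the image's top-left and bottom-right
    corners as found during calibration.  Keystone distortion is
    ignored in this simplified sketch."""
    x0, y0 = top_left
    x1, y1 = bottom_right
    # Linearly rescale the sensor coordinates into image pixel space.
    px = (x - x0) * (width - 1) / (x1 - x0)
    py = (y - y0) * (height - 1) / (y1 - y0)
    return round(px), round(py)
```

For example, with corners at (100, 50) and (300, 250), the sensor location (100, 50) maps to image pixel (0, 0) and (300, 250) maps to (639, 479).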

From the foregoing discussion, one skilled in the art will understand that once the processing unit 28A has acquired the above mentioned calibration information, the optical auxiliary input system 10A, via user generated auxiliary light images, can supply auxiliary video information to the computer 12A, which in turn, can generate primary video information that corresponds to the exact location of the auxiliary light image. Thus, prior to any video image being displayed by the display unit 13A via computer generated video information, the optical auxiliary input system 10A can generate, in a completely asynchronous manner, independent auxiliary video information.

To align the charge coupled device 34A so that its field of view captures the entire primary image 24A, the microprocessor 42A generates an exposure rate or time signal 31A that causes the charge coupled device 34A to produce the reflected light information signal 35A. In this regard, if the exposure time selected for the charge coupled device 34A is not sufficient to enable the device 34A to generate an output signal of sufficient magnitude, the microprocessor 42A will increase the gain of the signal amplifier circuit 39A relative to the exposure time. The microprocessor 42A repeats this adjustment until proper gain and exposure time levels are determined.

The microprocessor 42A also causes the reference level signal to be set near zero to enable the output signal from the charge coupled device 34A to be passed by the comparator 46A. In this regard, the signal passed by the comparator 46A is coupled to the microprocessor 42A in the form of coordinate information that enables the microprocessor 42A to determine the size of a captured image relative to certain prestored expected maximum and minimum coordinate values.
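A minimal sketch of this size comparison, and of the directional prompts it can drive, follows. The (min_x, min_y, max_x, max_y) tuple layout and the four direction names are illustrative assumptions rather than the actual firmware encoding.

```python
def led_prompts(captured, expected):
    """Compare the captured image's bounding box against the prestored
    expected minimum and maximum coordinate values, and return which
    directional prompts (modeled here as 'up', 'down', 'left',
    'right') should be indicated to the user.  Both arguments are
    (min_x, min_y, max_x, max_y) tuples in sensor coordinates."""
    cmin_x, cmin_y, cmax_x, cmax_y = captured
    emin_x, emin_y, emax_x, emax_y = expected
    prompts = set()
    if cmin_x < emin_x:
        prompts.add('left')    # image extends past the left bound
    if cmax_x > emax_x:
        prompts.add('right')   # image extends past the right bound
    if cmin_y < emin_y:
        prompts.add('up')      # image extends past the top bound
    if cmax_y > emax_y:
        prompts.add('down')    # image extends past the bottom bound
    return prompts             # empty set: image fits within bounds
```

An empty result would correspond to all of the diodes 70A-73A being de-energized, i.e. proper alignment.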

After comparing the coordinate information with the maximum and minimum values, the microprocessor 42A determines the direction in which the field of view 25A of the device 34A needs to be adjusted to capture the entire image 24A. After determining the correction factors, the microprocessor generates a set of signals which cause the light emitting diodes 70A-73A to be turned on or off in a particular configuration, thus providing the user 32A with a visual indication of how to adjust the positioning device 44A so the field of view 25A will capture a greater portion of the image 24A. This process is repeated iteratively until the entire image 24A is captured within the field of view 25A of the device 34A.

After the field of view 25A has been properly aligned, the microprocessor 42A adjusts the light sensitivity. In this regard, the microprocessor 42A computes a reference level voltage that is sufficiently large to prevent low intensity auxiliary light information from being passed by the comparator 46A, but that is not so large as to prevent high intensity auxiliary light information from being passed by the comparator 46A. In order to compute the desired reference level voltage, the microprocessor 42A generates a bright image signal that causes the display unit 13A to produce a bright, clear white image, which causes, in turn, the charge coupled device 34A to produce a bright image information signal 60A (FIG. 34A). The microprocessor 42A then adjusts the reference level signal 48A to a sufficient level to prevent the bright image information signal 60A from being passed by the comparator 46A. The bright image reference level voltage is indicated as b in FIG. 34A. Next, the microprocessor 42A generates a dark image signal that causes the display unit 13A to produce a dark noncolored image, which causes, in turn, the charge coupled device 34A to produce a dark image information signal 61A (FIG. 34A). The microprocessor then adjusts the reference level signal 48A to a sufficient level to prevent the dark image information signal 61A from being passed by the comparator 46A. The dark image reference level voltage is indicated as d in FIG. 34A.

Next, the microprocessor 42A determines the model of the display unit 13A by communicating with the display unit 13A to obtain model number information. The microprocessor 42A utilizes the model number information to retrieve a set of adjustment factors that are utilized to compute the desired reference level voltage in accordance with the following formula:

Y = mx + b = desired reference level voltage

where

m = factor 1

x = (reference level voltage for bright image) - (reference level voltage for dark image)

b = (reference level voltage for bright image) + factor 2, in volts

The above mentioned equation expresses the relationship between the image information signals 60A and 61A for a given type of display unit and projection unit that enables the information signals resulting from a dual intensity auxiliary light beam to be distinguished by the comparator 46A.
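Expressed as code, the computation might look as follows, where factor1 and factor2 stand for the model-dependent adjustment factors retrieved for the display unit (the numeric values of Table IIIA are not reproduced here):

```python
def desired_reference_level(bright_v, dark_v, factor1, factor2):
    """Compute the desired reference level voltage Y = m*x + b, where
    m is the first model-dependent adjustment factor, x is the bright
    image reference level voltage less the dark image reference level
    voltage, and b is the bright image reference level voltage plus
    the second factor (all quantities in volts)."""
    m = factor1                 # slope: model-dependent factor 1
    x = bright_v - dark_v       # spread between bright and dark levels
    b = bright_v + factor2      # bright level offset by factor 2
    return m * x + b
```

With illustrative (not Table IIIA) values factor1 = 0.5 and factor2 = 0.1, bright and dark reference levels of 2.0 V and 1.0 V give a desired reference level of 2.6 V.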

From the foregoing, it will be understood that a low intensity auxiliary light beam can be displayed on any location of the projected image 24A, without generating auxiliary light information. Such a low intensity beam can therefore help a user in locating a precise spot on the image 24A before illuminating that spot with high intensity light or with auxiliary light information. Table IIIA illustrates various factors relative to a selected number of display units manufactured and sold by Proxima Corporation of San Diego, California.

Considering now the signal amplifier circuit 39A in greater detail with reference to FIGS. 2A and 3A, the amplifier arrangement 39A is coupled between the light sensing device 34A and the comparator 46A. The arrangement 39A generally comprises a direct current restoration and notch filter 75A having its input coupled via a conductor 35BA, to the reflected light information signal 35A produced by the charge coupled device 34A. The filter 75A helps remove extraneous noise from the reflected light information signal 35A before the signal is amplified and passed to the comparator 46A.

The amplifier arrangement 39A also includes a four stage multiple gain circuit indicated generally at 76A. The four stage multiple gain circuit 76A enables the reflected light information signal 35A to be amplified to four discrete gain levels of 50, 100, 200 and 400 respectively. In this regard, the circuit 76A generally includes an input stage or multiply-by-50 amplifier 77A coupled to the output of the filter 75A, and a set of series connected multiply-by-2 amplifier stages 78A, 79A and 80A respectively. The amplifier arrangement 39A also includes an analog multiplexer unit 81A coupled to the output of each one of the amplifier stages 77A-80A for enabling selected ones of the stages 77A-80A to be coupled to the comparator 46A. In order to control which stage of the multiple gain circuit 76A will be coupled to the comparator 46A, the multiplexer 81A is coupled via a conductor 47AA to the gain select signal 47A produced by the microprocessor 42A. The output of the analog multiplexer 81A is coupled to a video signal input 35AA of the comparator 46A via a conductor 81AA.
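A minimal model of this cascade, assuming the gain select signal simply indexes which stage output the multiplexer passes:

```python
# The cascade: a multiply-by-50 input stage followed by three
# multiply-by-2 stages yields the four selectable gains.
STAGE_GAINS = [50, 2, 2, 2]

def selected_gain(stage_index):
    """Return the overall amplifier gain when the analog multiplexer
    couples the output of stage `stage_index` (0 through 3) to the
    comparator, mirroring the 50/100/200/400 levels of circuit 76A."""
    gain = 1
    for stage in STAGE_GAINS[:stage_index + 1]:
        gain *= stage
    return gain
```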

In order to offset the output signal of the signal amplifier circuit 39A relative to the reflected light information signal 35A, the input to the multiply-by-50 amplifier 77A is coupled via a conductor 43AA to the black level signal 43A produced by the microprocessor 42A. In this regard, the black level signal 43A causes the reflected light information signal 35A to be raised and lowered relative to a zero volt reference level, as best seen in FIGS. 6A to 8A.

Considering now the signal discrimination arrangement 40A in greater detail with reference to FIG. 2A, the microprocessor 42A controls the exposure time of the charge coupled device 34A, the reference level signal 48A for the comparator 46A, and the black level and gain select for the signal amplifier circuit 39A. In this regard, in order to convert the digital control signals produced by the microprocessor 42A to analog voltages, the signal discrimination arrangement 40A includes a set of digital to analog converters, including a reference level signal converter 82A coupled to a positive input terminal of the comparator 46A, and a black level signal converter 83A coupled to the input of the amplifier arrangement 39A. The exposure time signal 31A is coupled directly to the timing generator 88A from the microprocessor 42A via a conductor 84A. As best seen in FIG. 2A, the signal discrimination arrangement 40A also includes a counter arrangement 86A and a timing generator 88A.

The counter arrangement 86A includes a horizontal counter and latching arrangement 89A and a vertical counter arrangement 90A. The counter arrangement 86A is synchronized with the raster scan sequence of the charge coupled device by a pixel clock generated by the timing generator 88A. In this regard, the microprocessor 42A and the timing generator 88A cooperate together to control the exposure time and scanning sequence of the charge coupled device 34A. More particularly, they cooperate together so that the device 34A will produce an output signal of sufficient magnitude in response to the detection of light.

Considering now the light generating device 26A in greater detail with reference to FIGS. 1A and 35A, the light generating device 26A includes a laser 85A powered by a battery 86A. The laser 85A produces a low intensity beam 87A for helping the user 32A locate a desired portion of the image to illuminate with the auxiliary light image 27A, and a high intensity beam 89A for generating the auxiliary light image 27A. A dual position switch actuator, indicated generally at 90A, disposed on a handle 92A of the device 26A enables the user to switch beam intensities. The actuator 90A includes a low intensity light switch 93A and a high intensity light switch 95A. In this regard, when the user 32A depresses the actuator 90A to a first or low beam position, switch 93A is enabled, causing the device 26A to produce the low intensity beam 87A. When the user 32A fully depresses the actuator 90A to a second or high beam position, switch 95A is enabled, causing the device 26A to produce the high intensity beam 89A.

From the foregoing, it should be understood that the low beam mode of operation enables the user 32A to easily and quickly locate desired portions of the image 24A, without causing the generation of auxiliary light information.

Thus, once a desired location is determined, the user 32A merely further depresses the actuator 90A to generate the auxiliary light image.

Considering now the field of view alignment algorithm 100A in greater detail with reference to FIG. 13A, the algorithm 100A commences at an instruction 102A in response to a user 32A depressing the calibration button 45A. Instruction 102A causes the microprocessor 42A to generate an initiation signal that in turn causes all of the light emitting diodes 70A-73A to be illuminated. The configuration of all of the diodes being illuminated informs the user 32A that either the normal alignment subroutine 150A or the alternative alignment subroutine 200A will be used to align the field of view of the device 34A.

The program proceeds from instruction 102A to instruction 104A, which causes the microprocessor 42A to generate a minimum gain control signal and a minimum exposure time signal that are coupled to the amplifier arrangement 39A and the charge coupled device 34A respectively. Setting the gain of the amplifier arrangement 39A to a minimum value, coupled with a minimum exposure time setting, assures that the calibration arrangement 9A will be able to detect the peak portions of the reflected image information signal produced by the charge coupled device 34A. The peak portions include a primary information peak portion resulting from the computer generated image 24A, and an auxiliary information peak portion resulting from any user 32A generated auxiliary light image 27A reflecting from the viewing surface 22A.

The program then advances to an instruction 106A that causes an internal line width register (not shown) in the microprocessor 42A to be set to zero. The line width register is utilized to enable the calibration arrangement 9A to detect that portion of the reflected light information signal 35AA which is indicative of the widest area of projected light.

The program proceeds to instruction 108A, which causes the microprocessor 42A to set the reference level signal 48A near zero. Setting the reference level signal 48A near zero allows substantially any video signal produced via the amplifier arrangement 39A to be passed by the comparator 46A. Thus, the zero reference level is a desired level for a black video image.

The program continues by causing an instruction 110A to be performed. In this regard, the microprocessor 42A generates a starting black level signal which is amplified by the amplifier arrangement 39A under the minimum gain setting. The purpose of instruction 110A is to keep the elapsed time for calibration purposes low. Thus, the starting black level and the incremental amount by which the black level is increased have been made gain dependent. In this regard, the black level adjustment has a range of 0 to 255, where 255 sets the black level at its lowest setting. Table IA shows the gain and black level relationships.

Table IA

The program continues to an instruction 112A to initiate a scanning sequence by the charge couple device 34A after about a 60 millisecond delay. The 60 millisecond delay is to allow the system hardware to properly settle after a major change in either the black level, the exposure time or the voltage potential of the reference level signal.

Instruction 113A is executed next to set a return address indication to return location 116A. After instruction 113A is executed, the system advances to a call instruction 114A that calls a black level set subroutine 500A (FIG. 32A) that will be described hereinafter.

When the black level set routine 500A is executed, the black level signal 43A is adjusted to near zero volts by first setting the black level high and then causing the black level signal 43A to be decreased until the widest video signal is found. FIG. 7A illustrates the reflected light information signal 35AA received by the comparator 46A, where a starting black level voltage setting is substantially above the reference level. FIG. 9A illustrates an adjusted black level signal with a corresponding widest video signal 35AA. FIG. 8A illustrates the information signal 35AA received by the comparator 46A, where the black level voltage setting is substantially below the reference level.
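The behavior of subroutine 500A can be sketched as the following search, where scan_width_at stands in for running a hardware scan that reports the detected video signal width at a given black level setting (an assumed interface; the real routine drives the black level converter and counter arrangement):

```python
def set_black_level(scan_width_at, start=255):
    """Sketch of black level set subroutine 500A: begin with the black
    level register at its highest setting (the 0-255 range noted with
    Table IA) and decrease it one step at a time until the measured
    video signal width stops growing, i.e. the widest video signal
    has been found."""
    best_level = start
    best_width = scan_width_at(start)
    level = start - 1
    while level >= 0:
        width = scan_width_at(level)
        if width <= best_width:
            break               # width no longer increasing: done
        best_level, best_width = level, width
        level -= 1
    return best_level
```

For instance, if the measured width peaks at a black level setting of 120, the search stops there and returns that setting.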

After the black level set routine 500A has been executed, the program returns to return location 116A and proceeds to an instruction 118A. At instruction 118A, the microprocessor 42A sets the reference level signal 48A near its maximum value in order to initiate a search for the presence of auxiliary light information. More particularly, the program seeks to determine whether the user 32A has initiated the alternative field of view calibration process by activating the light generating device 26A.

After the reference level signal 48A has been set near its maximum level, the program proceeds to a decision instruction 120A to determine whether the charge coupled device 34A has completed its scan. If the scan has not been completed, the program waits at instruction 120A until the scan is completed.

When the scan sequence is completed, the program advances to a decision instruction 122A to determine whether any light was captured during the scan. If no light was detected, the program goes to an instruction 124A that causes the voltage of the reference level signal 48A to be decreased by about 0.5 volts, i.e. one large incremental value. The program then proceeds to a decision instruction 126A to determine whether the reference level signal 48A has been decreased below a predetermined minimum value.

If the reference level signal 48A has been set below the predetermined minimum value, the program proceeds to the normal field of view alignment subroutine 150A. If the reference level signal 48A has not been set below the predetermined minimum value, the program goes to an instruction 128A that causes the light sensing device 34A to initiate another scanning sequence. After the scanning sequence has been started, the program returns to the decision instruction 12OA and proceeds as previously described.

At decision instruction 122A, the program advances to an instruction 130A if light was detected at the present voltage potential for the reference level signal 48A. At instruction 130A, the voltage of the reference level signal 48A is increased by about 0.5 volts. In other words, the voltage of the reference level signal 48A is set at a level where light was not detected.

After increasing the voltage level of the reference level signal 48A, the program proceeds to an instruction 132A that causes the light sensing device 34A to commence another scanning sequence. The program then goes to a decision instruction 134A.

At decision instruction 134A, the program determines whether the last initiated scanning sequence has been completed. If the sequence has not been completed, the program waits at decision instruction 134A. When the scanning sequence has been completed, the program advances to a decision instruction 136A to determine whether any light has been detected at the present reference level voltage. As instruction 130A previously set the voltage of the reference level signal 48A at a sufficiently high level to prevent the detection of light, no light will be found during this scan sequence. The program therefore advances to an instruction 138A.

At instruction 138A, the microprocessor 42A causes the value of the current reference level voltage to be saved as a possible reference level voltage that is indicative of the peak of the auxiliary light image. After the value of the reference level voltage has been saved, the program goes to an instruction 140A. At instruction 140A the microprocessor 42A causes the voltage of the reference level signal 48A to be decreased by about 0.1 volts, i.e. one small increment.

After the value of the reference level voltage has been decreased, the program advances to a decision instruction 142A to determine whether the reference level signal 48A is below a predetermined minimum value. If the value is not below the predetermined value, the program returns to instruction 132A and proceeds as previously described. If the value of the reference level signal 48A is below the predetermined minimum value, the program proceeds to a decision instruction 144A to determine whether an auxiliary light image has been detected. In this regard, the microprocessor 42A determines whether the previously saved reference level voltage less the present reference level voltage is greater than a predetermined constant. If the auxiliary light image has not been detected, the program proceeds to the normal alignment subroutine 150A. If the decision instruction 144A determines that an auxiliary light image has been detected, the program goes to an instruction 146A and computes the strength of the auxiliary light image from the following formula:

Strength of Auxiliary Light Image = (Saved Reference Voltage - Current Reference Voltage) / 2 + Current Reference Voltage
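In code, the strength computation of instruction 146A amounts to taking the midpoint of the saved (peak) and current reference level voltages:

```python
def auxiliary_light_strength(saved_voltage, current_voltage):
    """Half the difference between the saved reference level voltage
    and the current reference level voltage, added back to the
    current reference level voltage -- i.e. the midpoint of the two
    values, per the strength formula of instruction 146A."""
    return (saved_voltage - current_voltage) / 2 + current_voltage
```

For example, a saved voltage of 3.0 V and a current voltage of 2.0 V yield a strength of 2.5 V.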

After the strength of the auxiliary light image has been computed, the program proceeds to an instruction 148A. The instruction 148A causes the display unit 13A to override the computer generated video image. In this regard, the projected images go to a blank image and then back to the computer generated image. This "flashing" sequence notifies the user 32A that the auxiliary light image has been detected and that alignment of the charge coupled device 34A will proceed using the alternative alignment subroutine 200A.

Considering now the alternative alignment subroutine 200A in greater detail with reference to FIGS. 16A to 17A, the alternative alignment subroutine 200A commences at an instruction 202A which causes a scanning sequence to be initiated. The program then goes to a decision instruction 204A to wait for the scanning sequence to be completed. When the scanning sequence is completed, the program advances to a decision instruction 206A to determine whether the auxiliary light image 27A has been detected. If the auxiliary light image 27A is not detected, the program goes to an instruction 208A that causes all of the light emitting diodes 70A-73A to be illuminated. This particular configuration of illuminated diodes informs the user 32A that the auxiliary light image was not detected. The program then returns to instruction 202A to start another scanning sequence. It should be understood that the program will proceed through the above described program sequence 202A, 204A, 206A, 208A, 202A, . . . repeatedly until an auxiliary light image is detected, thus providing the user 32A with notification that an error condition exists and that corrective action is required.

Referring again to decision instruction 206A, if the auxiliary light image is detected, the program goes to a decision instruction 210A to determine whether the auxiliary light image 27A has been detected within the middle of the field of view 25A of the charge coupled device 34A.

If the detected image is not within the middle of the field of view, the program goes to an instruction 212A that causes appropriate ones of the diodes 70A-73A to be illuminated or turned off. The diodes 70A-73A thus provide a visual indication to the user 32A of how to move the positioning device 44A to bring the detected auxiliary light image 27A into the center of the field of view of the device 34A. In this regard, the calibration arrangement 9A requires the detected auxiliary light image 27A to be positioned in a small imaginary rectangle in the middle of the field of view of the device 34A.
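The directional LED guidance described above can be sketched as follows. This is a minimal illustration, not from the patent: the function name, the coordinate convention, and the direction labels are all assumptions, and a centered spot yields an empty set (all diodes off).

```python
def led_pattern(x, y, cx, cy, half_w, half_h):
    """Return which direction LEDs (standing in for diodes 70A-73A) to light.

    (x, y)          - detected auxiliary light image position in the field of view
    (cx, cy)        - center of the field of view
    half_w, half_h  - half-size of the small imaginary centering rectangle
    """
    leds = set()
    if x < cx - half_w:
        leds.add('left')       # spot is left of the centering rectangle
    elif x > cx + half_w:
        leds.add('right')      # spot is right of the centering rectangle
    if y < cy - half_h:
        leds.add('up')         # spot is above the centering rectangle
    elif y > cy + half_h:
        leds.add('down')       # spot is below the centering rectangle
    return leds                # empty set => centered: all LEDs turned off
```

An empty result corresponds to the "all diodes off" indication the text describes for completed center alignment.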

After providing the user 32A with a visual indication of how to position the charge coupled device 34A, the program proceeds to an instruction 214A to preset an internal timer (not shown) in the microprocessor 42A to a predetermined elapsed time. As will be explained hereinafter, the device 44A must remain in alignment for a predetermined period of time. Once the timer has been set, the program returns to the instruction 202A to initiate another scanning sequence. In this regard, the program proceeds as previously described until the user 32A properly aligns the device 34A. When the light sensing device 34A is aligned, all of the light emitting diodes 70A-73A turn off, thus providing the user 32A with a visual indication that center alignment has been completed.

When center alignment has been completed, the program proceeds from the decision instruction 210A to a decision instruction 216A to determine whether the internal timer has completed its timing sequence. If the timer has not timed out, the program returns to the instruction 202A and repeats the sequence 202A, 204A, 206A, 208A, 216A, 202A . . . until the timer has completed its timing sequence.
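The center-and-hold behavior above — alignment is accepted only after the image stays centered for the timer's full duration, and any off-center reading presets the timer again — can be sketched like this. The function name and the use of a scan count in place of the microprocessor's elapsed-time timer are assumptions.

```python
def wait_for_stable_center(scan, hold_scans=10):
    """Block until the image has stayed centered for `hold_scans` consecutive scans.

    scan() -> True when the detected image is centered on this scanning sequence.
    `hold_scans` stands in for the predetermined elapsed time of the internal timer.
    """
    remaining = hold_scans
    while remaining > 0:
        if scan():
            remaining -= 1           # centered: let the timer run down
        else:
            remaining = hold_scans   # off-center: preset the timer again
    return True
```

Any momentary loss of centering restarts the hold period, mirroring the return to instruction 202A with the timer preset.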

Once the timing sequence has been completed, the program proceeds from the decision instruction 216A to an instruction 218A. Execution of instruction 218A causes the display unit 13A to override the computer generated image information and display a black image. The program then proceeds to an instruction 220A that causes the calibration arrangement 9A to generate a "chirp" sound to notify the user 32A that he or she should turn off the light generating device 26A. The program proceeds from the instruction 220A to an instruction 222A, to start another scanning sequence. The program then advances to a decision instruction 224A to wait for the scanning sequence to be completed.

When the scanning sequence has been completed, the program proceeds from the decision instruction 224A to a decision instruction 226A to verify that the user 32A has deactivated the light generating device 26A; i.e. the auxiliary light image 27A is no longer being detected. If the auxiliary light image 27A is still being detected, the program returns to instruction 222A to start another scanning sequence. From the foregoing, it will be understood that the above described program sequence 222A, 224A, 226A, 222A . . . will be repeated until the user 32A deactivates the light generating device 26A.

Considering now the normal alignment subroutine 150A in greater detail with reference to FIGS. 18A to 26A, the normal alignment subroutine 150A utilizes a bright clear white image displayed by the unit 13A in order to facilitate the alignment of the device 34A. More particularly, during the normal alignment process the calibration arrangement 9A seeks to identify a sharp change in the luminance level of the projected image and assumes such a transition is one of a set of four edge portions defining the peripheral boundaries of the projected image. The edge portions include a top edge portion 56A, a bottom edge portion 57A, a right side edge portion 58A and a left side edge portion 59A.

In order to detect an edge portion, the charge coupled device 34A must generate a reflected light image signal 35A having a sufficiently large amplitude to permit detection of substantially different contrast levels defined by clear bright images and dark substantially noncolored images. In this regard, the program enables the microprocessor 42A 1) to control the exposure time of the light sensing device 34A so that its output signal 35A has sufficient strength for contrast detection purposes; 2) to control the gain of the video path to the comparator 46A, so the comparator 46A is able to distinguish the different contrasts; and 3) to control the voltage potential of a black level signal 43A in order to assure the voltage levels of the reflected light signal 35AA are maintained within the voltage range of the comparator 46A.
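The three quantities the microprocessor 42A adjusts — exposure time, video path gain, and black level — can be gathered into one structure for reference. This sketch is illustrative only; the field names, index convention, and the starting black level of 186 at maximum gain (the one value the text gives) are taken or assumed as noted in the comments.

```python
from dataclasses import dataclass

@dataclass
class VideoPathSettings:
    """The three adjustable quantities named in the text (field names assumed)."""
    exposure_index: int = 0   # index into the sensor's exposure times; 0 = shortest
    gain_index: int = 0       # index into the amplifier gain steps; 0 = maximum gain
    black_level: int = 186    # 0-255 setting; 186 is the starting value the text
                              # gives for the maximum-gain case (Table IA)
```

The defaults reflect the starting point the subroutine uses: shortest exposure, maximum gain, and a below-maximum black level chosen so calibration does not take too long.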

The normal alignment subroutine 150A commences at an instruction 302A to set up a proper exposure time for the device 34A. In this regard, a flag is set indicating the exposure time is unknown. The program then advances to an instruction 304A that causes the microprocessor 42A to generate a signal that causes the display device 13A to override the computer generated video information and display a bright substantially noncolored image.

After the bright image is generated, the program advances to an instruction 306A, that causes the exposure time for the device 34A to be set to its minimum exposure time. From instruction 306A, the program proceeds to an instruction 308A.

When the program goes to the instruction 308A, the microprocessor 42A causes all of the light emitting diodes 70A-73A to be illuminated. The program then advances to an instruction 310A where the microprocessor 42A sets the gain of the amplifier arrangement 39A to maximum. The calibration arrangement 9A, based on the foregoing, starts the alignment with the shortest exposure time and the maximum gain setting. After the gain has been set to the maximum level, the program advances to an instruction 312A that causes the microprocessor 42A to set the internal line width register to zero. After the line width register is set to zero, the program proceeds to an instruction 314A which causes the reference level signal 48A to be set close to zero volts.

Once the reference level voltage has been established, the program advances to an instruction 316A which causes an initial or starting black level to be set relative to the present gain setting (which is set at its maximum level). Table IA, as noted earlier, indicates the relationship between the gain settings and the starting black levels. Although there are a total of two hundred and fifty five level settings for the black level, a less than maximum black level setting of 186 is selected initially because the calibration procedure takes too long to complete if the maximum level of 255 is initially set. The program then proceeds to an instruction 318A to start a scanning sequence after about a 60 millisecond delay that allows the circuits in the calibration arrangement 9A to settle. While the scanning sequence is commencing, the program advances to an instruction 320A that sets a returning address to a return location 324A. The program next executes a call instruction 322A to call the black level set routine 500A which causes the black level to be adjusted to near zero volts. When the black level set routine 500A is completed, the program returns to the previously set return address, causing the program to return to the return location 324A.

The program then advances to a decision instruction 326A to determine whether the exposure time for the alignment procedure is known. It should be noted that one of the initial alignment steps at instruction 302A caused the exposure flag to be set to the unknown setting.

If the exposure time is unknown, the program goes to an instruction 330A which sets the voltage potential of the reference level signal 48A to near its maximum level of about 10 volts. If the exposure time is known, the program goes to an instruction 328A and drops the black level setting by a fixed amount based on the gain setting. Table IIA, as noted earlier, provides the relationship between the gain settings and the decrement values applied to the black level setting.

After the black level setting is decreased, the program proceeds to the instruction 330A and sets the reference level signal at near its maximum voltage of about 10 volts. From instruction 330A the program advances to an instruction 332A and starts another scanning sequence after about a 60 millisecond delay. The program next executes a decision instruction 334A to determine whether the scanning sequence has been completed. If the sequence has not been completed, the program waits at the decision instruction 334A. When the scanning sequence is completed the program goes to a decision instruction 336A to again determine whether the exposure time is known. If the exposure time is unknown, the program proceeds to a decision instruction 338A to determine whether the reflected light image signal 35AA is greater than the reference level signal 48A. In this regard, with the gain set to maximum, and the reference level signal 48A set to maximum, the comparator 46A will generate an output signal when the reflected light image signal 35AA is greater than the reference level signal 48A. The output signal from the comparator 46A is thus indicative that, at the present exposure time setting, a video image can be detected. The exposure time is therefore known and the program advances to an instruction 340A that causes an internal flag in the microprocessor 42A to be set to indicate that the exposure time is known. As will be explained hereinafter, once the exposure time is sufficient to capture a given reflected light image signal 35A, the black level signal 43A is decreased to adjust the voltage potential of the reflected light image signal 35A to optimize the signal 35AA within the voltage range of the comparator 46A. In this regard, the program proceeds from instruction 340A to the instruction 328A which causes the black level setting to be decreased by a predetermined fixed amount as shown in Table IIA.
The program then proceeds from instruction 328A as previously described.

Referring again to the decision instruction 338A, if the potential value of the reflected light image signal 35AA is not greater than the potential value of the reference level signal 48A, the program proceeds from instruction 338A to a decision instruction 342A. A determination is made at decision instruction 342A whether a longer exposure time is available.

If a longer exposure time is not available, the program advances to an instruction 380A that will be described hereinafter. If a longer exposure time is available, the program goes to an instruction 344A that sets the exposure time to the next highest level. The program then returns to instruction 312A, and proceeds as previously described but with a longer exposure time. In this regard, it should be understood that a longer exposure time will cause the voltage potential of the output signal from the light sensing device 34A to be increased.

The normal alignment subroutine 150A continues in the manner previously described from instruction 312A through instruction 344A repeatedly; however, through each sequence, the exposure time is increased until an output signal is generated. Such an output signal is indicative that the reflected image signal 35AA is greater than the reference level signal 48A.
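The exposure escalation loop described above — try each exposure time from shortest to longest at maximum gain until the comparator reports a signal — can be sketched as follows. The function name and the predicate callback are assumptions for illustration; a `None` return corresponds to the "no light found" case handled at instruction 380A.

```python
def find_exposure(scan_detects_light, exposure_times):
    """Try exposure times from shortest to longest (gain at maximum) until the
    reflected-image signal exceeds the reference level.

    scan_detects_light(t) -> True if a scan at exposure time t produces a
    comparator output signal.  Returns the first working exposure time, or
    None if even the longest exposure detects nothing (the error case).
    """
    for t in exposure_times:          # ordered shortest first
        if scan_detects_light(t):
            return t                  # exposure time is now "known"
    return None                       # no light found at any exposure time
```

Starting from the shortest exposure ensures the first success is the minimum exposure that yields a usable signal.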

If all of the exposure times have been attempted with the gain of the amplifier arrangement 39A set to a maximum without generating an output signal, the program will proceed to an instruction 380A that will be described hereinafter. In any event, the program determines whether any light can be found. If no light is found, the program will cause an audible alarm to be energized to notify the user 32A that corrective action must be taken.

Referring again to the decision instruction 336A, if the exposure time is known, the program advances to a decision instruction 350A to determine whether the reflected light image signal 35AA is greater than the reference level signal 48A. In this regard, if the comparator 46A generates an output signal, the reflected light image signal 35AA is greater than the reference level signal 48A. The program, in response to a "clipped video signal" determination, advances to a decision instruction 352A to determine whether the last completed scanning sequence was executed with the gain of the amplifier arrangement 39A set at its lowest level.

If the gain was not set to the lowest level, the program advances to an instruction 354A which causes the microprocessor 42A to generate a select gain signal forcing the next lower gain level to be selected. The program then returns to the instruction 312A, and proceeds as previously described.

If the image just scanned was observed by the light sensing device 34A, with the gain set at its lowest level, the program goes to the instruction 380A. From the foregoing, it should be understood that with a known shortest exposure time, the calibration arrangement 9A will cause the gain setting of the amplifier arrangement 39A to be decreased repeatedly until the reflected image signal 35AA is less than the maximum setting for the reference level signal 48A.
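The gain step-down described above — with the shortest known-good exposure, lower the gain one level at a time while the signal still clips against the maximum reference level — can be sketched like this. Names and the ordering convention are assumptions.

```python
def step_gain_down(signal_clipped, gain_levels):
    """Lower the gain until the reflected-image signal no longer clips.

    signal_clipped(g) -> True if a scan at gain level g still produces a
    comparator output at the maximum reference level (signal too large).
    gain_levels is ordered highest gain first.  Returns the first non-clipping
    gain, or the lowest gain if every level clips (the instruction-380A case).
    """
    for g in gain_levels:
        if not signal_clipped(g):
            return g                  # signal now fits below the max reference
    return gain_levels[-1]            # lowest gain reached; proceed anyway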

Referring again to the decision instruction 350A, if the comparator 46A fails to generate an output signal, the reflected light image signal 35AA is less than the reference level signal 48A. Responsive to such a determination, the program proceeds to a decision instruction 360A to determine whether the gain is set at its maximum level. If the gain is set at a maximum level, the program proceeds to the instruction 380A. If the gain is not set at a maximum level, the program next executes an instruction 362A which sets the reference level signal 48A to a predetermined voltage of about 6 volts. This is the smallest acceptable reference level voltage setting (for all gain level settings) for alignment purposes. Stated otherwise, for the purpose of alignment the reflected light image signal 35AA must always be substantially greater than 6 volts.

The program next proceeds to an instruction 364A which causes another scanning sequence to be commenced. After the next scanning sequence has been commenced, the program executes a decision instruction 366A to wait for the scanning sequence to be completed.

When the scanning sequence has been completed, the program executes a decision instruction 368A to determine whether the reflected light image signal 35AA is greater than the reference level signal 48A. If the reflected light image signal 35AA is not too small, the program advances to the instruction 380A. If the reflected light image signal 35AA is too small, the program advances to an instruction 370A which causes the next higher gain level to be selected.

After the next higher gain level is set, the program advances to an instruction 371A which causes the video line width register to be reset to zero. The program then executes an instruction 372A which causes the reference level signal 48A to be set at about zero volts.

The program next executes an instruction 373A which sets a starting black level based on the gain setting as set forth in Table IA. Once the voltage potential of the starting black level signal 43A has been set, the program goes to an instruction 374A which causes another scanning sequence to be commenced. The program next executes an instruction 375A which sets a returning address for the program to a return location 377A. After setting the return location, the program advances to a call instruction 376A which causes the black level set subroutine 500A to be called. From the foregoing, it should be understood that the program causes another black level adjustment before commencing to search for a step change in the reflected light image signal 35AA.

After the black level set subroutine 500A has been executed, the program returns to the instruction 377A. The program then proceeds to an instruction 378A which causes the black level to be decreased based on the current gain setting as set forth in Table IIA.

The program then continues to the instruction 380A which initializes a set of internal registers (not shown) denoted as a midpoint of step register, a step size register, and a bottom of step register. As will be explained hereinafter in greater detail, these registers will be loaded with data that will be indicative of a step change in the luminance level of the reflected light image. The program next executes an instruction 382A which causes the reference level signal 48A to be set near zero volts. The program then proceeds to an instruction 384A to cause another scanning sequence to be commenced. The program proceeds to a decision instruction 386A to wait for the scanning sequence to be completed. When the scanning sequence is completed, the program advances to a decision instruction 388A to determine whether any light was found at the existing reference level signal setting; i.e. if an output signal was generated by the comparator 46A, the output signal would be indicative that the reflected light image signal 35AA was greater than the present reference level signal 48A.

If light is not detected at the existing reference level voltage, the program goes to a decision instruction 420A that will be described hereinafter in greater detail. If light is detected at the existing reference level voltage, the program proceeds to an instruction 400A which determines the maximum and minimum values stored in the horizontal and vertical counters 89A and 90A respectively. The maximum and minimum values are indicative of the top, bottom, left and right locations of the luminance level steps produced from the clear image generated by device 13A. The program next executes a decision instruction 402A to determine whether the stored values are about the same as determined during the previous scan. On the first pass, no values have been previously stored, so the present values will not match. Responsive to a determination that the present values are about the same as the previously stored values, the program goes to an instruction 416A as will be described hereinafter. If the values are not about the same, the program proceeds to an instruction 403A that causes the step size to be computed based on the following formula:

Step Size = (Voltage Potential of Current Reference Level Signal) - (Voltage Potential of Reference Level Signal for Saved Bottom of Step)

After computing the step size, the program proceeds to a decision instruction 404A which determines whether a step of light has been detected. It should be noted that a step of light is defined as all four edges of the projected light image being at about the same value plus or minus a given constant and the step size is greater than or equal to V, where V is 314 millivolts.
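The step-of-light test described above can be written out directly. This sketch interprets "all four edges being at about the same value plus or minus a given constant" as the four edge readings agreeing within a tolerance; the tolerance value is an assumption (the text does not give the constant), while the 314 millivolt minimum step size is from the text.

```python
STEP_MIN_MV = 314   # minimum step size V, per the text
EDGE_TOL = 3        # the "given constant" for edge agreement -- value assumed

def is_step_of_light(top, bottom, left, right, step_size_mv,
                     tol=EDGE_TOL, min_step=STEP_MIN_MV):
    """A 'step of light': all four edge readings agree to within a constant,
    and the computed step size is at least 314 mV."""
    edges = (top, bottom, left, right)
    return (max(edges) - min(edges) <= 2 * tol) and step_size_mv >= min_step
```

The two conditions together reject both ragged edges (readings that disagree) and transitions too shallow to be the image boundary.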

If a step of light has not been detected, the program goes to an instruction 414A which causes a digital value indication of the voltage potential of the current reference level signal 48A to be saved. After saving the current reference level, the program advances to the instruction 416A which causes the reference level voltage to be increased by a predetermined amount of about 78 millivolts. It should be understood that the saved reference level voltage could be indicative of the luminance level transition at the edge of the projected image; i.e. the bottom of a step of light.

Referring to the decision instruction 404A, if a step of light has been detected, the program proceeds to a decision instruction 406A to determine whether the projected image size for the present step of light is within a set of predetermined maximum and minimum levels. Stated otherwise, the microprocessor 42A determines whether the top, bottom, left and right values are within the predetermined maximum and minimum levels. In this regard, the extreme values stored in the horizontal and vertical counters 89A and 90A respectively are compared with the following maximum and minimum values:

If the size is not within the maximum, minimum values, the program goes to the instruction 414A, which causes a digital value indicative of the potential of the current reference level signal 48A to be saved as previously described. If the size is within the maximum and minimum values, the program goes to a decision instruction 408A to determine whether the present step has a value that is about the value of a previously stored step (instruction 380A initially set the step size value to zero). If the step has about the same size, the program goes to the instruction 414A and proceeds as previously described. If the step is not about the same size, the program advances to a decision instruction 410A to determine whether the size of the present step is greater than the previously stored step size (again, instruction 380A initially set the step size value to zero).

If the step size is not greater than the previously saved step size, the program goes to the instruction 414A and proceeds as previously described. If the step size is greater than the previously stored step size, the program next executes an instruction 412A, which causes a digital value indicative of the size of the step and a digital value indicative of the potential value of the reference level signal 48A at the midpoint of the step to be saved.

Next the program proceeds to the instruction 414A which causes a digital value indicative of the potential of the current reference level signal 48A to be stored as a possible value for the bottom of a step.

After executing instruction 414A the program advances to the instruction 416A which causes the voltage potential of the reference level signal 48A to be increased by a predetermined amount. After the reference level signal has been increased, the program goes to a decision instruction 418A to determine whether the potential value of the present reference level signal 48A is equal to about 10 volts or the maximum acceptable reference voltage potential for the comparator 46A.

If the reference level signal 48A is not set to the top range of the comparator 46A, the program returns to instruction 364A causing another scanning sequence. The program proceeds from instruction 364A as previously described.
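The overall reference sweep described across instructions 364A through 418A — step the reference level from near zero toward 10 volts in roughly 78 millivolt increments, and whenever the edge readings change, treat the previously saved reference as the bottom of a possible step and keep the largest valid step and its midpoint — can be loosely sketched as one loop. This is a simplification of the flowchart, with names and callbacks assumed; the 78 mV increment and 10 V ceiling are from the text.

```python
def sweep_reference(edges_at, is_step, v_max_mv=10000, dv_mv=78):
    """Sweep the reference level, remembering the largest luminance step seen.

    edges_at(ref_mv) -> edge readings at that reference level, or None if no
    light is detected there.  is_step(edges, step_mv) -> True if this change
    qualifies as a step of light.  Returns (step_mv, midpoint_mv) of the
    largest valid step, or None if no step was found (the "buzz" error case).
    """
    best = None
    bottom_mv = 0         # last reference saved as a possible bottom of step
    prev = None
    for ref in range(0, v_max_mv + 1, dv_mv):
        edges = edges_at(ref)
        if edges is None:             # no light found at this reference level
            continue
        if edges != prev:             # edge locations moved: a luminance step
            step = ref - bottom_mv    # the step-size formula from the text
            if is_step(edges, step) and (best is None or step > best[0]):
                best = (step, (ref + bottom_mv) // 2)   # save size and midpoint
            bottom_mv = ref           # current reference is a new possible bottom
            prev = edges
    return best
```

A `None` result corresponds to decision instruction 420A finding no step, after which the arrangement sounds the "buzz" alarm.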

If the reference level signal 48A is set to the top range of the comparator 46A, the program advances to the decision instruction 420A. Decision instruction 420A determines whether a step in luminance levels was found. If no step was found, the program advances to an instruction 422A which causes the calibration arrangement 9A to make a "buzz" sound notifying the user 32A that alignment was not possible. After the alarm is sounded, the program returns to instruction 302A in order to attempt another alignment. In this regard, when the buzz alarm is sounded, the user 32A must take some form of corrective action such as to darken the ambient lighting condition in the room, or to move the overhead projector 20A closer to the viewing surface 22A.

If a step of light was found at decision instruction 420A, the program next executes the instruction 424A which causes a timer to be set for continued alignment. After the timer is set, the program advances to an instruction 426A which causes an audible "chirp" sound to be produced notifying the user 32A that a step was found and camera alignment will now proceed. The program next executes an instruction 427A, that causes the voltage potential of the reference level signal 48A to be set to the midpoint value previously stored relative to the detected step in light. The program then goes to an instruction 428A, that causes another scanning sequence to be commenced. The program then proceeds to a decision instruction 430A to wait for the scanning sequence to be completed.

When the scanning sequence is completed, the program advances to a decision instruction 432A to determine whether any light is found. If no light is found, the program proceeds to an instruction 440A, that causes all of the light emitting diodes 70A-73A to be illuminated. If light is found, the program advances to a decision instruction 433A. At decision instruction 433A, a determination is made whether the center of the computer generated reflected light image 24A is within a small imaginary rectangular area of the field of view of the light sensing device 34A. If the image is centered, the program goes to an instruction 436A which causes all of the light emitting diodes 70A-73A to be turned off. This provides a visual indication to the user 32A that the device 34A has been properly aligned. If the image is not centered, the program goes to an instruction 434A that causes appropriate ones of the light emitting diodes 70A-73A to be energized for instructing the user 32A how to move the positioning device 44A in a predetermined manner; i.e. up, down, left, right or combinations thereof. The program next executes an instruction 435A which sets an alignment timeout timer (not shown). After the alignment timeout timer has been set, the program advances to an instruction 441A which causes the program to delay for a predetermined period of time. The program then returns to instruction 428A and proceeds as previously described.

From the foregoing, it should be understood that the instruction loop from instruction 428A through 441A enables the user 32A to position the device 44A so the projected image is aligned with an imaginary rectangular area in the field of view of the light sensing device 34A.

Referring to instruction 436A, after all of the light emitting diodes 70A-73A have been turned off, the program goes to a decision instruction 438A to determine whether the alignment timeout timer has completed its sequence. If the timer has not completed its sequence, the program goes to the instruction 441A and proceeds as previously described. If the timer has completed its sequence, the program advances to an instruction 442A which causes the image on the display unit 13A to be a bright clear image.

Considering now the sensitivity subroutine 300A in greater detail with reference to FIGS. 27A to 31A, the sensitivity subroutine 300A commences at an instruction 443A, that causes all of the light emitting diodes 70A-73A to be turned off. The program then advances to an instruction 444A, that sets the exposure time of the device 34A to a minimum level. A minimum exposure time is required for sensitivity alignment to assure reliable spot detection and tracking operations. From instruction 444A, the program executes an instruction 445A which sets the gain level to its maximum level. After the gain level has been set to maximum, the program goes to an instruction 446A, that causes the line width register to be reset to zero. Next an instruction 447A is executed that causes the reference level signal 48A to be set near zero volts. After the voltage potential of the reference level signal 48A is set, the program goes to an instruction 448A that sets the starting black level based on the gain setting in accordance with Table IA. The program then advances to an instruction 449A, that starts another scanning sequence after about a 60 millisecond delay to allow the calibration arrangement circuits to settle. When the scanning sequence is commenced, the program advances from instruction 449A to instruction 450A which causes the apparatus 9A to produce an audible "chirp" sound to indicate the optical auxiliary input system 10A is in alignment. The program next executes an instruction 451A that sets a return address to a return location 453A. The program proceeds from the instruction 451A to a call instruction 452A which calls the black level set subroutine 500A. After the black level set subroutine 500A has been executed, the program returns to the return location 453A and proceeds from thence to an instruction 454A.
Instruction 454A sets the voltage potential of the reference level signal 48A to about 4.0 volts for detecting a reflected light image signal 35AA having auxiliary light information. Setting the reference level signal 48A to this predetermined potential level is necessary to adjust the gain for a particular desired signal level. After setting the reference level signal 48A to the desired potential, the program proceeds to an instruction 455A which commences another scanning sequence. The program then proceeds to a decision instruction 456A to wait for the scanning sequence to be completed. When the scanning sequence is completed, the program proceeds to a decision instruction 457A to determine whether the selected gain level is too large. In this regard, an excessively large gain setting would preclude detecting that portion of the reflected light information signal 35A that is indicative of the auxiliary light information. It should be noted that the determination is based upon the difference between the maximum and minimum values stored in the horizontal counter exceeding a prestored constant.

If the gain is too large, the program goes to a decision instruction 458A to determine whether a lower gain setting is available. If a lower gain setting is available, the program then advances to an instruction 459A, that causes the next lower gain to be selected.

After selecting the lower gain level, the program returns to instruction 451A and proceeds as previously described.

If the gain is not too large as determined at instruction 457A, the program goes to an instruction 461A that will be described hereinafter.

Referring again to the decision instruction 458A, if a lower gain is not available, the program proceeds to an instruction 460A, that causes the reference level signal 48A to be set to a maximum value. The program then goes to the instruction 461A, that causes another scanning sequence.

The program next executes a decision instruction 462A, to determine when the scanning sequence has been completed. When the scanning sequence has been completed, the program goes to a decision instruction 463A to determine whether the maximum level of the reflected light image signal 35AA has been found. If the top of the reflected light image signal 35AA has been found, the program proceeds to an instruction 466A, as will be described hereinafter. If the top of the reflected light image has not been found, the program proceeds to an instruction 464A which reduces the potential value of the reference level signal 48A by a predetermined amount. The program then advances to a decision instruction 465A to determine whether the potential of the reference level signal 48A has been set to a minimum value. If the signal 48A has not been set to a minimum value, the program returns to the instruction 461A, starting another scanning sequence, and proceeds as previously described. If the reference level signal 48A has been set to a minimum value, the program proceeds to the instruction 466A.
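The downward search described above — lower the reference level by a fixed amount per scan until the comparator first reports light, taking that reference as the top of the reflected-image signal — can be sketched as follows. The function name, the 78 mV decrement (borrowed from the earlier sweep), and the limits are assumptions.

```python
def find_signal_top(light_at, v_start_mv=10000, dv_mv=78, v_min_mv=0):
    """Step the reference level down from its maximum until light is detected.

    light_at(ref_mv) -> True if the comparator reports the reflected-image
    signal exceeds this reference level.  The first reference at which light
    appears approximates the top (maximum) of the signal.  Returns v_min_mv
    if the minimum is reached without detection.
    """
    ref = v_start_mv
    while ref > v_min_mv:
        if light_at(ref):
            return ref                # first crossing: approximate signal top
        ref -= dv_mv                  # reduce the reference by a fixed amount
    return v_min_mv                   # minimum reached; proceed regardless
```

Because the reference only moves downward, the first detection bounds the signal's peak from below within one decrement.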

At instruction 466A, the microprocessor 42A generates a signal that causes the image displayed by the display unit 13A to be set to a dark level. The program then advances to an instruction 467A which saves a digital value indicative of the voltage potential of the present reference level signal 48A as a possible maximum potential value for the reflected light image signal 35AA while the display 13A is generating a bright clear image. The program next executes an instruction 468A which causes another scanning sequence to be started. The program then advances to a decision instruction 469A and waits for the scanning sequence to be completed.

When the scanning sequence is completed, the program advances to a decision instruction 470A to determine whether a maximum level of the reflected image signal 35AA has been determined for the dark reflected image. If the maximum level of the reflected image signal 35AA is not established, the program proceeds to an instruction 471A, that causes the potential of the reference level signal 48A to be decreased by a predetermined amount. The program next determines, at a decision instruction 472A, whether the potential of the reference level signal 48A is at a minimum level.

If the potential of the reference level signal 48A is not at a minimum level, the program returns to instruction 468A to commence another scanning sequence for detecting the maximum level of the reflected image signal 35AA. If the reference level signal 48A is at a minimum potential, the program advances to an instruction 473A which allows the display unit 13A to display the normal computer generated image 24A in lieu of the dark image.

Referring again to decision instruction 470A, if the maximum level is found for the dark image, the program goes to the instruction 473A and proceeds as previously described; i.e. the display unit 13A is permitted to display the normal computer generated image 24A instead of the dark image.
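The maximum-level search used for both the bright and dark image passes above amounts to lowering the comparator reference level step by step until the reflected image signal crosses it. The sketch below is a simplified model, not the firmware itself; the scan callback, step size, and digital level range are hypothetical stand-ins for the charge coupled device scan and the reference level converter.

```python
def find_signal_maximum(scan_exceeds_reference, start_level=255, step=5, min_level=0):
    """Lower the reference level from its maximum until the reflected
    image signal is detected, i.e. until the top of the signal has been
    found, mirroring instructions 461A-465A and 468A-472A.

    scan_exceeds_reference(level) stands in for one full scanning
    sequence; it returns True when any part of the reflected light
    image signal exceeds `level`.
    """
    level = start_level
    while level > min_level:
        if scan_exceeds_reference(level):   # top of the signal found
            return level                    # saved as the maximum potential value
        level -= step                       # instruction 464A / 471A
    return min_level                        # floor reached without detection

# Example with a simulated signal whose true peak is 180 (hypothetical units):
print(find_signal_maximum(lambda level: 180 >= level))
```

Because the level starts high and only decreases, the first level the signal crosses is, to within one step, the signal's maximum.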

The program proceeds from instruction 473A to a decision instruction 474A to determine whether the display unit 13A is a known model type. If the unit 13A is a known model, the program proceeds to an instruction 475A which causes a pair of optical correction factors to be retrieved from a look-up table. If the unit 13A is not a known model, the program proceeds to an instruction 476A which causes the calibration arrangement 9A to communicate with the display unit 13A for the purpose of receiving the correction factors indicative of its display characteristics. Table IIIA illustrates the optical correction factors for three types of liquid crystal display units manufactured and sold by Proxima Corporation of San Diego, California.

Table IIIA
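Since the body of Table IIIA does not survive in this text, the lookup it supports (instruction 475A) and the fallback query (instruction 476A) are sketched below. The model names echo the two units plotted in FIG. 33A, but every factor value is a placeholder, not data from the table.

```python
# Hypothetical optical correction factors keyed by display model.
# The actual values of Table IIIA are not reproduced in this text.
CORRECTION_FACTORS = {
    "A722": (1.0, 0.0),   # (Factor 1, Factor 2) -- placeholder values
    "A822": (1.2, 0.1),   # placeholder values
}

def get_correction_factors(model, query_display=None):
    """Instruction 475A: known models come from the look-up table;
    otherwise (instruction 476A) the display unit itself is asked for
    the factors indicative of its display characteristics."""
    if model in CORRECTION_FACTORS:
        return CORRECTION_FACTORS[model]
    if query_display is not None:
        return query_display(model)   # communicate with the display unit
    raise KeyError(f"unknown display model {model!r} and no query path")

print(get_correction_factors("A722"))
```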

It should be noted, as mentioned earlier, that there is a given relationship between the various reflected light image signals 35A indicative of bright clear images, dark images, and those portions of the corresponding reflected light image signal 35A which are indicative of auxiliary light information produced from an incandescent light source, a low intensity laser light source, and a high intensity laser light source. The relationship also extends to the projection direction of the reflected light image; i.e. front projection or rear projection. In this regard, the following formula has been determined experimentally for the different types of displays indicated in Table IIIA while displaying dark images, bright or clear light images, and auxiliary light images produced from incandescent and laser light sources having different luminance levels:

    Voltage Potential of Reference Level Signal =
        Factor 1 x (Maximum Voltage Potential of Clear Light Reflected Image Signal
                    - Maximum Voltage Potential of Dark Light Reflected Image Signal)
        + Maximum Voltage Potential of Clear Light Reflected Image Signal
        + Factor 2

The above mentioned formula was derived by plotting the minimum voltage potential of the reference level signal to distinguish between a high intensity auxiliary light information beam and a low intensity auxiliary light information beam as a function of the difference between the voltage potential of the reference level when a bright image is displayed and when a dark image is displayed. FIG. 33A is a graphical representation for two of the display units listed in Table IIIA. More particularly, the A722 is represented by a graph 98A and the A822 is represented by a graph 99A.

Considering the computation of the reference level voltage in greater detail with reference to FIG. 33A, when Factor 1 equals one and Factor 2 equals zero, the previously mentioned equation reduces to a basic formula given by y = b + c, where:

    b = the reference level voltage relative to the bright image information signal 60A;

    c = the difference between the reference level voltage relative to the bright image information signal 60A and the reference level voltage relative to the dark image information signal 61A; and

    d = the reference level voltage relative to the dark image information signal 61A.

From the basic equation (y = b + c), it can be determined readily that the low intensity auxiliary light information, indicated generally at 68A, must be less than c to avoid being passed by the comparator 46A when the reference level voltage is set at y volts.

In a similar manner, it also can be determined readily that high intensity auxiliary light information, indicated generally at 69A, must be greater than y in order to be passed by the comparator 46A when the reference level voltage is set at y volts. Thus, the voltage levels for the low intensity auxiliary light information 68A (BLOW/MAX) and the high intensity auxiliary light information 69A (BHIGH/MIN) can be expressed as follows:

    Minimum Voltage for High Intensity Beam = y - d = BHIGH/MIN

    Maximum Voltage for Low Intensity Beam = c = BLOW/MAX

From the foregoing, it should be understood that BLOW/MAX must always be less than c, the voltage difference defined by the contrast of a given panel. Similarly, it should be understood that BHIGH/MIN must always be greater than b + c. In order to adjust for different types of display units, the factors Factor 1 and Factor 2 are introduced into the above mentioned basic formula, so that the equation for the computed reference voltage becomes y = mx + z, where m = Factor 1. From the foregoing, it should be understood that the signal information for the low beam auxiliary light information will never be passed by the comparator 46A.

After the optical correction factors have been obtained, the program proceeds to an instruction 477A which computes the reference level voltage based on the previously mentioned formula.
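Assuming the reconstruction used here, in which the reference level is y = Factor 1 * (b - d) + b + Factor 2, with b and d the maximum bright and dark image signal potentials, the computation step reduces to one line of arithmetic. Both the reading of the garbled formula and the numeric voltages below are assumptions.

```python
def compute_reference_level(bright_max, dark_max, factor1=1.0, factor2=0.0):
    """Compute the comparator reference level from the bright- and
    dark-image signal maxima. With factor1 = 1 and factor2 = 0 this
    reduces to the basic formula y = b + c, where c = bright_max -
    dark_max is the panel contrast term."""
    c = bright_max - dark_max
    return factor1 * c + bright_max + factor2

# Basic case: b = 3.0 V, d = 1.0 V, so c = 2.0 V and y = b + c = 5.0 V
print(compute_reference_level(3.0, 1.0))
```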

The program then proceeds to a decision instruction 478A to determine whether the computed reference level voltage exceeds the maximum permissible potential of the reference level signal 48A. If the potential is not too high, the program goes to an instruction 483A which causes the reference level signal 48A to be set to the computed voltage potential. After the voltage potential of the reference level signal 48A is set, the program goes to an instruction 484A which causes a series of audible sounds ("chirp," short "beep," "chirp" followed by short "beep") to notify the user 32A that the system is ready for corner location calibration. The program then goes to a call instruction 485A which calls the corner calibration routine more fully described in copending U.S. patent application Serial No. 07/611,416.

If the potential of the reference level signal 48A is too large, the program proceeds from instruction 478A to an instruction 479A, that forces the displayed image to a bright clear image. The program next executes a decision instruction 480A to determine whether a lower gain is available. If a lower gain is not available at the decision instruction 480A, the program goes to an instruction 482A which forces the potential value for the reference level signal 48A to a maximum potential.

If a lower gain is available at decision instruction 480A, the program proceeds to an instruction 481A which causes the gain to be set to the next lower level. After the gain is set at the lower level, the program returns to instruction 451A and proceeds as previously described. Referring again to the instruction 482A, after instruction 482A is executed the program goes to the instruction 484A and proceeds as previously described.
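The bounds check of instruction 478A together with the gain fallback of instructions 479A through 482A amounts to the following control loop. This is a sketch only; the gain levels, the comparator ceiling, and the measurement callback are hypothetical.

```python
def set_reference_level(compute_reference, gains, max_reference):
    """Try each amplifier gain from highest to lowest until the
    computed reference level fits within the comparator's range
    (instructions 478A-481A). If no gain fits, clamp the reference
    level to its maximum potential (instruction 482A).

    compute_reference(gain) stands in for re-running the calibration
    measurement at the given amplifier gain.
    """
    for gain in gains:                  # highest gain first
        level = compute_reference(gain)
        if level <= max_reference:      # decision 478A passes
            return gain, level          # instruction 483A: use computed value
    return gains[-1], max_reference     # instruction 482A: force to maximum

# Hypothetical example: the reference scales with gain; ceiling is 5.0 V.
gain, level = set_reference_level(lambda g: 1.5 * g, gains=[4, 2, 1], max_reference=5.0)
print(gain, level)
```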

From the foregoing, it will be understood by those skilled in the art that if the computed reference level voltage is greater than the range of the comparator 46A, the program, via the steps described in instructions 478A, 479A, 480A and 481A, causes the gain of the amplifier arrangement 39A to be decreased in order to recalculate an appropriate potential for the reference level signal 48A.

Considering now the black level set routine 500A in greater detail with reference to FIG. 32A, the black level set routine 500A illustrates the steps taken by the microprocessor 42A to offset the reflected light information signal 35AA so that it is adjusted to the operating range of the comparator 46A. The black level set routine 500A starts at a commence instruction 502A and proceeds to a decision 504A to determine whether the charge couple device 34A has completed its scan. If the scan has not been completed, the program waits at decision 504A until the scan is completed.

When the scan is completed, the program proceeds to a decision instruction 506A to determine whether the widest line detected during the last scan period is greater than the last saved widest line. In this regard, if any part of the computer generated image 24A is detected, it will result in a scan line greater than zero.

If the widest line detected is larger than the last saved line width, the program advances to an instruction 508A that causes the microprocessor 42A to save the new wider line information and the current black level setting. The program then proceeds to an instruction 514A that causes the voltage potential of the black level signal 43A to be dropped by a given amount based on the present gain setting. Table IIA shows the relationship between the gain and black level settings.

Considering decision instruction 506A once again, if the widest line of the last performed scan is not greater than the last saved line, the program proceeds to decision instruction 512A to determine whether the current widest line is less than the last saved line less a constant K. If the current widest line is not, the program goes to the instruction 514A that causes the black level to be dropped by a given amount based on the gain setting. As the gain setting at this initial time is set at its lowest level, the black level is dropped by 16 levels.

Table IIA

Considering decision 512A again, if the current widest line saved is less than the saved line minus a predetermined constant K, the program advances to an instruction 513A. At the instruction 513A, the black level output is saved for the widest line. The program then goes to a return instruction 515A which causes the program to return to a predetermined location.

Referring once again to the instruction 514A, after instruction 514A is executed, the program goes to a decision instruction 516A to determine whether the black level is set below a predetermined minimum value. If the black level is not below the minimum value, the program proceeds to instruction 518A, which causes the microprocessor 42A to output the black level signal. After the black level signal 43A is generated, the program proceeds to an instruction 520A to start another scan sequence after about a 60 millisecond delay. The program then returns to the commence instruction 502A and proceeds as previously described.

At decision 516A, if the black level signal 43A is set below the minimum value, the program advances to a decision 522A to determine whether the saved widest black line is greater than zero. If the widest black line is greater than zero, the program goes to instruction 513A and proceeds as previously described. If the widest black line is not greater than zero, the program goes to a decision 524A to determine whether the constant to decrease the black level is less than two. If the constant is less than two, the program proceeds to an instruction 526A. At instruction 526A, the black level output is set to its minimum value. From instruction 526A the program goes to instruction 513A and proceeds as previously described.
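The core of the black level set routine 500A can be sketched as a descent loop: lower the offset, rescan, and keep the setting that yielded the widest detected line, stopping once the width falls a constant K below the best seen (decision 512A). This simplification folds the scan into a callback and omits the constant-adjustment branches of instructions 522A through 528A; the step size and level range are hypothetical.

```python
def set_black_level(widest_line_at, start=255, step=16, minimum=0, k=2):
    """Sweep the black level downward, tracking the setting that
    produced the widest scan line, and stop once the current widest
    line falls below the saved width minus K.

    widest_line_at(level) stands in for one scan of the charge
    coupled device at the given black level offset.
    """
    best_width, best_level = 0, start
    level = start
    while level >= minimum:
        width = widest_line_at(level)
        if width > best_width:             # instruction 508A: save wider line
            best_width, best_level = width, level
        elif width < best_width - k:       # decision 512A: past the peak
            break
        level -= step                      # instruction 514A: drop the level
    return best_level                      # instruction 513A: saved setting

# Hypothetical detector response peaking near a black level of 128:
print(set_black_level(lambda lvl: max(0, 100 - abs(lvl - 128) // 4)))
```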

Referring again to decision instruction 524A, if the constant is not less than two, the program goes to an instruction 528A that causes the constant to be decreased and the black level signal 43A to be reset to its maximum potential. After executing instruction 528A, the program goes to instruction 518A and proceeds as previously described.

Referring now to the drawings and more particularly to FIG. 4A, there is shown a calibration arrangement 109A for calibrating an optical auxiliary input system 110A, and which is constructed in accordance with the present invention. The optical auxiliary input system 110A is substantially similar to the optical auxiliary input system 10A and is not shown for clarification purposes.

Considering now the calibration arrangement 109A in greater detail with reference to FIG. 4A, the calibration arrangement 109A includes a signal amplifier circuit 139A and a signal discrimination arrangement 14OA. The discrimination arrangement 140A is similar to the arrangement 40A and is not shown for clarification purposes.

Considering now the signal amplifier circuit 139A in greater detail with reference to FIG. 4A, the signal amplifier circuit 139A generally includes an operational amplifier 176A having a pair of input terminals 178A and 179A, and a variable feedback element 181A. The variable feedback element 181A is coupled between the input terminal 178A and an output terminal 182A of the operational amplifier 176A and is controlled by a microprocessor 142A forming part of the signal discrimination arrangement 140A. In this regard, the microprocessor 142A generates a gain control signal 135A that selects the gain of the operational amplifier 176A via the variable feedback element 181A. The variable feedback element 181A is a digital potentiometer that enables up to four discrete gain factors to be selected. Although in the preferred embodiment of the present invention the variable feedback element 181A is a digital potentiometer, it will be understood by those skilled in the art that other types and kinds of variable feedback elements, such as a digital to analog converter or a digital gain chip, can be employed. It should also be understood that additional amplifier stages can also be employed to provide intermediate gain levels.

As best seen in FIG. 4A, the input terminal 179A is coupled to a black level signal 143A generated by the microprocessor 142A. The black level signal 143A, enables the output signal of the operational amplifier 176A to be offset.

Referring now to the drawings and more particularly to FIG. 5A, there is shown a calibration arrangement 209A for calibrating an optical auxiliary input system 210A, and which is constructed in accordance with the present invention. The optical auxiliary input system 210A is substantially similar to the optical auxiliary input system 10A and is not shown for clarification purposes.

Considering now the calibration arrangement 209A in greater detail with reference to FIG. 5A, the calibration arrangement 209A includes a signal amplifier circuit 239A and a signal discrimination arrangement 240A. The discrimination arrangement 240A is similar to the arrangement 40A and is not shown for clarification purposes.

Considering now the signal amplifier circuit 239A in greater detail with reference to FIG. 5A, the signal amplifier circuit 239A generally includes an operational amplifier 276A having a pair of input terminals 278A and 279A, and a voltage controlled device 281A. The voltage controlled device 281A is coupled between the input terminal 278A and an output terminal 282A of the operational amplifier 276A and is controlled by a microprocessor 242A forming part of the signal discrimination arrangement 240A. In this regard, the microprocessor 242A is similar to microprocessor 42A and generates a gain control signal 235A that selects the gain of the operational amplifier 276A via the voltage controlled device 281A. The voltage controlled device 281A is a voltage controlled impedance device that enables a plurality of gain factors to be selected.

As best seen in FIG. 5A, the input terminal 279A of the operational amplifier 276A is coupled to a black level signal 243A generated by the microprocessor 242A. The black level signal 243A, enables the output signal of the operational amplifier 276A to be offset.

Referring now to the drawings and more particularly to FIG. 6A, there is shown a calibration arrangement 309A for calibrating an optical auxiliary input system 310A, and which is constructed in accordance with the present invention. The optical auxiliary input system 310A is substantially similar to the optical auxiliary input system 10A and is not shown for clarification purposes.

Considering now the calibration arrangement 309A in greater detail with reference to FIG. 6A, the calibration arrangement 309A includes a signal amplifier circuit 339A and a signal discrimination arrangement 340A. The discrimination arrangement 340A is similar to the arrangement 40A and is not shown for clarification purposes.

Considering now the signal amplifier circuit 339A in greater detail with reference to FIG. 6A, the signal amplifier circuit 339A generally includes an operational amplifier 351A having a pair of input terminals 352A and 353A, and a feedback resistor 354A for high gain operation. The feedback resistor 354A is connected from the input terminal 352A to an output terminal 356A of the operational amplifier 351A. One of the input terminals 352A is connected, via a conductor 355A, to a black level signal 343A generated by a microprocessor 342A forming part of the signal discrimination arrangement 340A. The black level signal 343A functions as an offset voltage for the amplifier 351A.

The other one of the input terminals 353A is connected to a voltage controlled impedance device 362A for helping to control the gain of the operational amplifier 351A.

The voltage controlled impedance device 362A has a pair of input terminals 363A and 364A. One of the input terminals 363A is connected to a gain select signal 347A generated by the microprocessor 342A. The gain select signal 347A causes the impedance of the device 362A to be either high or low for attenuating the input signal to the amplifier 351A, as will be explained hereinafter. The other one of the input terminals 364A is connected to a reflected light information signal 335A generated via the optical auxiliary input system 310A.

In operation, the feedback resistor 354A has a predetermined impedance that is selected to cause the operational amplifier 351A to have a maximum gain characteristic. The voltage controlled impedance device 362A is connected in the input path to the operational amplifier 351A and functions as an attenuator. In this regard, when the impedance of the device 362A is low, the input signal to the amplifier 351A is not attenuated and the output signal of the amplifier 351A has its maximum potential. Conversely, when the impedance of the device 362A is high, the input signal to the amplifier 351A is attenuated, causing the output signal of the amplifier 351A to have its minimum potential.
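The attenuator behavior described above can be modeled in a few lines: the feedback resistor fixes the amplifier at maximum gain, and the gain select signal only chooses whether the input is attenuated first. The gain value and attenuation ratio below are hypothetical.

```python
def amplifier_output(input_signal, attenuate, max_gain=10.0, attenuation=0.1):
    """Model of the fixed-gain amplifier 351A fed through the voltage
    controlled impedance device 362A. A high impedance attenuates the
    input; a low impedance passes it through unchanged. The amplifier
    gain itself never varies."""
    effective_input = input_signal * (attenuation if attenuate else 1.0)
    return max_gain * effective_input

print(amplifier_output(0.5, attenuate=False))  # low impedance: full output
print(amplifier_output(0.5, attenuate=True))   # high impedance: attenuated output
```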

Referring now to the drawings, and more particularly to FIGS. 1B and 2B, there is illustrated an optical auxiliary input arrangement, generally indicated at 9B, for emulating a mouse 10B employed in an optical system generally indicated at 11B, and which is constructed in accordance with the present invention.

The optical system 11B, is more fully described in the above mentioned U.S. patent application Serial No. 07/433,029 and includes a video information source, such as a personal computer 12B, and a liquid crystal display unit 13B for displaying a primary image 24B indicative of the primary image information generated by the computer 12B. The liquid crystal display unit 13B is positioned on the stage of an overhead projector 20B for enabling the displayed primary image information to be projected onto a viewing surface, such as a screen 22B.

The optical system 11B also includes a light sensing device, such as a raster scan charge coupled device or camera 34B for generating a reflected light information signal 35B indicative of the luminance levels of the video images and other light reflecting from the surface of the screen 22B.

As best seen in FIGS. 1B and 2B, the optical auxiliary input arrangement 9B generally includes a user actuated dual intensity laser beam light generating device 26B for generating auxiliary light information, such as a spot of reflected light 27B, for emulating the mouse 10B and for facilitating the modifying or changing of the primary image information displayed by the liquid crystal display unit 13B.

The optical auxiliary input arrangement 9B also includes a signal processing unit 28B coupled between the light sensing device 34B and the computer 12B for converting the auxiliary light information detected by the device 34B into coordinate information for emulating the mouse 10B. The signal processing unit 28B is substantially similar to signal processing unit 28A and will not be described in greater detail.

The optical auxiliary input arrangement 9B further includes a communication interface, generally indicated at 45B, that enables both the low speed mouse 10B and the high speed light generating device 26B, via the signal processing unit 28B, to communicate with the computer 12B at substantially different baud rates and data formats. In this regard, while the mouse 10B normally communicates with the computer 12B at a baud rate of about 1200 characters per second, the light generating device 26B, via the communication interface 45B, communicates with the computer 12B at a baud rate of about 9600 characters per second. This accelerated baud rate facilitates the tracking of the auxiliary light information entered by a user via the light generating device 26B.
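The benefit of the faster interface is easy to quantify. The sketch below assumes a 3-byte coordinate report and treats "characters per second" as bytes per second; both the report size and that reading are assumptions, since the data format is not spelled out here.

```python
def reports_per_second(chars_per_second, report_bytes=3):
    """How many coordinate reports fit through the link each second,
    assuming each report occupies `report_bytes` characters (a
    hypothetical report size)."""
    return chars_per_second // report_bytes

mouse_rate = reports_per_second(1200)   # ordinary mouse link
light_rate = reports_per_second(9600)   # communication interface 45B
print(mouse_rate, light_rate, light_rate // mouse_rate)
```

Under these assumptions the faster link carries eight times as many position updates per second, which is what makes tracking a moving light spot practical.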

For the purpose of enabling the light generating device 26B to emulate the mouse 10B, the optical auxiliary input arrangement 9B also includes a direct image double click algorithm 150B and a below screen double click algorithm 500B for enabling the light generating device 26B to simulate double click mouse operations, and a baud rate algorithm 300B for controlling the baud rate of the communication interface 45B. The firmware for the baud rate and data format algorithm 300B is located partially within the optical auxiliary input arrangement 9B and partially within the computer 12B.

Considering now the operation of the optical input arrangement 9B, when the computer 12B commences generating video information, the liquid crystal display unit 13B generates an initiation signal that is coupled to the signal processing unit 28B which beeps to notify the user that he or she may initiate an alignment procedure which is more fully described herein.

In this regard, the user depresses an alignment button 55B that causes a series of visual prompts to be generated for informing the user how to adjust the position of the light sensing device 34B to capture the entire projected image 24B. Once the user has adjusted the position of the device 34B, the user calibrates the signal processing unit 28B by identifying the corner locations of the image 24B with the light generating device 26B. In this regard, the user causes a spot of light to be reflected on and off at each respective corner of the image 24B so the signal processing unit 28B will be able to generate accurate and reliable coordinate information in response to the detection of a spot of light produced by the device 26B. This calibration process is more fully described in copending U.S. patent application 07/611,416 and will not be described in further detail.

It should be understood by those skilled in the art, that since the field of view 25B of the device 34B is substantially larger than the image 24B, certain ones of the raster scan coordinates of the field of view of the device 34B are outside of the image 24B. These extraneous raster scan coordinates are utilized to facilitate double click mouse features via the below screen double click algorithm 200B as will be explained hereinafter.

At the end of the calibration process, the signal processing unit 28B generates an initialization signal that enables the light generating device 26B to emulate a mouse.

Considering now the signal processing unit 28B in greater detail with reference to FIGS. 1B and 2B, the signal processing unit 28B generally includes a signal amplifier circuit 39B for increasing the strength of the reflected light information signal 35B generated by the light sensing device 34B, and a signal discrimination apparatus, generally indicated at 40B, for discriminating auxiliary light information from the other information components in the reflected light information signal 35B. The signal discrimination apparatus 40B includes a comparator 46B for facilitating discriminating between signals indicative of the various sources of light reflecting from the viewing surface 22B, and a microprocessor 42B (FIG. 2B) for controlling a reference level signal 48B utilized by the comparator 46B for discrimination purposes. In this regard, for discrimination purposes, it should be understood that the light reflecting from the viewing surface 22B has a plurality of luminance levels, generally including background ambient light, primary image light, such as the image 24B, indicative of primary image information, and user 32B generated auxiliary image light, such as the spot of light 27B, indicative of auxiliary light information.

The microprocessor 42B also controls the exposure rate of the light sensing device 34B, the gain selection for the amplifier arrangement 39B, and an offset black level signal 43B that is more fully described herein.

Considering now the signal discrimination apparatus 40B in greater detail with reference to FIG. 2B, the signal discrimination apparatus 40B controls the exposure rate of the charge couple device 34B, the reference level signal 48B for the comparator 46B, and the black level and gain select signals for the signal amplifier arrangement 39B. In this regard, in order to convert the digital control signals produced by the microprocessor 42B to analog voltages, the signal discrimination apparatus 40B includes a set of digital to analog converters, including a reference level signal converter 82B coupled to a positive input terminal of the comparator 46B, and a black level signal converter 83B coupled to the input of the amplifier arrangement 39B. As best seen in FIG. 2B, the signal discrimination apparatus 40B also includes a counter arrangement 86B and a timing generator 88B. The microprocessor 42B controls the exposure time via the timing generator 88B.

The counter arrangement 86B includes a horizontal counter and latching arrangement 89B and a vertical counter arrangement 90B. The counter arrangement 86B is synchronized with a raster scan sequence of the charge coupled device by a pixel clock generated by the timing generator 88B. In this regard, the microprocessor 42B and timing generator 88B cooperate together to control the exposure rate and scanning sequence of the charge couple device 34B. More particularly, they cooperate together so that the device 34B will produce an output signal of sufficient magnitude in response to the detection of light.

Considering now the double click algorithm 150B in greater detail with reference to FIGS. 3B to 5B, the double click algorithm 150B commences at a start instruction 152B (FIG. 4B) that is entered when the microprocessor 42B has been calibrated to generate raster scan information corresponding to pixel coordinate information of the image 24B.

The program proceeds from instruction 152B to an instruction 154B, that causes an internal memory location of the microprocessor 42B designated as "saved spot" to be initialized for the purpose of storing coordinate locations of auxiliary light information. The program next proceeds to an instruction 156B, that causes an internal spot timer 64B to be cleared and a spot on/off flag to be reset to off. In this regard, when the spot on/off flag is set to "on" the flag is indicative that a previous spot of auxiliary light, such as the spot 27B, was detected by the device 34B and processed by the microprocessor 42B. If the spot on/off flag is set to "off," the flag is indicative that a previous spot of auxiliary light was not detected by the light sensing device 34B.

After instruction 156B is executed, the program proceeds to an instruction 158B that causes the data from the charge coupled device 34B to be scanned by the microprocessor 42B during a scanning sequence. The program then goes to a decision instruction 160B to determine whether the scanning sequence has been completed. If the scanning sequence is not completed, the program waits at instruction 160B.

When the scanning sequence is completed, the program advances to a decision instruction 162B, to determine whether a spot of auxiliary light was detected. If a spot of auxiliary light was not detected, the program proceeds to a decision instruction 164B to determine whether the spot on/off flag was off. If a spot of auxiliary light was detected, the program proceeds to a decision instruction 172B to determine whether the spot on/off flag was "on." If the spot on/off flag was "off" at decision instruction 164B, the program advances to an instruction 166B that causes the internal spot timer 64B to be advanced by one time increment. The program then goes to a decision instruction 168B (FIG. 5B) , to determine whether the light generating device 26B has been deactivated for greater than a certain predetermined period of time t, where t is between about 0.5 second and 1.0 seconds. A more preferred time t is between about 0.6 seconds and 0.9 seconds, while the most preferred time t is about 0.75 seconds.

If the light generating device 26B has been deactivated for longer than time t, the program advances to an instruction 170B that causes the microprocessor 42B to set an internal move flag, which permits the user to move the position of the auxiliary spot of light 27B within an imaginary rectangular area of m by n raster scan pixel locations, such as an area 29B, for double click simulation purposes. If the light generating device 26B has not been deactivated for longer than time t, the program returns to the instruction 158B to start another scanning sequence. The program then proceeds from instruction 158B as previously described. Referring again to the decision instruction 172B, if the spot on/off flag was "off," the program goes to an instruction 174B that causes the internal spot timer 64B to be cleared. The program then advances to an instruction 176B. If the spot on/off flag was on at decision instruction 172B, the program advances directly to the instruction 176B, which causes the timer 64B to be advanced by one time increment. The program then proceeds to an instruction 178B, that causes the spot on/off flag to be set to its "on" condition. After instruction 178B is executed, the program goes to a decision instruction 180B to determine whether the timer 64B has exceeded a predetermined period of time T, where T is between about 0.5 seconds and 1.0 seconds. A more preferred time T is between about 0.6 seconds and 0.9 seconds, while the most preferred time T is about 0.75 seconds.

If the timer 64B has not exceeded the predetermined period of time T, the program advances to a decision instruction 184B. If the timer 64B has exceeded the predetermined period of time T, the program advances to an instruction 182B that causes the move flag to be set to permit the user to move the position of the auxiliary spot of light 27B within an imaginary rectangular area, such as the area 29B for double click simulation purposes. In this regard, as previously noted the imaginary rectangular area is m pixels wide and n pixels tall, where m is about 12 pixels and n is about 6 pixels. A more preferred m is about 8 pixels and a more preferred n is about 4 pixels. The most preferred m pixels and n pixels is about 4 pixels and 2 pixels respectively.
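The "near" test used throughout the double click logic, namely whether the current spot lies within the m-by-n imaginary rectangle around the saved spot, can be sketched as follows. Whether the rectangle is centered on the saved location or anchored at it is not stated in the text, so centering is an assumption here; the default sizes are the most preferred values quoted above.

```python
def spot_is_near(saved, current, m=4, n=2):
    """True when `current` lies within an imaginary rectangle m pixels
    wide by n pixels tall centered on `saved`. Coordinates are (x, y)
    raster scan pixel locations."""
    dx = abs(current[0] - saved[0])
    dy = abs(current[1] - saved[1])
    return dx <= m // 2 and dy <= n // 2

print(spot_is_near((100, 50), (101, 50)))   # within the rectangle
print(spot_is_near((100, 50), (110, 50)))   # drifted too far horizontally
```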

From the foregoing, it will be understood by those skilled in the art that, so long as the user is able to keep the spot of light 27B within the imaginary rectangular area surrounding an initial detection location, the microprocessor 42B will seek to determine whether the user is attempting to execute a double click operation.

Referring now to decision instruction 184B, if the move flag is not set, the program advances to decision instruction 186B, to determine whether a previously saved auxiliary spot location is near the current auxiliary spot location; i.e. is the current spot of auxiliary light within the imaginary rectangular area from where the previous spot of auxiliary light was detected. If the current spot of auxiliary light is within the imaginary area, the program advances to an instruction 188B (FIG. 5B) . If the current spot of auxiliary light is not within the imaginary area, the program advances to an instruction 187B (FIG. 5B) .

For explanation purposes, it should be understood that the double click feature requires the user to activate, deactivate, activate, and deactivate the light generating device 26B, while holding the auxiliary light beam sufficiently steady to cause the spot to remain within an imaginary rectangular area, such as the area 29B, while the above mentioned sequence is completed.

Referring again to decision instruction 186B, as the saved position was initialized at instruction 154B, the program proceeds from the decision instruction 186B to the instruction 187B that causes the move flag to be set. Instruction 187B also causes the microprocessor 42B to store the raster scan location of the current auxiliary spot as a saved spot location. The program then proceeds to an instruction 191B that converts the raster scan location into coordinate information that corresponds to image coordinates.

After the raster scan location has been converted into coordinate information, the program advances to an instruction 193B that causes the communication interface 45B to transmit the coordinate information to the computer 12B.

Referring again to decision instruction 184B, if the current spot of auxiliary light has not been on for more than T seconds, and the spot on/off flag is on, the program advances to an instruction 189B as the move flag will have been set at instruction 182B. Instruction 189B causes the microprocessor 42B to store the current spot location as a saved position and then proceeds to the instruction 191B. The program proceeds from instruction 191B as previously described.

Referring again to decision instruction 184B, if the current spot of auxiliary light has been on for less than T seconds, and the spot on/off flag is off, the program advances to the decision instruction 186B as the move flag will not have been set at instruction 182B. Decision instruction 186B causes the microprocessor 42B to determine whether the location of the saved auxiliary light spot is near the location of the current auxiliary light spot. If the spot is within the area 29B for example, the program advances to the instruction 188B that causes the saved auxiliary light location to be converted into coordinate information. The program then goes to an instruction 190B, that causes the coordinate information to be transmitted to the computer 12B via the communication interface 45B. After transmitting the coordinate information to the computer 12B, the program returns to instruction 158B and proceeds as previously described.

From the foregoing, it should be understood that the same coordinate information will be transmitted to the computer 12B so long as the user executes the double click operation within the predetermined periods of time t and T, respectively, and keeps the auxiliary light spot 27B within the boundaries of an associated imaginary rectangular area, such as the area 29B.

Referring again to decision instruction 164B (FIG. 4B), if the spot on/off flag is not "off," the program advances to an instruction 165B (FIG. 5B) that causes the microprocessor 42B to set the spot on/off flag to "off." The program then proceeds to an instruction 167B, that causes the timer 64B to be cleared. After the timer 64B has been cleared at instruction 167B, the program returns to instruction 158B and proceeds as previously described.

Considering now the baud rate algorithm 300B in greater detail with reference to FIGS. 6B-8B and 10B, the baud rate algorithm 300B begins at a start instruction 302B (FIG. 10B) and proceeds to a decision instruction 303B to determine whether the calibration of the charge coupled device 34B has been completed. If calibration has not been completed, the program waits at instruction 303B. When calibration has been completed, the program goes to a decision instruction 304B to determine whether auxiliary light information has been received by the microprocessor 42B. If auxiliary light information has not been received, the program waits at decision instruction 304B until auxiliary light information is received.

When auxiliary light information is received, the program advances to decision instruction 305B to determine whether a dmux signal and a smux signal have been asserted. If the signals have not been asserted, the program goes to an instruction 307B that causes the dmux and smux signals to be asserted. The smux signal informs the computer 12B that the baud rate must be switched to the 9600 baud rate.

If the dmux and smux signals have already been asserted, the program goes to a decision instruction 306B to determine whether the auxiliary light image is new or has moved from its previous position. Referring again to instruction 307B, once the dmux and smux signals have been asserted, the program goes to the decision instruction 306B. Also, the program executes a call instruction 309B that calls an interrupt subroutine 325B that will be described hereinafter in greater detail. When the smux signal is received by the computer 12B, the computer 12B passes control to the interrupt subroutine 325B implemented in the software of the computer 12B. Referring to decision instruction 306B, if the light has moved or is new, the program goes to a decision instruction 370B to determine whether auxiliary information is ready to be sent to the computer 12B. If the information is not available, the program waits at instruction 370B.

When the auxiliary information is available to be transmitted to the computer 12B, the program advances to an instruction 372B that causes the microprocessor 42B to transmit the auxiliary light information to the computer 12B. The program then goes to a decision instruction 374B to determine whether the auxiliary light information has been transmitted to the computer 12B. If the information has not been transmitted, the program waits at decision instruction 374B until the transmission is completed.

Once the transmission of the auxiliary light information has been completed, the program advances to an instruction 376B that causes the dmux and smux signals to be negated to inform the computer 12B that the light generating device 26B no longer requires the serial port. The change in state of the dmux and smux signals causes the interrupt subroutine 325B (FIGS. 6B-8B) to be executed by the computer 12B. The computer 12B then switches the interface parameters. The program also returns to the decision instruction 304B and proceeds as previously described.
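As a rough illustration, the assert/transmit/negate sequence of the baud rate algorithm 300B might be modeled as below. The `SerialMux` class and its method names are invented for this sketch and do not appear in the disclosure.

```python
class SerialMux:
    """Hypothetical model of the dmux/smux signal lines and the serial path."""
    def __init__(self):
        self.dmux = False   # negated: the serial port belongs to the mouse 10B
        self.smux = False
        self.sent = []      # stand-in for data received by the computer 12B

    def transmit(self, info):
        self.sent.append(info)

def send_auxiliary_info(mux, info):
    # instruction 307B: assert dmux and smux to claim the serial port and
    # tell the computer 12B to switch to the 9600 baud rate
    if not (mux.dmux and mux.smux):
        mux.dmux = mux.smux = True
    mux.transmit(info)               # instruction 372B: send the light information
    # instruction 376B: negate the signals so the computer 12B restores
    # the serial port parameters for the mouse 10B
    mux.dmux = mux.smux = False
```

In this model, each change of the signal state would correspond to the computer 12B re-entering the interrupt subroutine 325B to reconfigure its serial port.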

Considering now the interrupt subroutine 325B in greater detail with reference to FIGS. 6B-8B, the interrupt subroutine 325B which resides in the computer 12B commences at an interrupt entry instruction 327B (FIG. 7B) and proceeds to an instruction 329B that causes a set of CPU registers (not shown) in the computer 12B to be saved. The program then proceeds to an instruction 331B that causes a set of working registers to be loaded with information to address the serial port residing in computer 12B. The program then goes to an instruction 333B that causes a serial port interrupt identification register (not shown) to be retrieved.

The program then advances from instruction 333B to a decision instruction 335B, to determine whether the retrieved information was a modem status interrupt or data available interrupt.

If the retrieved information is indicative of a modem status interrupt, the program advances to an instruction 337B that causes the computer 12B to read and save the modem status from the serial port. If the retrieved information indicates a data available interrupt, the program advances to an instruction 340B that causes the received data to be read and stored. Referring again to the instruction 337B, after the computer 12B reads and saves the modem status, the program proceeds to an instruction 339B, that causes the computer 12B to issue an end-of-interrupt operation to an interrupt subsystem (not shown). The program then goes to a decision instruction 341B to determine whether the data carrier detect signal changed causing the modem status interrupt.

If the data carrier detect signal did not change, the program proceeds to an instruction 360B (FIG. 8B) that causes the CPU registers to be restored and control returned to computer 12B.

If the decision instruction 341B determines the data carrier detect signal changed, the program proceeds to an instruction 343B that prepares the computer 12B to change the serial port parameters. The program then advances to a decision instruction 350B (FIG. 8B) that determines whether the state of the data carrier detect signal specifies that the communication interface 45B is set for the mouse 10B or the light generating device 26B via the microprocessor 42B.

If the data carrier detect signal specifies the mouse 10B, the program goes to an instruction 352B that causes the communication registers to be loaded with a set of mouse parameters that includes a baud rate parameter, a data bit packet parameter, a parity parameter, and a stop bit parameter. After loading the communication registers with the mouse parameters, the program goes to an instruction 354B that causes the new baud rate to be loaded into a serial port interface chip (not shown) to enable communication at the new baud rate. After loading the serial chip, the program goes to the instruction 360B and proceeds as previously described.

Referring again to the decision instruction 350B (FIG. 8B), if the data carrier detect signal specifies the light generating device 26B via the microprocessor 42B, the program advances to an instruction 356B that causes the communication registers to be loaded with optical input device parameters that include a baud rate parameter, a data packet parameter, a parity parameter, and a stop bit parameter. After loading the communication registers, the program goes to instruction 354B and proceeds as previously described.
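The parameter selection at decision instruction 350B can be sketched as a table lookup keyed on the data carrier detect signal. Only the baud rates (1200 baud for the mouse, 9600 baud for the optical input device) come from the description; the remaining parameter values and the polarity of the signal shown here are assumptions made for illustration.

```python
# Hypothetical parameter tables; data bits, parity, and stop bits are
# assumed values, not taken from the disclosure.
MOUSE_PARAMS = {"baud": 1200, "data_bits": 7, "parity": "none", "stop_bits": 1}
OPTICAL_PARAMS = {"baud": 9600, "data_bits": 8, "parity": "none", "stop_bits": 1}

def select_port_params(data_carrier_detect):
    # decision instruction 350B: the state of the data carrier detect
    # signal selects which device's parameters are loaded into the
    # serial port (the polarity shown is an assumption)
    return OPTICAL_PARAMS if data_carrier_detect else MOUSE_PARAMS
```

The selected table would then be written to the serial port interface chip, as described at instruction 354B.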

After the data received on the serial port is stored at instruction 340B, the program advances to a decision instruction 342B to determine whether a complete packet or record has been received. If the record is not complete, the program goes to an instruction 346B that causes an end-of-interrupt signal to be generated to the interrupt subsystem. After the end-of-interrupt signal is generated, the program goes to instruction 360B and proceeds as previously described.

If a determination is made at decision instruction 342B that the packet was complete, the program goes to an instruction 341B and processes the coordinate information received from either the mouse 10B or the optical input device 34B. After processing the data, the program advances to instruction 346B and proceeds as previously described.

Referring now to the communication interface 45B in greater detail with reference to FIG. 11B, the communication interface 45B includes a gang switch 49B having three discrete switches 50B, 51B and 52B for dynamically switching the data path between a low baud rate device, such as 1200 baud, and a high baud rate device, such as 9600 baud. The communication interface also includes a pull up resistor 82B for helping to establish the state of the system operation; when the optical auxiliary input device 78B is not plugged into connector 61B and the mouse 10B is used, the dmux signal is a logical low.

The communication interface 45B further includes a set of three DB9 pin connectors. In this regard, the set of connectors includes a host computer connector 60B for the computer 12B, an optical input device connector 61B for the microprocessor 42B, and a mouse connector 62B for the mouse 10B.

The signal names of the connections between the computer 12B and the input devices, such as the optical auxiliary input arrangement 9B and the mouse 10B, are defined by an IEEE RS-232C specification serial port to external modem as mapped to a 9-pin connector by IBM corporation.

Considering now the host computer connector 60B in greater detail with reference to FIG. 11B, Table IB illustrates the connector pin numbers and the signals carried by each respective pin.

Table IB

Considering now the optical input device connector 61B in greater detail with reference to FIG. 11B, Table IIB provides the connector pin numbers and signals carried by each respective pin, as used by the communication interface 45B.

Table IIB

Considering now the mouse connector 62B in greater detail with reference to FIG. 11B, Table IIIB shows the signals carried by each respective pin.

Table IIIB

Considering now the operation of the communication interface 45B with reference to FIG. 11B, when the microprocessor 42B causes the signal dmux to be generated, switch 50B is forced to a closed position establishing a data transmission path between the microprocessor 42B and the computer 12B. The dmux signal also causes switch 51B to move to an opened position to terminate the data communication path between the mouse 10B and the computer 12B.

The dmux signal further causes switch 52B to move to an opened position to inhibit the mouse 10B from receiving data from the computer 12B. From the foregoing, it should be understood that the dmux signal causes the communication path from the host computer 12B to be switched either to the optical auxiliary input port for the light generating device 26B or to the mouse port for the mouse 10B.
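The routing behavior of the gang switch 49B can be summarized as a truth table driven by the dmux signal. The dictionary keys below are illustrative names built from the switch reference numerals, not labels from the drawing.

```python
def switch_states(dmux):
    # When dmux is asserted, switch 50B closes to connect the optical
    # input device to the host, while switches 51B and 52B open to
    # isolate the mouse 10B from the computer 12B.
    return {
        "sw50B_optical_to_host": dmux,
        "sw51B_mouse_to_host": not dmux,
        "sw52B_host_to_mouse": not dmux,
    }
```

With dmux negated (the pull-up default when no optical device is connected), both mouse paths remain closed and the optical path remains open.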

Considering now the below screen click algorithm 500B in greater detail with reference to FIG. 9B, the below screen click algorithm 500B commences at a start instruction 501B and proceeds to an instruction 502B that causes the microprocessor 42B to set its saved position register to none and to set a double click flag to disable the below screen double click feature.

The program next executes an instruction 504B that causes the charge coupled device 34B to execute another scanning sequence. The program then proceeds to a decision instruction 506B to determine whether the scan has been completed. If the scan has not been completed, the program waits at decision instruction 506B.

When the scan is completed, the program goes to a decision instruction 508B to determine whether an auxiliary light image has been detected. If no image was detected the program returns to instruction 504B and proceeds as previously described.

If an auxiliary light image is detected, the program goes to a decision instruction 510B to determine whether the auxiliary light image was detected outside of and below the image 24B. If the auxiliary light image was not outside the image 24B, the program goes to an instruction 520B that causes the raster scan coordinates of the auxiliary light image to be saved.

Next the program executes an instruction 522B that causes the below screen double click enable flag to be set on. The program then goes to an instruction 524B that converts the raster scan coordinates into image coordinate information and then transmits the coordinate information to the computer 12B. The program returns to instruction 504B and proceeds as previously described.

Referring again to decision instruction 510B, if the detected auxiliary light image was below and outside the image 24B, the program advances to a decision instruction 512B to determine whether the below screen double click enable flag was set. If the flag was not set, the program returns to instruction 504B and proceeds as previously described.

If the enable flag was set, the program goes to an instruction 514B that sets the double click flag to disable the below screen double click feature. The program then advances to an instruction 516B that causes the double click command to be transmitted to the computer 12B from the saved position. The program then returns to instruction 504B and continues as previously described.
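One way to picture the below screen click algorithm 500B is as a small state machine processed once per detected spot. This Python sketch assumes raster rows increase downward, so "below the image" means a row value greater than the bottom row of the image 24B; the function and state names are invented for illustration.

```python
# Hypothetical sketch of algorithm 500B. A spot inside the image saves
# its coordinates and arms the feature (instructions 520B and 522B);
# a spot below the image, if armed, fires a double click at the saved
# position (instructions 514B and 516B).
def below_screen_step(state, spot, image_bottom):
    col, row = spot
    if row <= image_bottom:                      # spot is inside the image 24B
        state["saved"] = spot                    # instruction 520B
        state["armed"] = True                    # instruction 522B
        return ("coords", spot)                  # instruction 524B
    if state["armed"]:                           # decision instruction 512B
        state["armed"] = False                   # instruction 514B
        return ("double_click", state["saved"])  # instruction 516B
    return None                                  # flag not set: ignore
```

A spot detected below the image before any in-image detection would therefore be ignored, matching the initialization at instruction 502B.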

While the above referenced algorithm determines that an auxiliary light image was generated below the image 24B, it will be understood by those skilled in the art that an algorithm could also detect auxiliary light above the image 24B on the viewing surface 22B.

Referring now to the drawings, and more particularly to FIG. 1C thereof, there is illustrated an optical input arrangement, generally indicated at 10C, for permitting optical control of an optical auxiliary input system generally indicated at 12C, and which is constructed in accordance with the present invention.

The optical input system 12C, is more fully described in the above-mentioned U.S. patent application Serial No. 07/901,253 and includes a video information source, such as a host computer 14C, and a liquid crystal display unit 16C, for displaying a primary image 24C indicative of the image information generated by the host computer 14C. The liquid crystal display unit 16C is positioned on the stage of an overhead projector (not shown) for enabling the image information generated by the computer 14C to be projected onto a viewing surface, such as a screen 22C, as a projected image 24AC.

The optical input arrangement 10C includes an image processing apparatus 30C, mounted on the unit 16C, having a raster scan charge coupled device (CCD) video camera, indicated at 34C, for generating signals indicative of detected images, and a signal processing system 50C coupled to the image processing apparatus 30C for processing the signals for use by the host computer 14C. In this way, the optical input arrangement 10C cooperates with a light generating device 26C which generates auxiliary high intensity light information, such as a spot of reflected light 27C directed onto the image 24AC, for facilitating the modifying or changing of the primary image information 24C displayed by the liquid crystal display unit 16C.

The arrangement 10C also includes an alignment light source 40C (FIGS. 1C and 3C) mounted on the front of the image processing apparatus 30C for producing an alignment spot 46C for facilitating alignment of the image processing apparatus 30C with the projected image 24AC. In this regard, the alignment light source 40C helps a user align the optical sensing device 34C relative to the projected image 24AC, such that the field of view 25C of the device 34C is able to include the complete displayed projected image 24AC reflecting from the screen 22C. The device or camera 34C (FIG. 2C) senses light reflecting from the screen 22C and generates a reflected light information signal indicative of the luminance levels of the reflected images, including other light reflecting from the surface of the screen 22C. The optical sensing device 34C, as best seen in FIG. 1C, has a field of view, indicated generally at 25C, that is substantially larger than the primary image 24AC. A band pass filter 36C (FIG. 2C) disposed over the lens (not shown) of the device 34C limits the range of wavelengths of light permitted to be sensed by the device 34C. The optical filter 36C is of the band pass variety, whereby only a selected range of wavelengths of light is permitted to pass therethrough. A preferred range of wavelengths permitted to pass through the filter 36C is between about 660 nanometers and 680 nanometers, centered about the 670 nanometer wavelength. In this regard, the optical filter 36C excludes all optical light sources outside of the specified range from being sensed by the camera 34C.
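The pass band of the optical filter 36C can be expressed as a simple range test. The function below is a hypothetical illustration using the stated 660-680 nanometer band centered on 670 nanometers; the name and parameterization are not from the disclosure.

```python
# Hypothetical range test modeling the band pass filter 36C:
# light passes only if its wavelength lies within the 660-680 nm
# band centered on 670 nm.
def passes_filter(wavelength_nm, center=670.0, half_width=10.0):
    return abs(wavelength_nm - center) <= half_width
```

Light at the 670 nanometer center wavelength of the auxiliary spot 27C passes, while ordinary projected-image light at, say, 550 nanometers is excluded.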

The light generating device 26C generates an auxiliary optical input or command signal spot 27C as described in U.S. patent application Serial No. 07/901,253, whose optical wavelength is within the specified range of the band pass filter 36C. In this regard, the auxiliary optical command signal spot 27C is sensed by the device 34C while surrounding reflected light of the projected image 24AC, whose optical wavelengths are outside of the specified range, is not permitted to be sensed by the camera 34C.

In operation, after the image processing apparatus 30C has been pointed towards the screen 22C, the user causes the optical auxiliary input system 12C to produce the projected image 24AC on the screen 22C. As the image processing apparatus 30C is pointed generally toward the screen 22C, the apparatus 30C is able to sense the reflected light of the image 24AC. In this regard, the reflected light of the primary image 24AC generally comprises light substantially from the entire optical wavelength spectrum. Thus, to limit the wavelength spectrum to be sensed by the device 34C, the reflected light is first filtered optically by the optical filter 36C. In this way, the wavelength of the reflected light permitted to reach the camera 34C is restricted to facilitate the detection of the auxiliary optical input signal spot 27C, which is characterized by a very narrow optical wavelength falling within the band of optical wavelengths that filter 36C permits to pass through to the camera 34C.

From the foregoing, it will be understood by those skilled in the art, that the filter 36C reduces the amount of extraneous incoming light which will be sensed for detection of the auxiliary optical input signal spot 27C.

The image processing apparatus 30C is attached to the liquid crystal display unit 16C in such a way that it may be rotated on both its horizontal and vertical axes. This rotating process is more fully described in U.S. patent application Serial No. 07/955,831 and will not be described in further detail.

The image processing apparatus 30C generates a video signal indicative of the light reflecting from the screen 22C. In this regard, the signal is indicative of the image 24AC as well as the light spot 46C. This signal is coupled to the signal processing system 50C.

When the signal processing system 50C receives the video signal from the device 34C, it converts the signal into a digital signal indicative of the luminance level of the image 24AC at a given location on the screen 22C. In this regard, as the field of view of the device 34C is greater than the size of the image 24AC, the device 34C detects the image 24AC when properly aligned relative thereto.

Considering now the alignment light source 40C in greater detail with reference to FIG. 3C, the light source 40C includes a series arrangement of a source of electrical energy such as a battery 41C, a pushbutton 42C, and a light emitting diode 44C, wherein the pushbutton 42C is disposed between the source 41C and the diode 44C to permit activating and deactivating the diode 44C by depressing or releasing the pushbutton 42C. By completing the circuit between the source and the diode 44C, the diode is electrically activated and generates the alignment spot 46C.

In operation, the alignment light source 40C facilitates the method for aligning the image processing apparatus 30C with the screen 22C. In this regard, when the computer 14C commences generating the image 24C, the image 24C is projected onto the screen 22C as the projected or primary image 24AC by means of the overhead projector (not shown). The user must then align the image processing apparatus 30C in such a way that the primary image 24AC is located substantially within the camera field of view 25C.

By depressing the pushbutton 42C on the top of the apparatus 30C, the alignment spot 46C is generated. The user then manually adjusts the apparatus 30C and the display 16C to position the field of view of the device 34C, while simultaneously continuing to depress the pushbutton 42C, until the alignment spot 46C is located substantially at the center of the primary image 24AC. In this way, the primary image 24AC is contained substantially within the camera field of view 25C. Once the spot 46C is so located, the user releases the pushbutton 42C, to extinguish the alignment spot 46C. It should be understood that this alignment operation is performed without the use of the signal processing system 50C or the host computer 14C.

Considering now the signal processing system 50C in greater detail with reference to FIG. 1C, the signal processing system 50C is coupled between the image processing apparatus 30C and the host computer 14C for detecting the auxiliary optical command signal spot 27C and for transmitting detection information to the host computer 14C. The signal processing system 50C is connected to the image processing apparatus 30C via cable 52C. Cable 52C supplies a variety of signals including a VSYNC signal 61C, an HSYNC signal 63C, a VIDEO signal 65C, and a clock signal 67C. The clock signal 67C facilitates synchronization of the image processing apparatus 30C and the signal processing system 50C. The signal processing system 50C generally includes an analog to digital converter 54C for converting the video signal 65C into a digital signal 69C indicative of a given luminance level, a high speed digital processor 56C for detecting luminance levels indicative of the auxiliary optical command signal spot 27C on the screen 22C, and a clock generator for developing the clock signal 67C. The system 50C also includes a host computer interface 60C and an input/output processor 58C for facilitating communication between the system 50C and the host computer 14C.

Considering now the digital signal processor 56C in greater detail with reference to FIG. 1C, the processor 56C is coupled to the input/output processor 58C by a cable 53C. The processor 56C is a model ADSP2105, as manufactured by Analog Devices Inc. and is fully described in the ADSP2102/ADSP2105 User's Manual, February 1990, for performing various high speed operations. The operations performed by the processor 56C are performed under the control of a set of algorithms 70AC and 80AC which each will be described hereinafter in greater detail.

Considering now the operation of the digital signal processor 56C with reference to FIGS. 4C-7C, the digital signal processor 56C is controlled by algorithms 70AC and 80AC which determine when video data should be acquired, determine differences in optical intensity values for processing, and process the differences in optical intensity values to detect the presence of the auxiliary optical command signal spot 27C.

Considering now the incrementing algorithm 70AC in greater detail with reference to FIG. 4C, the algorithm 70AC enables the digital signal processor 56C to prepare for acquiring video data from the apparatus 30C. The video data to be acquired corresponds to the horizontal line N received from the analog to digital converter 54C according to the present invention. The value of horizontal line N is dependent upon the total number of horizontal lines to be scanned.

Initialization of the incrementing algorithm 70AC begins with instruction box 70C where a variable LINE_CT is cleared to 0 and a variable LINE is set to equal N. Next, the digital signal processor 56C awaits the beginning of a new scan sequence at decision box 71C. The beginning of a new scan sequence is indicated by assertion of the VSYNC signal 61C. If no VSYNC signal 61C is asserted, control returns to the decision box 71C.

When the VSYNC signal 61C is asserted, the digital signal processor 56C awaits the assertion of an HSYNC signal 63C at decision box 72C. Assertion of the HSYNC signal 63C indicates that a new horizontal line is about to be acquired by the device 34C. If no HSYNC signal 63C is asserted, control returns to decision box 72C. However, if the HSYNC signal 63C is asserted, the program proceeds to an instruction box 73C which causes the LINE_CT to be incremented by 1. Next, decision box 74C determines whether the LINE_CT is equal to N, indicating that the desired horizontal line N has been reached. If LINE_CT is not equal to N, control returns to decision box 72C where the assertion of another HSYNC signal 63C is awaited. The return operation from decision box 74C to decision box 72C will continue until the desired horizontal line N is reached.
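The VSYNC/HSYNC counting performed by boxes 70C-74C might be sketched as follows, with the sync pulses modeled as a simple event stream. The event representation and function name are assumptions made for illustration.

```python
# Hypothetical sketch of the incrementing algorithm 70AC: count HSYNC
# pulses after a VSYNC pulse until the desired horizontal line N is
# reached, mirroring boxes 70C-74C.
def wait_for_line(sync_events, N):
    line_ct = 0                       # instruction box 70C: clear LINE_CT
    seen_vsync = False
    for event in sync_events:         # simulated signal stream
        if event == "VSYNC":
            seen_vsync = True         # decision box 71C: new scan sequence
        elif event == "HSYNC" and seen_vsync:
            line_ct += 1              # instruction box 73C
            if line_ct == N:          # decision box 74C: line N reached
                return line_ct
    return None                       # line N never reached in this stream
```

HSYNC pulses arriving before the VSYNC pulse are ignored, just as the algorithm waits at decision box 71C before counting lines.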

Once the desired horizontal line N has been reached, an ACQUISITION routine or GET PIXEL DATA routine 80AC, described hereinafter in greater detail, is summoned at box 75C. After acquisition is complete, control returns from acquisition routine 80AC to the incrementing algorithm 70AC. Thereafter, incrementing algorithm 70AC continues to box 76C where the values obtained from the ACQUISITION routine 80AC are used to determine a differential intensity value D and to compare the differential intensity value D with threshold values.

Considering now the ACQUISITION routine 80AC in greater detail with reference to FIG. 5C, the ACQUISITION routine 80AC enables the digital signal processor 56C to acquire the horizontal line N and to store the differential intensity value D. Acquisition routine 80AC commences with a START command 75AC which is entered from the incrementing algorithm 70AC at box 75C. The program then proceeds to a command instruction box 80C which initializes a sample count SAMPLE_CT, a previous pixel value Y and a memory pointer PTR. Further, memory pointer PTR is set to a memory location BUFF, which indicates a free area of random access memory (RAM) to be used as a buffer.

Routine 80AC then proceeds to a decision box 81C where a determination is made as to whether or not a transmission of pixel data from the device 34C has begun. If transmission has not yet begun, control is returned to box 81C until such time that the transmission does begin.

Once transmission has begun, the program proceeds to an instruction command at box 82C which indicates that a pixel intensity value X is digitized by the analog to digital converter 54C and stored. The previous pixel value Y is then subtracted from the present pixel value X to determine the differential intensity value D in box 83C. D is then stored, as indicated in instruction box 84C, and memory pointer PTR is incremented by 1 to facilitate memory allocation. Next the program goes to instruction box 85C which replaces the value stored as Y with the value stored as X, thereby making the present value now the previous value for the next intensity value comparison, as shown in box 83C. SAMPLE_CT is incremented by 1 at box 86C before control continues to decision box 87C, where SAMPLE_CT is tested as to whether all possible pixels on the sampled horizontal line N have been acquired. If all possible pixels have not been acquired, the routine returns to box 82C where another pixel intensity value X is digitized. When all of the possible pixels have been acquired, the acquisition routine 80AC returns control to the incrementing algorithm 70AC, which continues at box 76C.
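The pixel-differencing loop of boxes 82C-87C can be sketched in a few lines. The list-based buffer stands in for the RAM area at BUFF, and the initial previous value Y of zero is an assumption; the function name is invented for illustration.

```python
# Hypothetical sketch of the ACQUISITION routine 80AC: digitize each
# pixel intensity value X and store the differential intensity value D
# relative to the previous pixel value Y.
def acquire_line(pixels):
    buff = []            # buffer at memory location BUFF, box 80C
    y = 0                # previous pixel value Y (assumed initialized to 0)
    for x in pixels:     # present pixel value X, box 82C
        d = x - y        # differential intensity value D, box 83C
        buff.append(d)   # store D and advance PTR, box 84C
        y = x            # box 85C: present value becomes previous value
    return buff
```

A rising then falling intensity profile therefore yields a positive differential followed by a negative one, which is the signature used for spot detection.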

Considering now the processing of the differential intensity value D in greater detail with respect to FIGS. 6C and 7C, there is shown in FIG. 6C a graph which indicates individual pixel intensity values 101C-110C for a typical horizontal line N. As the auxiliary optical command signal spot 27C is acquired by the signal processing system 50C, the individual pixel intensity values 101C-110C will indicate an increase in intensity magnitude followed by a decrease in intensity magnitude. The acquisition of the spot 27C is indicated in FIG. 6C as pixel intensity values 104C-108C. FIG. 7C shows the differential intensity value D, as determined by acquisition routine 80AC, for the pixel intensity values 101C-110C acquired for horizontal line N. Each data point 111C-119C represents the differential intensity value D between each previous sample and current sample. For example, intensity values 104C and 105C (FIG. 6C) are +5 units apart. The corresponding data point 114C (FIG. 7C), representing differential intensity value D, is shown as +5. Similarly, intensity values 107C and 108C (FIG. 6C) are -6 units apart and the corresponding data point 117C (FIG. 7C) is shown as -6. Thus, FIG. 7C indicates that the signal processing system 50C acts as an indicator of the change in slope of a line 100C (FIG. 6C) which represents the acquired intensity values 101C-110C. When particular change in slope characteristics are detected, the system 50C has detected the spot 27C and can then transmit this detection to the IOP 58C for communication to the host computer 14C.

Referring to FIG. 7C, in operation, a positive threshold 120C and a negative threshold 121C are established, where the threshold 121C is the negative value of the threshold 120C. The differential intensity values, such as data points 111C-119C, are calculated according to the processing described previously, but are not considered for spot detection purposes until a differential intensity value exceeds the positive threshold 120C, such as data points 114C and 115C, and is subsequently followed by a differential intensity value that is lower than the negative threshold 121C, such as data point 116C. At this point, the signal processing system 50C has detected the spot 27C from surrounding reflected light and then transmits this information to the IOP 58C which translates the information to a form compatible for interfacing with the host computer 14C.

The output of the digital signal processor 56C is coupled to the input/output processor 58C, such as the SIGNETICS 87C652, to facilitate the communication of information processed by the signal processing system 50C to the host computer 14C. A host computer interface 60C is coupled to the IOP 58C to permit transmission of data from the signal processing system 50C to the host computer 14C in a form which is compatible with the host computer 14C. The data sent via the combination of the IOP 58C and the host computer interface 60C include a DATA DISPLAY signal, an ADB signal, and an RS232 signal.

While particular embodiments of the present invention have been disclosed, it is to be understood that various different modifications are possible and are contemplated within the true spirit and scope of the appended claims. There is no intention, therefore, of limitations to the exact abstract or disclosure herein presented.
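The two-threshold detection rule just described might be sketched as follows. The function name and the use of a single symmetric threshold value (threshold 121C being the negative of threshold 120C, per the description) are illustrative.

```python
# Hypothetical sketch of the spot detection rule: the spot 27C is
# detected when a differential intensity value exceeds the positive
# threshold 120C and is subsequently followed by a value below the
# negative threshold 121C.
def detect_spot(diffs, threshold):
    rising_seen = False
    for d in diffs:
        if d > threshold:
            rising_seen = True        # crossed the positive threshold 120C
        elif rising_seen and d < -threshold:
            return True               # fell below the negative threshold 121C
    return False
```

A falling edge that is not preceded by a rising edge, such as a shadow boundary, would not trigger detection under this rule.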

Claims

What is claimed is:
1. An optical system auxiliary input calibration arrangement for an optical system including a light sensing device for generating an information signal indicative of a plurality of luminance levels reflecting from a viewing surface, said information signal having a background ambient light information portion and a primary light information portion, said calibration arrangement comprising: microprocessor means for generating timing signals for controlling the exposure time of the light sensing device; multiple gain means for increasing the strength of the primary light information portion of the information signal relative to the exposure time of the light sensing device; attenuation means for decreasing the strength of the background ambient light information portions of the information signal relative to the strength of the primary light information portion of the information signal; and discrimination means for distinguishing the difference between the background ambient light information portion and the primary light information portion of the information signal so the exposure time of the light sensing device can be adjusted relative to a plurality of background ambient lighting conditions, for permitting the light sensing device to detect video images reflecting from the viewing surface.
2. A method for adjusting the light sensitivity of a light sensing device to permit detection of video images reflecting from a viewing surface, said light sensing device generating an information signal having an ambient light information portion indicative of the ambient light reflecting from the viewing surface and a primary light information portion indicative of the video images reflecting from the viewing surface, comprising: generating timing signals for controlling the exposure time of the light sensing device; increasing the strength of the primary light information portion of the information signal relative to the exposure time of the light sensing device; decreasing the strength of the ambient light information portion of the information signal relative to the strength of the primary light information portion of the information signal; and distinguishing the difference between the ambient light information portion and the primary light information portion of the information signal so the exposure time of the light sensing device can be adjusted relative to a plurality of background ambient lighting conditions for permitting the light sensing device to detect video images reflecting from the viewing surface.
3. An optical system auxiliary light calibration arrangement for an optical system including an image projection system having a liquid crystal display unit for displaying images having a plurality of luminance levels, a projection arrangement for causing the images to be displayed on a viewing surface, and a light sensing device for generating an information signal indicative of a plurality of luminance levels reflecting from the viewing surface, said information signal having a background ambient light information portion indicative of the ambient light reflecting from the viewing surface, a primary light information portion indicative of the liquid crystal display unit images displayed on the viewing surface, and an auxiliary light information portion indicative of a user generated auxiliary light image reflecting from the viewing surface for modifying or changing the image displayed by the liquid crystal display unit, said calibration arrangement comprising: multiple gain means for increasing the strength of the primary light information portion and the auxiliary light information portion of the information signal relative to a given exposure time for the light sensing device; comparator means coupled to said multiple gain means for generating a detection signal whenever the information signal is greater than a predetermined reference level signal; sensitivity means for controlling the sensitivity of said comparator means relative to the information signal, said sensitivity means including attenuation means for controlling the strength of the background ambient light information portion of the information signal relative to the strength of the primary light information portion of the information signal and a given calibration reference level signal; means for generating a select gain signal for causing the strength of the primary light information portion of the information signal to be increased or decreased a sufficient amount to cause said detection signal 
to be indicative of only the primary light information portion of the information signal; and reference level selection means for generating a predetermined reference level signal having a sufficient electrical strength to enable said comparator means to distinguish between the primary information portion and the auxiliary information portion of the information signal and to distinguish between an auxiliary light information portion of the information signal emanating from a low intensity auxiliary light source and an auxiliary light information portion of the information signal emanating from a high intensity auxiliary light source.
4. A calibration arrangement according to claim 3, wherein said reference level selection means includes contrast means coupled to the liquid crystal display unit for causing it to selectively display a pair of contrasting images, one of said pair of images being a bright substantially white image, and the other one of said pair of images being a dark substantially noncolored image.
5. A calibration arrangement according to claim 4, wherein said reference level selection means further includes algorithm means for calculating the electrical strength of said predetermined reference level signal.
6. A calibration arrangement according to claim 5, wherein said algorithm means includes a formula: y = mx + b wherein y is a minimum voltage potential value for enabling said comparator means to distinguish between said low intensity auxiliary light source and said high intensity auxiliary light source; wherein x is the potential difference between a bright image reference level signal and a dark image reference level signal; wherein b is the potential value of said bright image reference level signal; and wherein m is a constant indicative of a luminance level for one of a plurality of different kinds of image projection systems.
7. A method for adjusting the light sensitivity of an optical auxiliary input system to permit detection of an auxiliary light image, said optical auxiliary input system including an image projection system having a liquid crystal display unit for displaying primary images having a plurality of luminance levels, a projection arrangement for causing the primary images to be displayed on a viewing surface, and a light sensing device for generating an information signal indicative of a plurality of luminance levels reflecting from the viewing surface, said information signal having a background ambient light information portion indicative of the ambient light reflecting from the viewing surface, a primary light information portion indicative of the primary images reflecting from the viewing surface, and an auxiliary light information portion indicative of a user generated auxiliary light image reflecting from the viewing surface for modifying or changing the image displayed by the liquid crystal display unit, said method comprising: using comparator means; increasing the strength of the primary light information portion and the auxiliary light information portion of the information signal relative to a given exposure time for the light sensing device; generating a detection signal whenever the information signal is greater than a predetermined reference level signal; controlling the sensitivity of said comparator means relative to the information signal by controlling the strength of the background ambient light information portion of the information signal relative to the strength of the primary light information portion of the information signal and a given calibration reference level signal; generating a select gain signal for causing the strength of the primary light information portion of the information signal to be increased or decreased a sufficient amount to cause said detection signal to be indicative of only the primary light information portion of the information 
signal; and generating a predetermined reference level signal having a sufficient electrical strength to enable said comparator means to distinguish between the primary information portion and the auxiliary information portion of the information signal and to distinguish between an auxiliary light information portion of the information signal emanating from a low intensity auxiliary light source and an auxiliary light information portion of the information signal emanating from a high intensity auxiliary light source.
8. A calibration method according to claim 7, further comprising: displaying selectively, one of a pair of contrasting images, one of said pair of images being a bright substantially white image, and the other one of said pair of images being a dark substantially noncolored image.
9. A calibration method according to claim 8, further comprising: calculating the electrical strength of said predetermined reference level signal.
10. A calibration method according to claim 9, wherein the step of calculating includes solving the equation y = mx + b; wherein y is a minimum voltage potential value for enabling said comparator means to distinguish between said low intensity auxiliary light source and said high intensity auxiliary light source; wherein x is the potential difference between a bright image reference level signal and a dark image reference level signal; wherein b is the potential value of said bright image reference level signal; and wherein m is a constant indicative of a luminance level for one of a plurality of different kinds of image projection systems.
11. A light generating device, comprising: low intensity light means for generating a low intensity laser beam to illuminate a projected video image with locating image information; high intensity light means for generating a high intensity laser beam to illuminate said projected video image with auxiliary light information; and switch means for causing selectively either said low intensity beam or said high intensity beam to be generated in response to user actuation.
12. In an optical system including a light sensing device for generating an information signal indicative of a plurality of luminance levels reflecting from a viewing surface, said information signal including luminance level information indicative of background ambient light, primary video image light, auxiliary image light and spurious image light, an image processing unit, comprising: gain means for causing said information signal to be adjusted relative to a given black level voltage; amplifier means responsive to said gain means for increasing the strength of said information signal a sufficient amount to permit the auxiliary image light information of said information signal to be discriminated from the primary video image light information of said information signal; discrimination means for distinguishing the difference between the auxiliary image light information of said information signal and the spurious image light information of said information signal and for distinguishing the difference between the auxiliary image light information of said information signal and the primary video image light information of said information signal; and signal processing means responsive to said discrimination means for generating image coordinate information indicative of the coordinate location of the primary video image light illuminated by the auxiliary image light but not illuminated by the spurious image light.
13. A method for optically emulating a mouse, comprising: converting a video information signal into primary image coordinate information to help facilitate the emulating of the mouse; said video information signal including video image information indicative of a primary image reflecting from a viewing surface; determining whether said video information signal includes a coded auxiliary light information sequence indicative of a mouse double click; determining whether said coded auxiliary light information sequence was detected within a given area of said viewing surface; and transmitting a double click coordinate location twice within a given period of time when said coded auxiliary light information sequence was detected within said given area of said viewing surface.
14. A method for optically emulating a mouse in accordance with claim 13 wherein said given area of said viewing surface is below said primary image.
15. A method for optically emulating a mouse in accordance with claim 13 wherein said given area is within a small imaginary rectangular area of the primary image reflecting from said viewing surface.
16. A method of optically emulating a mouse in accordance with claim 15, wherein said small imaginary rectangular area is defined as m by n pixel locations within the periphery of said primary image.
17. A method of optically emulating a mouse in accordance with claim 16 wherein m by n is 4 by 2 pixels.
18. An optical input arrangement for optically emulating a mouse, comprising: image processing means for converting a video information signal into primary image coordinate information to help facilitate the emulating of the mouse; said video information signal including video image information indicative of a primary image reflecting from a viewing surface; means for determining whether said video information signal includes a coded auxiliary light information sequence indicative of a mouse double click; means for determining whether said coded auxiliary light information sequence was detected within a given area of said viewing surface; and communication means for transmitting a double click coordinate location twice within a given period of time when said coded auxiliary light information sequence was detected within said given area of said viewing surface.
19. An optical input arrangement for emulating a mouse in accordance with claim 18, wherein said means for determining whether said coded auxiliary light information sequence was detected within a given area of said viewing screen includes below primary image algorithm means.
20. An optical input arrangement in accordance with claim 19, wherein said below primary image algorithm means determines whether said coded auxiliary light information sequence resulted from auxiliary light images reflecting from a designated area beyond the periphery of said primary image.
21. An optical input arrangement in accordance with claim 20, wherein said designated area is below said primary image.
22. An optical input arrangement in accordance with claim 18, wherein the second mentioned means for determining includes direct primary image algorithm means.
23. An optical input arrangement in accordance with claim 22, wherein said direct primary image algorithm means determines whether said coded auxiliary light information sequence resulted from auxiliary light images reflecting from a designated area within the periphery of said primary image.
24. An optical input arrangement in accordance with claim 23, wherein said designated area is a small imaginary rectangular area defined as a certain one of m by n pixels within the periphery of said primary image.
25. An optical input arrangement in accordance with claim 24, wherein said certain one of m by n pixels is determined by detecting an auxiliary light image having a size of about x by y pixels within the periphery of said primary image and having a duration of no greater than t seconds.
26. An optical input arrangement in accordance with claim 24, wherein said certain one of m by n pixels is 12 by 6 pixels.
27. An optical input arrangement in accordance with claim 26, wherein a more preferred m by n pixels is 8 by 4 pixels.
28. An optical input arrangement in accordance with claim 27, wherein the most preferred m by n pixels is 4 by 2 pixels.
29. A method in accordance with claim 13, wherein said coded auxiliary light information sequence is defined by two successive auxiliary light pulses separated by no more than t seconds and neither one of the two light pulses having a duration of greater than T seconds.
30. A method in accordance with claim 29, wherein T is about 0.75 seconds and t is about 0.75 seconds.
31. A communication interface device, comprising: connector means for connecting a plurality of input units having substantially different communication rates to a host computer, said host computer having at least two separate communication speeds for receiving information from said input units; switching means coupled to said connector means for establishing selectively a designated communication path between the host computer and an individual one of said plurality of input units; processor means for generating a selection signal for causing said switching means to establish a data communication path between said host computer and a single one only of said plurality of input units; and algorithm means disposed partially in said processor means and partially within said host computer for causing said host computer to receive data at a certain one of its communication rates.
32. An optical input arrangement for a liquid crystal display system for projecting an image onto a surface, comprising: means for sensing optically the projected image; means for discriminating electrically an auxiliary optical command signal from the projected image, said auxiliary optical command signal having a narrow band of wavelengths; and optical means for filtering incoming light received from the surface to a narrow band to pass only said narrow band of optical wavelengths so that the electrical discrimination of said auxiliary optical command signal is facilitated.
33. An auxiliary optical command arrangement according to claim 32, wherein said optical means restricts said optical wavelengths to between about 600 nanometers and 740 nanometers.
34. An auxiliary optical command arrangement according to claim 32, wherein said optical means restricts said optical wavelengths to between about 635 nanometers and 705 nanometers.
35. An auxiliary optical command arrangement according to claim 32, wherein said optical means restricts said optical wavelengths to between about 660 nanometers and 680 nanometers.
36. An auxiliary optical command arrangement according to claim 32, wherein said auxiliary optical command signal wavelength band is between about 665 nanometers and 675 nanometers.
37. An auxiliary optical command arrangement according to claim 36, wherein said auxiliary optical command signal wavelength band is centered at about 670 nanometers.
38. A method using an auxiliary optical command arrangement for a liquid crystal display system for projecting an image onto a surface, comprising: sensing optically the projected image; discriminating electrically an auxiliary optical command signal from the projected image; filtering incoming light received from the surface to a narrow band to pass only said narrow band of optical wavelengths so that the electrical discrimination of said auxiliary optical command signal is facilitated.
39. An optical input arrangement for a liquid crystal display system for projecting an image onto a surface, comprising: means for sensing optically the projected image having an associated viewing area; means for generating an alignment optical signal coupled to said sensing means to produce an alignment light spot on said viewing area to facilitate adjustment of said sensing means, wherein when said alignment light spot is adjusted to be substantially at the center of said viewing area, said viewing area encompasses all of the projected image; and means for discriminating an auxiliary optical input signal from the projected image.
40. An auxiliary optical command arrangement according to claim 39, wherein said sensing means includes lens means having an optical center to direct incoming light.
41. An auxiliary optical command arrangement according to claim 40, wherein said generating means includes a high intensity light source mounted substantially on a horizontal optical axis of said lens means and in close proximity to said optical center.
42. An auxiliary optical command arrangement according to claim 41, wherein said high intensity light source is a light emitting diode.
43. An auxiliary optical command arrangement according to claim 41, wherein said high intensity light source is a laser.
44. A method of using an optical input arrangement for a liquid crystal display system for projecting an image onto a surface, comprising: sensing optically the projected image, said image having an associated viewing area; generating an alignment optical signal to produce an alignment light spot on said viewing area; adjusting said alignment light spot to be substantially at the center of said viewing area; discriminating an auxiliary optical input signal from the projected image.
45. An optical input arrangement for a liquid crystal display system for projecting an image onto a viewing surface, comprising: means for sensing optically light reflected from the viewing surface; means for discriminating a user controlled optical input image from the projected image, said optical input image having substantially higher intensity than said projected image; said discriminating means including means for determining a differential intensity value of the light reflected from the viewing surface; and said discriminating means further including means for detecting when said differential intensity value exceeds a positive threshold amount and substantially immediately thereafter decreases more than a negative threshold amount, thereby indicating the detection of said optical input image from said projected image.
46. An optical input arrangement according to claim 45, wherein said means for sensing optically includes a video camera.
47. An auxiliary optical command arrangement according to claim 45, wherein said discriminating means further includes a digital signal processor.
48. A method of using an optical input arrangement for a liquid crystal display system for projecting an image onto a viewing surface, comprising: sensing optically reflected light from the viewing surface; converting the sensed reflected light into a digital signal for facilitating representation of intensity of said reflected light; determining a differential intensity value of said reflected light; and detecting when said differential intensity value increases more than a positive threshold amount and substantially immediately thereafter decreases more than a negative threshold amount, thereby indicating the detection of a user controlled optical input signal having a substantially higher intensity than the intensity of the projected image.
PCT/US1993/000874 1992-02-03 1993-02-02 Optical system auxiliary input calibration arrangement and method of using same WO1993015496A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US82991692 1992-02-03 1992-02-03
US82988092 1992-02-03 1992-02-03
US07/829,916 1992-02-03
US07/829,880 1992-02-03

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
DE1993630637 DE69330637D1 (en) 1992-02-03 1993-02-02 Calibration system for an additional input signals using methods in an optical system and
DE1993630637 DE69330637T2 (en) 1992-02-03 1993-02-02 Calibration system for an additional input signals using methods in an optical system and
EP19930904791 EP0625276B1 (en) 1992-02-03 1993-02-02 Optical system auxiliary input calibration arrangement and method of using same
JP51347593A JPH07503562A (en) 1992-02-03 1993-02-02

Publications (1)

Publication Number Publication Date
WO1993015496A1 (en)

Family

ID=27125305

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1993/000874 WO1993015496A1 (en) 1992-02-03 1993-02-02 Optical system auxiliary input calibration arrangement and method of using same

Country Status (1)

Country Link
WO (1) WO1993015496A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2485573C2 (en) * 2006-10-12 2013-06-20 Конинклейке Филипс Электроникс Н.В. System and method of controlling light

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3885096A (en) * 1972-07-15 1975-05-20 Fuji Photo Film Co Ltd Optical display device
US4280135A (en) * 1979-06-01 1981-07-21 Schlossberg Howard R Remote pointing system
US4523231A (en) * 1983-01-26 1985-06-11 Ncr Canada Ltd - Ncr Canada Ltee Method and system for automatically detecting camera picture element failure
US4745402A (en) * 1987-02-19 1988-05-17 Rca Licensing Corporation Input device for a display system using phase-encoded signals
US4846694A (en) * 1988-06-20 1989-07-11 Image Storage/Retrieval Systems, Inc. Computer controlled, overhead projector display
US5138304A (en) * 1990-08-02 1992-08-11 Hewlett-Packard Company Projected image light pen
US5146049A (en) * 1990-01-22 1992-09-08 Fujitsu Limited Method and system for inputting coordinates using digitizer

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP0625276A4 *

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AT AU BB BG BR CA CH DE DK ES FI GB HU JP KP KR LK LU MG MN MW NL NO PL PT RO RU SD SE

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
COP Corrected version of pamphlet

Free format text: PAGES 1/40-34/40 AND 36/40-40/40,DRAWINGS,REPLACED BY NEW PAGES BEARING THE SAME NUMBER;AFTER THE RECTIFICATION OF OBVIOUS ERRORS AS AUTHORIZED BY THE UNITED STATES PATENT AND TRADEMARK OFFICE IN ITS CAPACITY AS INTERNATIONAL SEARCHING AUTHORITY

ENP Entry into the national phase in:

Ref country code: CA

Ref document number: 2129346

Kind code of ref document: A

Format of ref document f/p: F

WWE Wipo information: entry into national phase

Ref document number: 2129346

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 1993904791

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1993904791

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWG Wipo information: grant in national office

Ref document number: 1993904791

Country of ref document: EP