AU3122199A - 3-D imaging system - Google Patents

3-D imaging system

Info

Publication number
AU3122199A
AU3122199A
Authority
AU
Australia
Prior art keywords
unsigned
pixel
image
button
byte
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU31221/99A
Inventor
Sheldon S. Zelitt
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Visualabs Inc
Original Assignee
Visualabs Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US08/368,644 (US Patent No. 5,790,086)
Application filed by Visualabs Inc
Publication of AU3122199A
Legal status: Abandoned

Landscapes

  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Description

S F Ref: 385904D1
AUSTRALIA
PATENTS ACT 1990
COMPLETE SPECIFICATION FOR A STANDARD PATENT
ORIGINAL

Name and Address of Applicant: Visualabs Inc., Bay 10, 2915-10th Avenue N.E., Calgary, Alberta T2A 5L4, CANADA
Actual Inventor(s): Sheldon S. Zelitt
Address for Service: Spruson Ferguson, Patent Attorneys, Level 33 St Martins Tower, 31 Market Street, Sydney, New South Wales, 2000, Australia
Invention Title: 3-D Imaging System

The following statement is a full description of this invention, including the best method of performing it known to me/us:-

3-D IMAGING SYSTEM

BACKGROUND OF THE INVENTION

The present invention relates to 3-dimensional image display techniques and, in particular, to such a technique in which the use of special headgear or spectacles is not required.
The presentation of fully 3-dimensional images has been a serious technological goal for the better part of the twentieth century. As early as 1908, Gabriel Lippman invented a method for producing a true 3-dimensional image of a scene employing a photographic plate exposed through a "fly's eye" lenticular sheet of small fixed lenses. This technique became known as "integral photography", and display of the developed image was undertaken through the same sort of fixed lens lenticular sheet. Lippman's development and its extensions through the years (for example, US Patent No. 3,878,329), however, failed to produce a technology readily amenable to images which were simple to produce, adaptable to motion presentation, or capable of readily reproducing electronically generated images, the predominant format of this latter part of the century.
The passage of time has resulted in extensions of the multiple-image-component approach to 3-dimensional imagery into a variety of technical developments which include various embodiments of ribbed lenticular or lattice sheets of optical elements for the production of stereo images from a single specially processed image (for example US Patent No. 4,957,311 or US Patent No. 4,729,017, to cite recent relevant examples). Most of these suffer from a common series of deficiencies, which include severe restrictions on the viewer's physical position with respect to the viewing screen, reduced image quality resulting from splitting the produced image intensity between two separate images, and in many, parallax viewable in only one direction.
Other prior art techniques for generating real 3-dimensional images have included the scanning of a physical volume, either by mechanically scanning a laser beam over a rotating helical screen or diffuse vapour cloud, by sequentially activating multiple internal phosphor screens in a cathode-ray tube, or by physically deviating a pliable curved mirror to produce a variable focus version of the conventional image formation device. All of these techniques have proved to be cumbersome, difficult to both manufacture and view, and overall not readily amenable to deployment in the consumer marketplace.
During the same period of time, a variety of technologies relating to viewer-worn appliances emerged, including glasses employing two-colour or cross-polarized filters for the separation of concurrently displayed dual images, and virtual reality display headgear, all related to the production of stereopsis, that is, the perception of depth through the assimilation of separate left- and right-eye images. Some of these have produced stereo images of startling quality, although generally at the expense of viewer comfort and convenience, eye strain, image brightness, and acceptance among a portion of the viewing population who cannot readily or comfortably perceive such stereo imagery.
Compounding this is the recently emerging body of ophthalmological and neurological studies which suggest adverse and potentially long-lasting effects from the extended use of stereo imaging systems, user-worn or otherwise.
Japanese patent publication 62077794 discloses a 2-dimensional display device on which an image formed by discrete pixels is presented, the display device having an array of optical elements aligned respectively in front of the pixels and means for individually varying the effective focal length of each optical element to vary the apparent visual distance from a viewer, positioned in front of the display device, at which each individual pixel appears, whereby a 3-dimensional image is created.
More particularly, the optical elements in this Japanese publication are lenses made of nematic liquid crystals and the focal length of the lenses can be varied by varying an electrical field which varies the alignment of the crystals. The system requires transistors and other electrical connections directed to each microlens and special packaging between glass plates is necessary. Additionally, the change in effective focal length achieved is very small, requiring use of additional optical components such as a large magnifier lens which both renders the system unacceptably large and unduly constrains the available lateral image viewing angle.
SUMMARY OF THE INVENTION

It is an object of the present invention to provide a method for including picture depth information in a television broadcast signal.
According to a first aspect of the present invention there is provided a method of encoding a television broadcast signal comprising the steps of generating a depth signal for each pixel and adding the depth signal as a component of the broadcast signal.

BRIEF DESCRIPTION OF THE DRAWINGS

The advantages of the present invention will become apparent from the following description, viewed in conjunction with the attached drawings. Throughout these drawings, like parts are designated by like reference numbers:
FIG. 1(a) is an illustration of one embodiment of a pixel-level optical device, viewed obliquely from the rear.
FIG. 1(b) is an illustration of a different embodiment of the same type of pixel-level optical assembly which comprises three optical elements.
FIG. 2 illustrates the manner in which varying the point of input of a collimated light beam into the back (input end) of a pixel-level optical device varies the distance in space from the viewer at which that point of light appears.
FIG. 3(a) illustrates how this varying input illumination to a pixel-level optical device may be provided in one preferred embodiment by a cathode-ray tube.
FIG. 3(b) illustrates a different view of the varying input illumination, and the alignment of the pixel-level optics with pixels on the phosphor layer of the cathode-ray tube.
FIG. 3(c) illustrates the relationship between the size and aspect ratio of the collimated input beam of light and the size and aspect ratio of the pixel-level optical device.
FIG. 4(a) illustrates how an array of pixel-level optics is presented across the front of an illumination source such as the cathode-ray tube in a computer monitor, television or other essentially flat screen imaging device.
FIG. 4(b) illustrates a second preferred pattern of image tube pixels which may be employed for the purpose.
FIG. 5 illustrates the manner in which the depth signal is added to the horizontally scanned raster lines in a television or computer monitor image.
FIG. 6 illustrates how the specific point of light input to pixel-level optics may be varied using motion picture film or some other form of illuminated transparency as the illumination source.
FIG. 7 illustrates how an array of pixel-level optics may be employed to view a continuous strip of motion picture film for the viewing of sequential frames of film in the display of 3-dimensional motion pictures.
FIG. 8 illustrates a method whereby the depth component of a recorded scene may be derived through image capture which employs one main imaging camera and one secondary camera.
FIG. 9(a) illustrates the process by which a depth signal may be retroactively derived for conventional 2-dimensional imagery, thereby making that imagery capable of being displayed in 3 dimensions on a suitable display device.
FIG. 9(b) illustrates the interconnection and operation of image processing devices which may be employed to add depth to video imagery according to the process illustrated in Fig. 9(a).
FIG. 10 illustrates the application of the pixel-level depth display techniques derived in the course of these developments to the 3-dimensional display of printed images.
FIG. 11 illustrates the energy distribution of the conventional NTSC video signal, indicating the luminance and chrominance carriers.
FIG. 12 illustrates the same NTSC video signal energy distribution, but with the depth signal encoded into the spectrum.
FIG. 13(a) illustrates the functional design of the circuitry within a conventional television receiver which typically controls the vertical deflection of the scanning electron beam in the cathode-ray tube.
FIG. 13(b) illustrates the same circuitry with the addition of the circuitry required to decode the depth component from a 3-D-encoded video signal and suitably alter the behaviour of the vertical deflection of the scanning electron beam to create the 3-D effect.
FIG. 14 illustrates a preferred embodiment of the television-based electronic circuitry which executes the depth extraction and display functions outlined in Fig. 13(b).
FIG. 15 illustrates an alternative pixel-level optical structure in which the position of the input light varies radially rather than linearly.
FIG. 16 is similar to FIG. 2 but illustrating an alternative means for varying the visual distance from the viewer of light emitted from an individual pixel.
FIG. 17 illustrates how the arrangement shown in FIG. 16 is achieved in a practical embodiment.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT OF THE INVENTION

Fig. 1(a) illustrates in greatly magnified form one possible embodiment of an optical element 2 employed to vary the distance from the viewer at which a collimated point of light input into this device may appear. For reference purposes, the size of such an optical element may vary considerably, but is intended to match the size of a display pixel, and as such, will be typically, for a television monitor, in the order of 1 mm in width and 3 mm in height. Optics as small as 0.5 mm by 1.5 mm have been demonstrated for a computer monitor which is designed to be viewed at closer range, and as large as 5 mm wide and 15 mm high, a size intended for application in a large-scale commercial display designed for viewing at a considerable distance.
The materials from which these pixel-level optics have been made have been, to date, either fused silica glass (index of refraction of 1.498043), or one of two plastics, being polymethyl methacrylate (index of refraction of 1.498) or methyl methacrylate (index of refraction of 1.558). There is, however, no suggestion made that these are the only, or even preferred, optical materials from which such pixel-level optics may be fabricated.
In Fig. 1(a) the pixel-level optical element is seen obliquely from the rear, and as may be seen, while the front surface 1 of this optical device is consistently convex from top to bottom, the rear surface varies in shape progressively from convex at the top to concave at the bottom. Both linear and non-linear progressions in the variation of optical properties have been employed successfully. A collimated beam of light is projected through the optical device in the direction of the optical axis 3, and as may be seen, the collective optical refracting surfaces of the device through which that collimated light beam passes will vary as the input point of the beam is moved from the top to the bottom of the device.
Although the embodiment illustrated in Figure 1(a) possesses one fixed surface and one variable surface, variations on this design are possible in which both surfaces vary, or in which there are more than two optical refracting surfaces. Figure 1(b), for example, illustrates a second embodiment in which the pixel-level optics are a compound optical device composed of three optical elements.
Tests in the laboratory suggest that compound pixel-level optical assemblies may provide improved image quality and an improved viewing angle over single element optical assemblies, and in fact the most successful embodiment of this technology to date employs 3-element optics. However, as single element optical assemblies do operate in this invention as described herein, the pixel-level optical assemblies illustrated throughout this disclosure will be portrayed as single element assemblies for the purposes of clarity of illustration.
Fig. 2 illustrates, in compressed form for clarity of presentation, a viewer's eyes 4 at a distance in front of the pixel-level optical element 2. A collimated beam of light may be input to the back of optical device 2 at varying points, three of which are illustrated as light beams 5, 6 and 7. As the focal length of device 2 varies depending upon the input point of the light beam, FIG. 2 illustrates how the resulting point of light will be presented to the viewer at different apparent points in space 5a, 6a or 7a, corresponding to the particular previously described and numbered placement of input beams.
Although points 5a, 6a and 7a are in fact vertically displaced from one another, this vertical displacement is not detectable by the observer, who sees only the apparent displacement in depth.
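By way of illustration only, the following sketch shows the relationship at work here: a collimated beam brought to a focus by an optic of focal length f forms a point of light roughly one focal length in front of the optic, so selecting the focal length by the beam's entry point selects the point's apparent distance. The focal-length profile and the distances used are assumed values for illustration, not dimensions taken from this specification.

// Sketch: a collimated beam entering the optic converges to a point roughly
// one focal length in front of it, so varying the entry point (and with it
// the effective focal length) varies the point's apparent distance from the
// viewer.  The linear focal-length profile and the viewing distance below
// are hypothetical placeholders; the specification gives no such figures.
#include <cstdio>

double focalLengthMm(double t)              // t in [0,1]: top..bottom entry point
{
    return 20.0 + 30.0 * t;                 // assumed profile of the varying optic
}

int main()
{
    const double viewerDistanceMm = 500.0;  // assumed viewing distance
    for (double t = 0.0; t <= 1.0; t += 0.25) {
        double f = focalLengthMm(t);
        std::printf("entry %.2f  f = %.0f mm  point appears %.0f mm from viewer\n",
                    t, f, viewerDistanceMm - f);
    }
    return 0;
}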
Fig. 3(a) illustrates how, in one preferred embodiment of this invention, each individual pixel-level optical device may be placed against the surface of a cathode-ray tube employed as the illumination source. In this drawing, optical element 2 rests against the glass front 8 of the cathode-ray tube, behind which is the conventional layer of phosphors 9 which glow to produce light when impacted by a projected and collimated beam of electrons, illustrated at different positions in this drawing as beams 5b, 6b and 7b. For each of these three illustrative electron beam positions, and for any other beam position within the spatial limits of the pixel-level optical device, a point of light will be input at a unique point on the back of the pixel-level optics. The vertical position of the electron beam may be varied using entirely conventional electromagnetic beam positioning coils as found on conventional cathode-ray tubes, according to a specially prepared signal, although experiments undertaken in the lab have suggested that imagery presented at a high frame rate, that is, substantially over 100 frames per second, may require beam positioning coils which are constructed so as to be more responsive to the higher deflection frequencies inherent in high frame rates. The pattern of phosphors on the cathode-ray tube, however, must match the arrangement of pixel-level optics, in both length and spatial arrangement, that is, an optic must be capable of being illuminated by the underlying phosphor throughout its designed linear input surface. Fig. 3(b) illustrates this arrangement through an oblique rear view of pixel-level optic 2. In this diagram, adjacent phosphor pixels 35, of which 9 are presented, will be of 3 different colours as in a conventional colour cathode-ray tube, and of an essentially rectangular shape. Note that the size and aspect ratio (that is, length to width ratio) of each phosphor pixel matches essentially that of the input end of the pixel-level optic which it faces. As may be seen by observing the phosphor pixel represented by shading, the electron beam scanning this phosphor pixel can be focused at any point along the length of the phosphor pixel, illustrated here by the same 3 representative electron beams 5b, 6b and 7b. The result is that the point at which light is emitted is displaced minutely within this pixel.
Fig. 3(c) illustrates the importance of the size and aspect ratio of the beam of light which is input to pixel-level optical device 2, here shown from the rear.
The visual display of depth through a television tube is more akin in resolution requirement to the display of chrominance, or colour, than to the display of luminance, or black-and-white component, of a video image. By this we mean that most of the perceived fine detail in a video image is conveyed by the relatively high resolution luminance component of the image, over which a lower resolution chrominance component is displayed. It is possible to have a much lower resolution in the chrominance because the eye is much more forgiving where the perception of colour is concerned than where the perception of image detail is concerned. Our research in the laboratory has suggested that the eye is similarly forgiving about the perception of depth in a television image.
Having said that, however, the display of viewable depth is still generated by the physical movement of a light beam which is input to a linear pixel-level optical device, and it will be obvious that the greater the range of movement of that input light beam, the greater the opportunity to influence viewable depth.
In Fig. 3(c), pixel-level optical device 2 is roughly three times as high as it is wide. Collimated input light beam 66a, shown here in cross-section, is round, and has a diameter approximating the width of optical device 2. Collimated input light beam 66b is also round, but has a diameter roughly one-fifth of the length of optical device 2. On one hand, this allows beam 66b to traverse a greater range of movement than beam 66a, providing the prospect of a greater range of viewable depth in the resulting image, but on the other hand, this is at the expense of a cross-sectional illuminating beam area which is only approximately 36 per cent of that of beam 66a. In order to maintain comparable brightness in the resulting image, the intensity of input beam 66b will have to be approximately 2.7 times that of beam 66a, an increase which is entirely achievable.
Beam 66c is as wide as the pixel-level optical device 2, but is a horizontal oval of the height of beam 66b, that is, only one-fifth the height of optical device 2. This resulting oval cross-section of the illuminating beam is less bright than circular beam 66a, but almost twice as bright as smaller circular beam 66b. This design is highly functional, and is second only to the perfectly rectangular cross-section illuminating beam 66d. This is in fact the beam cross-section employed in our latest and most preferred embodiments of the invention.
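The brightness trade-off described above is simple area arithmetic; the short sketch below reproduces the quoted figures (approximately 36 per cent of the area, and therefore roughly 2.7 times the required intensity) from the stated 3:1 aspect ratio of the optic. It is illustrative only.

// Sketch of the beam-area arithmetic quoted above: a round beam whose
// diameter is one-fifth of the optic's length (about 0.6 of its width, since
// the optic is roughly three times as high as it is wide) has about 36% of
// the area of a beam as wide as the optic, so it needs roughly 1/0.36 ~ 2.7
// times the intensity for comparable brightness.
#include <cstdio>

int main()
{
    const double wideBeamDiameter   = 1.0;        // beam 66a: about the optic width
    const double narrowBeamDiameter = 3.0 / 5.0;  // beam 66b: one-fifth of the optic length
    double areaRatio = (narrowBeamDiameter * narrowBeamDiameter) /
                       (wideBeamDiameter * wideBeamDiameter);      // ~0.36
    std::printf("area ratio          : %.2f\n", areaRatio);
    std::printf("intensity multiplier: %.1f\n", 1.0 / areaRatio);  // ~2.7
    return 0;
}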
Fig. 4(a) illustrates how the pixel-level optics 2 are arranged into an array of rows, twelve of which are pictured for illustrative purposes, and how these are placed on the front of an illumination source, here pictured as a cathode-ray tube 10 in one preferred embodiment. As the controlled electron beam is scanned across a row of pixel-level optics, its vertical displacement is altered individually for each pixel, producing a horizontal scan line which is represented for illustrative purposes as line 15, shown both as a dotted line behind the pixel array and separately for clarity as a solid line within the ellipse to the left. As may be seen, the horizontal scan line which, in a conventional cathode-ray display is straight, is minutely displaced from the midline of the scan for each individual pixel, thereby creating an image which, varying in its distance from the viewer as it does pixel by individual pixel, contains substantial resolution in its depth perception.
Experience has shown that a minute interstitial gap between the individual pixel-level optical elements minimizes optical "cross-talk" between optical elements, resulting in enhanced image clarity, and that this isolation of the optics can be further enhanced by the intrusion of a black, opaque material into these interstitial spaces. Interstitial gaps on the order of 0.25 mm have proven to be quite successful, but gaps as small as 0.10 mm have been demonstrated, and have functioned perfectly as optical isolators, most especially when infused with the opaque material referred to above.
Arrays of these pixel-level optics have been built through the process of manually attaching each individual optic to the surface of an appropriate cathode-ray tube using an optically neutral cement. This process is, of course, arduous, and lends itself to placement errors through the limitations in accuracy of hand-assisted mechanics. Arrays of optics have, however, been very successfully manufactured by a process of producing a metal "master" of the complete array of optics in negative, and then embossing the usable arrays of optics into thermoplastic materials to produce a "pressed" replica of the master which is then cemented, in its entirety, to the surface of the cathode-ray tube. Replication of highly detailed surfaces through embossing has been raised to an artform in recent years through the technical requirements of replicating highly detailed, information-rich media such as laser discs and compact discs, media typically replicated with great accuracy and low cost in inexpensive plastic materials. It is anticipated that a preferred manufacturing technique for generating mass-produced arrays of pixel-level optics will continue to be an embossing process involving thermoplastic materials. We have, as well, successfully produced in the laboratory arrays of pixel-level optics through the technique of injection molding. To date, three layers of different pixel-level optics, each representing a different optical element, have been successfully aligned to produce an array of 3-element micro-optics. In some preferred embodiments, these layers are cemented to assist in maintaining alignment, but in others, the layers are fixed at their edges and are not cemented together.
In the placement of the pixel-level optics onto the surface of the cathode-ray or other light-generating device, precise alignment of the optics with the underlying pixels is critical. Vertical misalignment causes the resulting image to have a permanent bias in the displayed depth, while horizontal misalignment causes constraint of the lateral viewing range afforded by the 3-D display device. As well, the optical linkage between the light-generating pixels and the input surface of the pixel-level optics is enhanced by minimizing where possible the physical distance between the illuminating phosphor and the input surface of the optics. In a cathode-ray tube environment, this implies that the front surface glass of the tube to which the optics are applied should be of the minimal thickness consistent with adequate structural integrity. In large cathode-ray monitors, this front surface may be as thick as 8 mm, but we have successfully illustrated the use of these optics with a specially constructed cathode-ray tube with a front surface thickness of 2 mm. One highly successful embodiment of a cathode-ray tube has been constructed in which the pixel-level optics have actually been formed from the front surface of the tube.
Figs. 3(b) and 4(a) illustrate an essentially rectangular pattern of image tube pixels 35 and pixel-level linear optical elements 2, that is, arrays in which the rows are straight, and aligned pixel to pixel with the rows both above and below. This pattern of pixels and optics produces highly acceptable 3-D images, but should not be assumed to be the only such pattern which is possible within the invention.
Fig. 4(b) illustrates a second preferred pattern of pixels 35 in which horizontal groups of three pixels are vertically off-set from those to the left and right of the group, producing a "tiled" pattern of three-pixel groups.
As this configuration has been built in the laboratory, the three-pixel groups comprise one red pixel 35r, one green pixel 35g and one blue pixel 35b. As in a conventional 2-D television tube, colour images are built up from the relative illumination of groups, or "triads", of pixels of these same three colours. A different ordering of the three colours is possible within each triad, but the order illustrated in Fig. 4(b) is the embodiment which has been built to date in our laboratory.
Fig. 5 illustrates the minute modification by the depth signal of the horizontal scan lines in a raster image such as a conventional television picture. In the conventional cathode-ray television or computer monitor tube shown at the top right of Fig. 5, each individual picture in a motion sequence is produced by an electron beam which scans horizontally line by line down the screen, illustrated in FIG. 5 by four representative scan lines 17.
This highly regular scanning is controlled within the electronics of the television or computer monitor by a horizontal scan line generator 16, and not even variations in the luminance or chrominance components of the signal create variations in the regular top-to-bottom progression of the horizontal scan lines.
The present invention imposes a variation on that regularity in the form of the minute displacements from a straight horizontal scan which produce the depth effect.
Such variation is physically effected through the use of a depth signal generator 18 whose depth signal is added through adder 19 to the straight horizontal lines to produce the minute variations in the vertical position of each horizontal scan line, producing lines which representatively resemble lines 20. The depth signal generator portrayed in Fig. 5 is a generic functional representation; in a television set, the depth signal generator is the conventional video signal decoder which currently extracts luminance, chrominance and timing information from the received video signal, and which is now enhanced as described below to extract depth information which has been encoded into that signal in an entirely analogous fashion. Similarly, in a computer, the depth component generator is the software-driven video card, such as a VGA video card, which currently provides luminance, chrominance and timing information to the computer monitor, and which will also provide software-driven depth information to that monitor.
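Functionally, adder 19 simply offsets each pixel's nominal vertical scan position by that pixel's depth value. The minimal sketch below illustrates that per-pixel addition; the scaling factor and signal range are illustrative assumptions, since the specification requires only that the displacement remain within the input range of the pixel-level optic.

// Sketch of the depth adder: the nominal straight scan line is displaced
// minutely, pixel by pixel, by the decoded depth value.  maxOffsetPx is a
// hypothetical scaling chosen for illustration.
#include <vector>

std::vector<double> displaceScanLine(const std::vector<double>& depth, // 0..1 per pixel
                                     double nominalY,                  // straight scan position
                                     double maxOffsetPx)               // assumed full-scale offset
{
    std::vector<double> y(depth.size());
    for (std::size_t i = 0; i < depth.size(); ++i)
        y[i] = nominalY + (depth[i] - 0.5) * maxOffsetPx;  // mid-range depth leaves the line undisplaced
    return y;
}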
Fig. 6 illustrates the manner in which a film transparency 14 may be employed to provide the controlled input illumination to the pixel-level optical device 2 in another preferred embodiment of the invention. In this example, the portion of the film which is positioned behind the illustrated optical element is opaque except for one transparent point designed to allow light to enter the optical device at the desired point. The film-strip is conventionally illuminated from the rear, but only the light beam 5c is allowed through the transparent point in the film to pass through optical element 2. As may be seen, this situation is analogous to the situation in FIG. 3, in which a controlled electron beam in a cathode-ray tube was used to select the location of the illumination beam. The film transparencies employed may be of arbitrary size, and embodiments utilizing transparencies as large as eight inches by ten inches have been built.
Fig. 7 illustrates the manner in which an array 11 of pixel-level optical elements 2, twelve of which are pictured for illustrative purposes, may be employed to display imagery from a specially prepared film strip 13.
Optical array 11 is held in place with holder 12. An image 17 on film strip 13 is back-lit conventionally and the resulting image focused through a conventional projection lens system, here represented by the dashed circle 22, onto array 11, which is coaxial with film strip 13 and projection lens 22 on optical axis 23. The 3-dimensional image generated may be viewed directly or may be employed as the image generator for a 3-dimensional real image projector of known type. As well, the 3-dimensional images generated may be viewed as still images, or in sequence as true 3-dimensional motion pictures at the same frame rates as conventional motion pictures. In this embodiment, the individual pixels in film strip 13 may be considerably smaller than those utilized for television display, as the resulting pixels are intended for expansion on projection; the resolution advantage of photographic film over television displays easily accommodates this reduction in pixel size.
Fig. 8 illustrates a scene in which two cameras are employed to determine the depth of each object in a scene, that is, the distance of any object within the scene from the main imaging camera. A scene to be captured, here viewed from above, is represented here by a solid rectangle 24, a solid square 25 and a solid ellipse 26, each at a different distance from the main imaging camera 27, and therefore each possessing different depth within the captured scene. The main imaging camera 27 is employed to capture the scene in its principal detail from the artistically preferred direction. A secondary camera 28 is positioned at a distance from the first camera, and views the scene obliquely, thereby capturing a different view of the same scene concurrently with the main imaging camera.
Well known techniques of geometric triangulation may then be employed to determine the true distance from the main imaging camera which each object in the scene possesses.
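A minimal sketch of that triangulation step is given below, assuming an idealised parallel-camera geometry in which depth follows from the disparity between the two views; the baseline, focal length and disparity are illustrative parameters, as the specification does not fix the camera geometry numerically.

// Sketch of depth-from-two-views: with two parallel cameras separated by a
// baseline b and sharing a focal length f (expressed in pixels), an object
// whose image shifts by a disparity of d pixels between the two views lies
// at distance z = f * b / d from the cameras.
double depthFromDisparity(double focalLengthPx, double baselineMetres, double disparityPx)
{
    if (disparityPx <= 0.0) return -1.0;   // no match, or object effectively at infinity
    return focalLengthPx * baselineMetres / disparityPx;
}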
One preferred manner in which these calculations may be done, and the resulting depth signal generated, is in a post-production stage, in which the calculations related to the generation of the depth signal are done "off-line", that is, after the fact of image capture, and generally at a site remote from that image capture and at a pace of depth signal production which can be unrelated to the pace of real-time image capture. A second preferred manner of depth signal generation is that of performing the requisite calculation in "real-time", that is, essentially as the imagery is gathered. The advantage of real-time depth signal generation is that it enables the production of "live" 3-dimensional imagery. The computing requirements of real-time production, however, are substantially greater than those of an "off-line" process, in which the pace may be extended to take advantage of slower, but lower cost, computing capability. Experiments conducted in the laboratory suggest that the method of conducting the required computation in real-time which is preferred for reasons of cost and compactness of electronic design is through the use of digital signal processors (DSP's) devoted to image processing, i.e. digital image processors (DIP's), both of these being specialized, narrow-function but high speed processors.
As the secondary camera 28 is employed solely to capture objects from an angle different from that of the main imaging camera, this secondary camera may generally be of somewhat lower imaging quality than the main imaging camera, and therefore of lower cost. Specifically within motion picture applications, while the main imaging camera will be expensive and employ expensive film, the secondary camera may be a low cost camera of either film or video type. Therefore, as opposed to conventional filmed stereoscopic techniques, in which two cameras, each employing expensive 35 mm or 70 mm film, must be used because each is a main imaging camera, our technique requires the use of only one high quality, high cost camera because there is only one main imaging camera.
While this comparative analysis of two images of the same scene acquired from different angles has proved to be most successful, it is also possible to acquire depth cues within a scene by the use of frontally placed active or passive sensors which may not be inherently imaging sensors. In the laboratory, we have successfully acquired a complete pixel-by-pixel depth assignment of a scene, referred to within our lab as a "depth map", by using an array of commercially available ultrasonic detectors to acquire reflected ultrasonic radiation which was used to illuminate the scene. Similarly, we have successfully employed a scanning infrared detector to progressively acquire reflected infrared radiation which was used to illuminate the scene. Finally, we have conducted successful experiments in the lab employing microwave radiation as the illumination source and microwave detectors to acquire the reflected radiation; this technique may be particularly useful for capturing 3-D imagery through the use of radar systems.
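For these active-sensor variants, the per-pixel range follows from a simple time-of-flight relation, sketched below; the sensor interface itself is not described in this specification and the figures are illustrative only.

// Sketch of time-of-flight ranging as used with the ultrasonic detector
// array: the echo travels out and back, so range = speed * delay / 2.
double rangeFromEchoSeconds(double echoDelaySeconds, double propagationSpeed)
{
    return propagationSpeed * echoDelaySeconds / 2.0;
}
// e.g. rangeFromEchoSeconds(0.01, 343.0) is roughly 1.7 m for sound in air.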
Fig. 9(a) illustrates the principal steps in the process by which a depth signal may be derived for conventional 2-dimensional imagery, thereby enabling the process of retro-fitting 3-D to conventional 2-D imagery, both film and video.
In Fig. 9(a), the same series of three objects 24, 25 and 26 which were portrayed in a view from above in Fig. 8 are now viewed on a monitor from the front. In the 2-D monitor 29, of course, no difference in depth is apparent to the viewer.
In our process of adding the depth component to 2-D imagery, the scene is first digitized within a computer workstation utilizing a video digitizing board.
A combination of object definition software, utilizing well-known edge detection and other techniques, then defines each individual object in the scene in question so that each object may be dealt with individually for the purposes of retrofitting depth. Where the software is unable to adequately define and separate objects automatically, a human Editor makes judgmental clarifications, using a mouse, a light pen, touch screen and stylus, or similar pointing device to outline and define objects. Once the scene is separated into individual objects, the human Editor arbitrarily defines to the software the relative distance from the camera, i.e. the apparent depth, of each object in the scene in turn. The process is entirely arbitrary, and it will be apparent that poor judgement on the part of the Editor will result in distorted 3-D scenes being produced.
In the next step in the process, the software scans each pixel in turn within the scene and assigns a depth component to that pixel. The result of the process is represented by depth component scan line 31 on the monitor, which represents the representative depth signal one would obtain from a line of pixels across the middle of monitor scene 29, intersecting each object on the screen.
The top view of the placement of these objects presented in Fig. 8 will correlate with the relative depth apparent in the representative depth component scan line 31 in Fig. 9(a).
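The per-pixel assignment just described can be sketched as painting each Editor-defined object's depth value into a full-resolution depth map; the object masks and the 0 to 255 depth range below are illustrative assumptions rather than details taken from the specification.

// Sketch of the pixel-by-pixel depth assignment: every pixel belonging to an
// object receives the depth the Editor assigned to that object, producing a
// full-resolution "depth map" alongside the 2-D frame.
#include <cstdint>
#include <vector>

struct ObjectRegion {
    std::vector<std::uint8_t> mask;   // 1 where the pixel belongs to this object
    std::uint8_t depth;               // Editor-assigned depth, e.g. 0 = far, 255 = near
};

std::vector<std::uint8_t> buildDepthMap(int width, int height,
                                        const std::vector<ObjectRegion>& objects,
                                        std::uint8_t backgroundDepth)
{
    std::vector<std::uint8_t> depthMap(static_cast<std::size_t>(width) * height, backgroundDepth);
    for (const ObjectRegion& obj : objects)
        for (std::size_t i = 0; i < depthMap.size(); ++i)
            if (obj.mask[i]) depthMap[i] = obj.depth;   // later objects overwrite earlier ones
    return depthMap;
}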
The interconnection and operation of equipment which may be employed to add depth to video imagery according to this process is illustrated in Fig. 9(b). In this drawing, an image processing computer workstation with an embedded video digitizer 71 controls an input video tape recorder (VTR) 72, an output video tape recorder 73, and a video matrix switcher 74 (control is illustrated with the dashed lines in Fig. 9(b), and signal flow with solid lines). The video digitizer accepts a frame of video from the input VTR through the matrix switcher on command from the workstation. The frame is then digitized, and the object definition process described in Fig. 9(a) is applied to the resulting digital scene. When the depth signal has been calculated for this frame, the same frame is input to an NTSC video generator 75 along with the calculated depth component, which is added to the video frame in the correct place in the video spectrum by the NTSC generator. The resulting depth-encoded video frame is then written out to the output VTR 73, and the process begins again for the next frame.
Several important points concerning this process have emerged during its development in the laboratory. The first such point is that as the depth component is being added by an NTSC generator which injects only the depth component without altering any other aspect of the signal, the original image portion of the signal may be written to the output VTR without the necessity for digitizing the image first. This then obviates the visual degradation imparted by digitizing an image and reconverting to analog form, and the only such degradation which occurs will be the generation-to-generation degradation inherent in the video copy process, a degradation which is minimized by utilizing broadcast format "component video" analog VTR's such as M-II or Betacam devices. Of course, as is well known in the imaging industry, with the use of all-digital recording devices, whether computer-based or tape-based, there will be no degradation whatever in the generation-to-generation process.
The second such point is that as this is very much a frame-by-frame process, what are termed "frame-accurate" VTR's or other recording devices are a requirement for depth addition. The Editor must be able to access each individual frame on request, and have that processed frame written out to the correct place on the output tape, and only devices designed to access each individual frame (for example, according to the SMPTE time code) are suitable for such use.
The third such point is that the whole process may be put under computer control, and may therefore be operated most conveniently from a single computer console rather than from several separate sets of controls. Given the availability of computer controllable broadcast level component VTR's and other recording devices, both analog and digital, certain aspects of the depth addition process may be semi-automated by exploiting such computer-VTR links as the time-consuming automated rewind and pre-roll.
The fourth such point is that the software may be endowed with certain aspects of what is commonly referred to as "artificial intelligence" or "machine intelligence" to enhance the quality of depth addition at a micro feature level. For example, we have developed in the lab and are currently refining techniques which add greater reality to the addition of depth to human faces, utilizing the topology of the human face, i.e. the fact that the nose protrudes farther than the cheeks, which slope back to the ears, etc., each feature with its own depth characteristics. This will alleviate the requirement for much Editor input when dealing with many common objects found in film and video (human faces being the example employed here).
The fifth such point is that the controlling software may be constructed so as to operate in a semi-automatic fashion. By this it is meant that, as long as the objects in the scene remain relatively constant, the controlling workstation may process successive frames automatically and without additional input from the Editor, thereby aiding in simplifying and speeding the process. Of course the process will once again require Editorial input should a new object enter the scene, or should the scene perspective change inordinately. We have developed in the lab and are currently refining techniques based in the field of artificial intelligence which automatically calculate changes in depth for individual objects in the scene based upon changes in perspective and relative object size for aspects which are known to the software.
The sixth such point is that when working with still or motion picture film as the input and output media, the input VTR 72, the output VTR 73 and the video matrix switcher 74 may be replaced, respectively, with a high resolution film scanner, a digital data switch and a high resolution film printer. The remainder of the process remains essentially the same as for the video processing situation described above. In this circumstance, the injection of the depth signal using the NTSC generator is obviated by the film process outlined in Figure 8.
The seventh such point is that when working in an all-digital recording environment, as in computer-based image storage, the input VTR 72, the output VTR 73 and the video matrix switcher 74 are effectively replaced entirely by the computer's mass storage device. Such a mass storage device is typically a magnetic disk, as it is in the computer-based editing workstations we employ in our laboratory, but it might just as well be some other form of digital mass storage. In this all-digital circumstance, the injection of the depth signal using the NTSC generator is obviated by the addition to the computer's conventional image storage format of the pixel-level elements of the depth map.
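In such an all-digital store, carrying the depth map amounts to keeping one additional value per pixel alongside the colour samples; the record layout sketched below is purely illustrative, since, as noted above, the exact arrangement varies from image format to image format.

// Sketch of a per-pixel record extended with a depth element, as one possible
// way of carrying the depth map inside a computer-based image store.
#include <cstdint>
#include <vector>

struct DepthPixel {
    std::uint8_t r, g, b;   // conventional colour samples
    std::uint8_t depth;     // pixel-level element of the depth map
};

using DepthImage = std::vector<DepthPixel>;   // width * height records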
Attached as Appendix A is a copy of some of the software listing used under laboratory conditions to achieve the retro-fitting discussed above with reference to Figures 9(a) and 9(b).
Fig. 10 illustrates the application of the pixel-level depth display techniques derived in the course of these developments to the 3-dimensional display of printed images. Scene 32 is a conventional 2-dimensional photograph or printed scene. A matrix 33 of pixel-level microlenses (shown here exaggerated for clarity) is applied over the 2-D image such that each minute lens has a different focal length, and therefore presents that pixel at a different apparent depth to the viewer's eye. Viewed greatly magnified in cross section 34, each microlens may be seen to be specific in shape, and therefore optical characteristics, so as to provide the appropriate perception of depth to the viewer from its particular image pixel. While microlenses with diameters as small as 1 mm have been utilized in our laboratories to date, experiments have been conducted with fractional mm microlenses which conclude that arrays of lenses of this size are entirely feasible, and that they will result in 3-D printed imagery with excellent resolution.
In mass production, it is anticipated that the depth signal generating techniques described herein will be employed to produce an imprinting master, from which high volume, low cost microlens arrays for a given image might be, once again, embossed into impressionable or thermoplastic plastic materials in a fashion analogous to the embossing of the data-carrying surfaces of compact discs or the mass-replicated reflection holograms typically applied to credit cards. Such techniques hold the promise of large-scale, low cost 3-D printed imagery for inclusion in magazines, newspapers and other printed media. While the matrix 33 of microlenses is portrayed as being rectangular in pattern, other patterns, such as concentric circles of microlenses, also appear to function quite well.
It is important to note that the picture, or luminance, carrier in the conventional NTSC video signal occupies significantly greater video bandwidth than either of the chrominance or depth sub-carriers. The luminance component of an NTSC video picture is of relatively high definition, and is often characterized as a picture drawn with "a fine pencil". The chrominance signal, on the other hand, is required to carry significantly less information to produce acceptable colour content in a television picture, and is often characterized as a "broad brush" painting a "splash" of colour across a high definition black-and-white picture. The depth signal in the present invention is in style more similar to the colour signal in its limited information content requirements than it is to the high definition picture carrier.
One of the critical issues in video signal management is that of how to encode information into the signal which was not present when the original was constructed, and to do so without confusing or otherwise obsoleting the installed base of television receivers.
Fig. 11 illustrates the energy distribution of the conventional NTSC video signal, showing the picture, or luminance, carrier 36, and the chrominance, or colour information, carrier 37. All of the information in the video spectrum is carried by energy at separated frequency intervals, here represented by separate vertical lines; the remainder of the spectrum is empty and unused. As may be seen from Fig. 11, the architects of the colour NTSC video signal successfully embedded a significant amount of additional information (the colour) into an established signal construct by utilizing the same concept of concentrating the signal energy at separated frequency points, and then interleaving these points between the established energy frequency points of the picture carrier such that the two do not overlap and interfere with each other.
In a similar fashion, the present invention encodes still further additional information, in the form of the required depth signal, into the existing NTSC video signal construct, utilizing the same interleaving process as is employed with the chrominance signal. Fig. 12 illustrates this process by showing, once again, the same luminance carrier 36 and chrominance sub-carrier 37 as in Fig. 11, with the addition of the depth sub-carrier 38.
For reference purposes, the chrominance sub-carrier occupies approximately 1.5 MHz of bandwidth, centred on 3.579 MHz, while the depth sub-carrier occupies only approximately 0.4 MHz, centred on 2.379 MHz. Thus, the chrominance and depth sub-carriers, each interleaved with the luminance carrier, are sufficiently separated so as not to interfere with each other. While the stated sub-carrier frequency and occupied bandwidth work quite well, others are in fact possible. For example, in experiments conducted in the lab, we have successfully demonstrated substantial reduction of the stated 0.4 MHz bandwidth requirement for the depth sub-carrier by applying well-known compression techniques to the depth signal prior to insertion into the NTSC signal; this is followed at the playback end by decompression upon extraction and prior to its use to drive a depth-displaying imaging device. As well, similar approaches to embedding the depth signal into the PAL and SECAM video formats have been tested in the laboratory, although the specifics of construct and the relevant frequencies vary due to the differing nature of those video signal constructs. In an all-digital environment, as in computer-based image storage, a wide variety of image storage formats exists, and therefore, the method of adding bits devoted to the storage of the depth map will vary from format to format.
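A much simplified sketch of placing the depth information on its own sub-carrier follows; plain amplitude modulation and the sample rate are assumptions made only for illustration, as the specification fixes the sub-carrier frequency (approximately 2.379 MHz) and bandwidth (approximately 0.4 MHz) but not the modulation details.

// Sketch: modulate a band-limited depth signal onto a 2.379 MHz sub-carrier
// and sum it with the existing composite signal, in the same spirit as the
// chrominance sub-carrier at 3.579 MHz.  AM is an illustrative choice only.
#include <cmath>
#include <vector>

std::vector<double> addDepthSubcarrier(const std::vector<double>& composite,
                                       const std::vector<double>& depth,   // 0..1, same length, band-limited to ~0.4 MHz
                                       double sampleRateHz)
{
    const double kPi = 3.14159265358979323846;
    const double subcarrierHz = 2.379e6;
    std::vector<double> out(composite.size());
    for (std::size_t n = 0; n < composite.size(); ++n) {
        double carrier = std::cos(2.0 * kPi * subcarrierHz * n / sampleRateHz);
        out[n] = composite[n] + depth[n] * carrier;   // depth rides on its own sub-carrier
    }
    return out;
}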
Fig. 13(a) illustrates in functional form the circuitry within a conventional television receiver which typically controls the vertical deflection of the scanning electron beam in the cathode-ray tube, using terminology common to the television industry. While some of the details may vary from brand to brand and from model to model, the essentials remain the same.
In this diagram representing the conventional design of a television receiver, the object is to generate a sweep of the scanning electron beam which is consistent and synchronized with the incoming video signal. Signal is obtained by Tuner 49 and amplified by Video IF amp 50, then sent to Video detector 51 to extract the video signal. The output of the video detector 51 is amplified in Detector Out Amp 52, further amplified in the First Video Amplifier 53, and passed through a Delay Line 54.
Within a conventional video signal, there are 3 major components: the luminance (that is, the brightness, or "black-and-white" part of the signal); the chrominance (or colour part); and the timing part of the signal, concerned with ensuring that everything happens according to the correctly choreographed plan. Of these components, the synchronization information is separated from the amplified signal in the Synchronization Separator 55, and the vertical synchronization information is then inverted in Vertical Sync Invertor 56 and fed to the Vertical Sweep generator 64. The output of this sweep generator is fed to the electromagnetic coil in the cathode-ray tube known as the Deflection Yoke, 65. It is this Deflection Yoke that causes the scanning electron beam to follow a smooth and straight path as it crosses the screen of the cathode-ray tube.
As described earlier, in a 3-D television tube, minute variations in this straight electron beam path are introduced which, through the pixel-level optics, create the 3-D effect. Fig. 13(b) illustrates in the same functional form the additional circuitry which must be added to a conventional television to extract the depth component from a suitably encoded video signal and translate that depth component of the signal into the minutely varied path of the scanning electron beam. In this diagram, the functions outside the dashed line are those of a conventional television receiver as illustrated in Fig. 13(a), and those inside that dashed line represent additions required to extract the depth component and generate the 3-D effect.
As described in Fig. 12, the depth signal is encoded into the NTSC video signal in a fashion essentially identical to that of the encoding of the chrominance, or colour signal, but simply at a different frequency.
Because the encoding process is the same, the signal containing the depth component may be amplified to a level sufficient for extraction using the same amplifier as is used in a conventional television set for amplifying the colour signal before extraction, here designated as First Colour IF amplifier 57.
This amplified depth component of the signal is extracted from the video signal in a process identical to that used for extracting the encoded colour in the same signal. In this process, a reference, or "yardstick", signal is generated by the television receiver at the frequency at which the depth component should be. This signal is compared against the signal which is actually present at that frequency, and any differences from the "yardstick" are interpreted to be depth signal. This reference signal is generated by Depth Gate Pulse Former 59, and shaped to its required level by Depth Gate Pulse Limiter 58. The fully formed reference signal is synchronized to the incoming encoded depth signal by the same Synchronization Separator 55 used to synchronize the horizontal sweep of the electron beam in a conventional television receiver.
When the amplified encoded depth signal from First Colour IF Amplifier 57 and the reference signal from Depth Gate Pulse Limiter 58 are merged for comparison, the results are amplified by Gated Depth Synchronization Amplifier 63. This amplified signal will contain both colour and depth components, so only those signals surrounding 2.379 MHz, the encoding frequency of the depth signal, are extracted by extractor 62. This, then, is the extracted depth signal, which is then amplified to a useful level by X'TAL Out Amplifier 61.
Having extracted the depth component from the composite video signal, the circuitry must now modify the smooth horizontal sweep of the electron beam across the television screen to enable the display of depth in the resulting image. In order to modify this horizontal sweep, the extracted and amplified depth signal is added in Depth Adder 60 to the standard vertical synchronization signal routinely generated in a conventional television set, as described earlier in Fig. 13(a). The modified vertical synchronization signal which is output from Depth Adder 60 is now used to produce the vertical sweep of the electron beam in Vertical Sweep Generator 64, which, as in a conventional receiver, drives the Deflection Yoke 65 which controls the movement of the scanning electron beam. The end result is a scanning electron beam which is deflected minutely up or down from its conventional centreline to generate a 3-D effect in the video image by minutely varying the input point of light to the pixel-level optics described earlier.
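The receiver-side behaviour can be sketched as synchronous detection of the 2.379 MHz sub-carrier followed by addition to the nominal vertical sweep; the detection method and scaling below are illustrative stand-ins for the Depth Gate, extractor and Depth Adder stages described functionally above, not a circuit-level description of them.

// Sketch of the decode path: recover the depth value by synchronous detection
// of the 2.379 MHz sub-carrier (multiply by a local reference and low-pass),
// then add the result to the nominal vertical sweep as Depth Adder 60 does.
#include <cmath>
#include <vector>

std::vector<double> detectDepth(const std::vector<double>& composite, double sampleRateHz)
{
    const double kPi = 3.14159265358979323846;
    const double subcarrierHz = 2.379e6;
    std::vector<double> depth(composite.size());
    double lp = 0.0;                       // crude one-pole low-pass standing in for the filter
    const double alpha = 0.02;             // illustrative smoothing constant
    for (std::size_t n = 0; n < composite.size(); ++n) {
        double reference = std::cos(2.0 * kPi * subcarrierHz * n / sampleRateHz);
        lp += alpha * (composite[n] * reference - lp);
        depth[n] = 2.0 * lp;               // recover the modulating amplitude
    }
    return depth;
}

// The recovered depth then simply offsets the vertical sweep, as in Fig. 13(b).
double deflectedSweep(double nominalSweep, double depthValue, double gain)
{
    return nominalSweep + gain * depthValue;
}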
Fig. 14 illustrates electronic circuitry which is a preferred embodiment of those additional functions described within the dashed line box in Fig. 13.
Fig. 15 illustrates an alternative means of varying the position of the light which is input to a different form of pixel-level optical structure. In this alternative, pixel-level optical structure 39 has an appropriate optical transfer function, which provides a focal length which increases radially outwardly from the axis of the optical element 39 and is symmetrical about its axis 43. Light collimated to cylindrical form is input to the optical structure, and the radius of the collimated light cylinder may vary from zero to the effective operating radius of the optical structure. Three such possible cylindrical collimations 40, 41 and 42 are illustrated, producing from a frontal view the annular input light bands 40a, 41a and 42a respectively, each of which will produce, according to the specific optical transfer function of the device, a generated pixel of light at a different apparent distance from the viewer.
FIG. 16 illustrates, in compressed form for clarity of presentation, still another alternative means of varying the visual distance from the viewer of light emitted from an individual pixel. In this illustration, a viewer's eyes 4 are at a distance in front of the pixel-level optics. A collimated beam of light may be incident upon an obliquely placed mirror 76 at varying points, three of which are illustrated as light beams 5, 6 and 7. Mirror 76 reflects the input light beam onto an oblique section of a concave mirror 77, which, by the image forming characteristics of a concave mirror, presents the light beam at varying visual distances from the viewer 5a, 6a and 7a, corresponding to the particular previously described and numbered placement of input beams. The concave mirror may have mathematics of curvature which are of a variety of conic sections, and in our laboratory we have successfully employed all of parabolic, hyperbolic and spherical curvatures. In this embodiment, experimental results suggest that both the planar and curved mirrors should be of the first-surface variety.
FIG. 17 illustrates how, in one preferred embodiment of the arrangement shown in FIG. 16, pixel-level combinations of planar mirror 76 and concave mirror 77 are arranged against the surface of a cathode-ray tube employed as an illumination source. In the drawing, the concave mirror 77 from one pixel is combined with the planar mirror from the adjacent (immediately above) pixel to form a combined element 78, which rests against the glass front 8 of the cathode-ray tube, behind which are the conventional layers of phosphors 9 which glow to produce light when impacted by a projected and collimated beam of electrons, illustrated at different positions in this drawing as beams 5b, 6b and 7b. For each of these three illustrative positions, and for any other beam position within the spatial limits of the pixel-level optical device, a point of light will be input at a unique point to the assembly, and will therefore be presented to the viewer at a correspondingly unique point. As with the refractive embodiments of this invention, other light sources than cathode-ray are capable of being employed quite suitably.
Se o 3D0105.cpp
APNI
APPENDIX A AGENTS OF CHANGE INC.
Advanced Technology 3-D Retrofitting Controller Software Employing Touch Screen Graphical User Interface V.01.05 /1 Includes the following control elemcnis: #include #include #include #include #include #include #include "<dos.h sidio.h "<conio.h "<graphics.h stdlib.h "<string.h "<iostream.h #define MOUSE #define BUT1PRESSED #define BUTY2PRESSED #define TRUE #define FALSE 0x33 2 void ActivMouseo II activate mouse.
{ _AX = 32; geninterrupt(MOUSE); }

int ResetMouse()                       // mouse reset.
{ _AX = 0; geninterrupt(MOUSE); return(_AX); }

void ShowMouse()                       // turn on mouse cursor.
{ _AX = 1; geninterrupt(MOUSE); }

void HideMouse()                       // turn off mouse cursor.
{ _AX = 2; geninterrupt(MOUSE); }

void ReadMouse(int *h, int *v, int *but)   // pointer names h and v reconstructed
{
    int temp;
    _AX = 3; geninterrupt(MOUSE);
    // which button pressed: 1 = left, 2 = right, 3 = both.
    temp = _BX; *but = temp;
    // horizontal coordinates.
    *h = _CX;
    // vertical coordinates.
    *v = _DX;
}
class Button ii this class creates screen buttons capable of being displayed raised Iior dcpressed. Labels displayed on the buttons change colour when Ii the button is depressed, public: int button -centrex. button_centrey. button -width, button height; int I eft. top, right. bottom. text -size, text_fields. Ifont; char button ccxci buttontcext2f40]; unsigned upattern: 1/ button -centrex. button centrey is the centre of the button placement.
1/ button -width and button -heizht are the dimensions of the button in pixels.
/1 button text is the label on the button.
/1 text-si:ze is the text size for settexcstyleO.
mt mouseX. mouseY. mouseButton: inc oldMouseX. oldMouseY: inc buttonI Down. button2Down: mnt pressed: Button(inc x. mtE y, int width. mnt height. int tfields. char *biextl. char *btext2). mt Lsize, mnt f) II this constructor initializes the button variables.
button centrex x; *0 0 button-centrey y, 0.0:.obutton-width =width: button height =height; strcpy(button textl. btcxt I).
strcpy(button-tcxt2 btex.2): text size =tsize: text fields=tfields: I font= f: left=bution -cencrex button-width/2; top=butcon cencrey button height/2: 00 right=bucron centrex button width/2: .0 botom buttn~cetrey+ button height!?: old.MouseX=0: oldMouseY=0: button I Down= FALSE.
button2Down
FALSE:
pressed FALSE: void up() II draws a raised button and prints the required label on it.
setlinestyle(SOLID -LINE.upatten.NORMWIDTH): setfllstyle(SOLID FILL. LIGHTGRAY); bar3d(Ieft,top~right.botos.O.O); setcolor(WHITE); setlinestyle(SOLID-LINB.upatter.THICKWIDTH): SUBSTITUTE SHEET (RULE 26) line(left +2.bottom- 1. left+ 2.top 1): line(left+ l~top+2.right-l,top+2); setcolor(DARKGRAY); setlinestyle(SOLID_-LINEupattern.NORMWIDTH): line(left+4.bottom.3.right-I .bottom-3); line(left 3.bottor-2. right- 1. bottom-2): line(left bottorn- I.righc. 1 bottomn- 1); line(right-3.bottom. 1..right-3. top+ line(right-2.bottont- I,right-2.top linc(right- 1 .bottom-1, .right-1 .top 2); II put the required text in the button.
settextjustify(CENTERTEXT. CENTER TEXT): settextstyle(l font. HORIZ_-DIR, text-sizc); Ifcout button text.2 endl: if (text fields 1) outtextxv (button centrex. button centrev-4*(float(button button-text I); else outtextxy(buttoncentrex. button cenirev- 13*(float(button button-textil): outtextxv( button centrex. button centrey +9*(floatuburton buttonjtext.2): .:pressed =FALSE; void downo II draw a depressed button and prints the required label on it.
setlinestvle(SOLIDLINE.upattern. NORM -WIDTH): '99setfilistyie(SOLID -FILL. DARKGRAY): bar3d(Ieft.top. right. bottom.O.O): *setlincsrvle(SOLID -LINE.upattern.THICK WIDTH): line(left+2.bottom-l.left-i2.top~
I):
9999 linefleft L top +2.right- I .top 2: setcolor(LIGHTGRAY): setlinestyle(SOLIDLINE.upattern. NORM WIDTH): Iine(Ieft +4.bottom-3 .right- I .bottom-3) Iine(left 3. bottom-2. right-l .bottom-2): :line(left +2.bottom- Ijight- I .bottom-l1).
I ine(right-. bottom- 1, right-3, top+ 1 ine(right-2. bottom- 1 .right-2. top 3): 1Iine(right- I.bottom- 1. right- 1. top+ 2); II put the required text in the button.
setcolor(WHITE);
settextjustify(CENTER_TEXT, CENTER_TEXT);
settextstyle(i font. HORIZ_-DIR, text size): /1cout button text.2 end!: if (text-f ields ==l1) outtextxy(button-centrex. button centrey -4 (float (button-h e button text I); else outtextxy(button-Centrex. button cenrrey- 13*(float(button_height)150.), button textl); outtextxv(button centrex. button centrev 9(float(button button text2): SUJBSTTTE SHEET (RULE 26) pressed =TRUE: in( couchedo II determines whether a button has been touched. and returns II TRUE for yes, and FALSE for no. Touching is emulated II by a mouse click.
mtE tenmp; -AX =3; geninterrupt(MOUSE): If which button pressed: I left. 2=right, 3=both.
temp=_BX: mouseButton =temp: II horizontal coordinates.
mousex =_CX: II vertical coordinates.
mouseY= DX: if (mouscButton&BUTI PRESSED) buttonlIDown TRUE: return 0: *else if (buttonlIDown) If if button I was down and is now up. it was clicked! II check whether the mouse is positioned in the button.
if ((((mouseX - left) * (mouseX - right)) < 0) && (((mouseY - top) * (mouseY - bottom)) < 0))   // if this evaluates as TRUE then do the following (comparison operators reconstructed).
button I Down FALSE: return 1: button I Down FALSE: return 0: 1/ XXXXXXXXXXXXXXXXXXX M A I N XXXXXXXXXXXXXXXXXXXXX void main() I' this is the system main.
int Page 1 flag. Page -2 flag, Page_3_flag. Page_-4_-flag, Page flag: inc Page_6_flag, Page 7 flag. Paae 8 Nla. PaCe_9_fla2. char which: II initialize the graphics system.
int gdriver -DETECT, gniode. errorcode: initgraph(&gdriver, &gmode. "c:\\borlandc\\bgi II read the result of initialization.
errorcode = graphresult();
if (errorcode != grOk)   // an error occurred.
printf("Graphics error: grapherrormsg(errorcode)): printf("Press any key to halt:*); getcho: exit( 1); I/if (!ResetMouseO)) /1 printf("No Mouse Driver"), II set the current colours and line style.
II set BLACK (normally palette 0) to palette 5 (normally MAGENTA) II to correct a colour setting problem mnate to C
BLACK);
II activate the mouse to emulate a touch screen.
//ActivMouseO: /IShowMousco; construct and initialize buttons.
Button logo(getmaxxo/2. 100.260.130.1. "(AOCI LOGO)%." 4,1): Button auto-control 1(200.400.160.50.2."AUTO'."CONTROL".2 1); Button manual controll1(400.400.160.50.2."MANUAL"."CONTROL".2 1); Button mutel1 (568,440.110.50.1. "MUTE". 1); *//Button proceed (getmaxx2440.160.50. 1, PROCEED.".4, 1); Button c -vision(getxaxxo/2.350,450.100,l."3.D RBTRO"."".8.l); Button main menu(245.20.460.30.1l."M A I N M EN Button time -date2(245,460.460.30. 1,.Date; Time: Elapsed: Button video screen(245.217.460345. Button video message 1(245.217.160.50.2. "Video Not". *DetCEed".2,1); Button auto onoft2(555.20.130.30.2."AUTO CONTROL"."ON I OFF".5.2); Button manual control2(555.60.130.30. l."MANUAL CONTROL-,.- Button name tags2(555. 100.130.30.. l"OBJECT Button voice tags2(555.140.130.30.2. "TRIANGULATE!". "DIST. CALC. Button custom session2(555.180.130.30.1. *CUSTOM SESSION"."".5.2): Button memory framing2(555.220.130.30.1."MEMORY FRAMING"."".S.2): Button remote -commands2(555.260.130.30.2. *REMOTE ENDS *."COMMANDS".5.2): Button av-options2(555.300. 130.30.2. "AUDIO/VISUAL". "OPTIONS Button codec control2(555.340. 130.30.1. 'CODEC CONTROL". Button mcu control2(555,380.130,30. 1."MCU CONTROL","".5.2): Button dial connects2(555.420. 130.30.2. "DIAL-UP". "CONNECTIONS" 4: Button rmute2(555.460.130.30, l."MUTE%.5,2) Button ind id3(245,20.460.30.1,."PERSONAL IDENTIFICATION"."".2, 1); Button frame c-am3(555,20.130.30.1."FRAME CAMERA". Button camjpreset3(555.60,130.30, 1."CAMERA PRESET".'".52); Button autofollow3(555. 180,130.30.1. "AUTOFOLLOWING" Button return3(555.420.130.3.0,2."RETURN TO"."LAST MENU".S.2):, Button touch face3(130,418.230.35,2.'DEFINE AN OBJECT"."AND THEN TOUCH:",5,2): Button type id3(308.4 18.105.35.2. "ACQUIRE". "OBJECT" Button write id3(423.4 18.105.35.2. "LOSE" ."OBJECT" Button cancl3(555.340,130,30. 1. *CANCEL CHOICE","",5,2); Button writing space(245,425.450. 100, l."(Writing Space)",".2,1): Button typing done(555.260.130,30,2,""IYPE AND THEN","PRESS HERE".,.2); Button writing done(555.260. 130.30,2, "WRITE AND THEN", "PRESS HERE" Button dial connects6(getznaxxol2.20.604.30. 1."DIAL-UP CONNECTIONS Button directory6(getmaxxol2.60.300.301 ."DIRECTORY"." Button manual dialing6(57420.84.30,2. "MANUAL", "DIALING".5.2); Button line 16(15 1,420.84.30, 1.-LINE SUBSTITUTE SHEET (RULE 26) Button line_-26(245.420,9430.1. 'LINE 2 Button dial tone6(339.42843o.1DLpj TONE"V",5.2); Button hang up 6 4 3 3.42.84.30,1"HANG Button scroll up 6 (104.260178.30,1. SCROLL DIRECTORY Button scroll-down6(29226.1783. 1,"CROL DIRECTORY Button dial his6(198.3084 3 0 2"DIAL THIS". NTMBER,5,2); Button add...entry6(104.340.178.30.1, ADD AN ENTRY".-.5.2); Button delete_enrry 6 (292.340,178,30, VDELETE AN ENTRY,":",5 2), Button keypad6(505.320.23 0 .15 1.1. (Keypad)"-. Page_ 1: ii this is the opening screen.
set the current fill style and draw the background.
setfillsryle(INTEpjjEAVE
_FILL.DARKGRAY):
bar3d(0.0.geu~naxxo) getmaxyo.O.o); logo.upo); c vision. upo; II proceed.upo; auto -conrroll.upo,: manual_control Lupo; mutel setextstyle(TRJPLEX_FONT. HORIZ -DIR, O HNEIC settextstyle(TRIPLEXFONT. HORIZ_DIR. 4): ourtextxy(getmaxxo12.235."WELCOME"): outtextxy(getmaxx0/2.265."TO"): Page_ 1_flag =TRUE: while (Page 1 flag) 1/ temporary keypad substitute for the touch screen.
*which getcho: if (which -vision.pressed) cvis ion. downo: goro Parce_2: else c-vision.upo; if (which '2 i f (mute 1. .pressed) mute I .downo: else mute Lupo; if (which= Page 1 flag FALSE: goto pgm termintate; Page_2: II this is the main menu.
setfilstyle(INTEpjAVEFLLDApJCGRAY).
suBSTrrUTE SHEET (RULE 26) bar3d(0,0,getxnaxx(),getmaxyQ 00) main ecnu.upo; video-screen.upQ: video-Message1I.downo: auto onoft2.upo; manual control2.upQ: name tags2.upo; voice tags2.upo; custom-session2.upo; memory framing2. upo; remote commands2.upo; av -options2.upo; codec-control2.upQ; mcu control2.upo; dialconnecLs2.upQ; mute2.upo; Page_2_flag TRUE; while (Page_2_flag) temporary keypad substitute for the touch screen.
which getcho; if (which=') if (!auto_onoff2. pressed) auto_onoff2. downo; else auto-onoff2.upQ; if (which if (!manual cont rol2. pressed) manua Icc ntro12. downo: else manual control2.up(): if (which if (!nametags2.pressed) name tags2.downo; goto Page_3: else name tEags2.upo: if (which= =W'4 if (!voice-tags2.pressed) voice tags2.downO: goto Page 3.
else voice tags2.upQ; if (which= SUBSTITUTE SHEET (RULE 26) 39 if custom session2. pressed) custom isession.2.downo; eise customrisession2.upo-, if (which= 6') if' (!memoryframing2. pressed) memory framing2. downo: goto Page_3; else memory framing2.upo; if (which if remote commands2. pressed) remote-commands2. down() else remote_commands2.upo: if (which if (!av options2. pressed) avoptions2. downo: else avoptions2.upo: if (which if(!codec -conrol2 .pressed) codec-control2. downo: else codeccontrol2.upo: if (which if (!mcu -control2. pressed) mcu-control2. downo,; else mcu control2.upo: if (which =Wh) if dial1connects2. pressed) dial connects2. downo: goto Page 6: else dial connects2.upo: if (which if mute2. pressed) mute2.downO: else mute2.upo: if (which Page_2_flag, =FALSE; goto pgm terminate: Page_3: II this is the first "individual identification" menu.
II and includes the step into nametags.
SUBSTITUTE SHEET (RULE 26) SCtfillstYleINTELEAVE FIIL.DApJ(GpAY); bar 3 d(0.0.geupaxxo getmaxyo 0.) ifldid3.upO, vidco-screen.upo; video-messagel 1.downo; time -date2.upo, frame-cam3.upO; name tags2.upo; Voice tags2.upo: autofollow3.upo: rern3.upo; mute2.upo; Page_3_flag TRUE: while (Page 3 flag) temporary keypad substitute for the touch screen.
which =getcho; if (which= I ifWrrn cam3.pressed) frame-cam3.downO: else frame cam3.upo; if (which= if (!camjpreset3. pressed) campreset3. downo: else camjpreseL3.upo; if (which if 0 name-tags2. pressed) a name tazs2.downo: touch face3. upo; type_id3.upo; write id3 .upQ: cancel3.upo: type_or _write: which getchO.
/the cancel button has been pressed.
if (which= goco Page_3: type nametags.
if (which= goto Page_4; /I write nametags.
if (which= goto goto type or write: else name tagzs2.upO: if (which= W) SUBSTITUTE SHEET (RULE 26) if (lYoice Eags2.pressed) voice_Eaes2. downQ; II goto Page- 4; else voicetags2.upo;, if (which= if (!autofollow3.pressed) autofol low3. downo; IIgoto Page -4; else autofollow3.upo; if (which= if 0!return3. pressed) retutn3.downQ; goco Page 2; else return3.upo; if (which= if 0 muce2. pressed) mute2.downo: else mute2.upo; if (which=='S') Page_3flag =FALSE: goto pgmtlminate; Pg4:this is the namecags typing page.
setfillstyle(INTERLEAVE_-FILL.
DARPKGRAY):
bar 3 d(0.0.getmaxxo,gecmaxyo
O.O):
:n m id3.upQ: *video screenupo; video-message l.downo: frame-cam3.upo.
campreseL3.upo; name tag s2. downo; voice cags2.upo; autofollow3 .upo; rewurn3.upQ; mute2.upQ; keyboard.upo; typing done.upo: Page_4_flag TRUE: while (Page_4_flag) II temporary keypad substitute for the touch screen.
which getcho; if (which 7) if typing_done. pressed) typing done. downo; goto Page_3; SUBSTiTUTE SHEET (RULE 26) else typing done.upo: if (which= =W) if returnL3. pressed) rewurn.downo: goto Page_3: else return3.upo: if (which= 'c) if mute2. pressed) mute2.downO; else mute2.upo: if (which= Page_4_flagz FALSE: goto pgmterninate, this is the nametags writing page.
.setfillstyle(INTERLEAVE_-FILL.
DARKGR-AY);
bar3dC0.0.getmaxxc),getmaxy() indid3 .upo: video-Screen.up(): video -message 1. down(): frame cam3.UPO; camnpreset3.upQ: name tags2. downo; voice tags2.upo; autofollow3.upo; .retum3.upo; mute2.upo; writing space. upo; writing donexupo: Page_5_fla TRUE: while (PageS flag) II temporary keypad substitute for the touch screen.
which getchO); if (which= 7) if (!typing done.pressed) typing done. downo: goto Page_3: else typing done.upo; if (which= =Wb) if (!return3.pressed) return3.owno; goto Page_3; else return3.upo; SUBSTITUTE SHEET (RULE 26) 43 if (which V') if 0!mute2. pressed) mute2.down() else rnute2.upo: if (which= PageS _flag =FALSE: goto pgm terminate; Page_6; II this is the connections dialing and directory maintenance page.
setfiIistyle(INTERIEAVE -FILL. DA RKG KAY); bar3d(O,O~getmaxxo),germaxy() .0.0) dial -connects6.up(), directory6.upo; kcypad6.upo; scroll up6.upo:.
scroll_down6.upo; dial -this6.upO; add entrv6.upo; delete cntry6i.upo: manual dial ing6.upo: :line_16.upo; dial tone6.upo; hang up6.upo; rcturn3.upo; mute2.upo: Page_6_flag TRUE: while (Page_6_flag) II temporary keypad substitute for the touch screen.
which =getcho; if (which= =Wh) if return3. pressed) return3 .downo: goto Page 2; else return3.upo: if (which if mute2. pressed) mute2 .downQ; else mute2.upo; if (which Page_6 flag FALSE-, goto pgm terminate: pgmnternlnate: getcho; SUBSTITUTE SHEET (RULE 26) this is the closing sequence.
closegraph();
}
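The touch-screen emulation above ultimately reduces to a point-in-rectangle check against each button's bounds. The following stand-alone fragment distils that check in plain C; the helper name and the sample coordinates are illustrative and not part of the listing itself.

#include <stdio.h>

/* Returns 1 when (x, y) lies inside the rectangle [left..right] x [top..bottom]. */
/* This is the same style of sign-product test used in the touched() routine.     */
static int point_in_rect(int x, int y, int left, int top, int right, int bottom)
{
    return ((x - left) * (x - right) <= 0) && ((y - top) * (y - bottom) <= 0);
}

int main(void)
{
    /* A hypothetical 160 x 50 button centred at (200, 400). */
    int left = 120, top = 375, right = 280, bottom = 425;
    printf("%d\n", point_in_rect(200, 400, left, top, right, bottom));  /* prints 1 */
    printf("%d\n", point_in_rect(10, 10, left, top, right, bottom));    /* prints 0 */
    return 0;
}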
ARPROCES.H
/* Image Processing Header File  */
/* Area Processing Functions     */
/* written in Turbo C 2.0        */

/* Area Process Function Prototypes */
CompletionCode Convolution(BYTE huge *InImage, unsigned Col, unsigned Row,
unsigned Width, unsigned Height, short *Kernel. unsigned KernelCols, unsigned KernelRows, unsigned Scale, unsigned Absolute. BYTE huge OutlmageBufPtr): CompletionCode Real Convolution(BYTE huge lInlmage.
unsigned Col. unsigned Row, unsigned Width, unsigned Height, double *Kernel. unsigned KernelCols.
unsigned KernelRows, unsigned Scale.
unsigned Absolute. BYTE huge *OutlmagzeBufPtr): CompletionCode MedianFilter(BYTE huge 1n1mage. unsigned Col. unsigned Row.
unsigned Width, unsigned Height.
unsigned NeighborhoodCols. unsigned NeighborhoodRows.
BYTE huge Outlmai eBufPtr): CompletionCode SobelEdgeDet(BYTE huge *lnmage.
unsigned Col. unsigned Row.
unsigned Width, unsigned Height.
unsigned Threshold, unsigned Overlay.
BYTE huge *OutImageBufPtr);
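For orientation, the fragment below is a simplified, stand-alone analogue of the area convolution declared above. It operates on an ordinary row-major 8-bit buffer rather than the library's BYTE huge images and helper routines, and the function and parameter names are illustrative only.

#include <stdio.h>

/* Minimal 2D integer convolution over a row-major 8-bit image.  Edge pixels  */
/* the kernel cannot cover are left untouched, echoing the approach of        */
/* shrinking the processed region by the kernel size.                         */
static void convolve(const unsigned char *in, unsigned char *out,
                     int width, int height,
                     const short *kernel, int kcols, int krows, int scale)
{
    int x, y, kx, ky;
    int cx = kcols / 2, cy = krows / 2;
    for (y = cy; y < height - cy; y++)
        for (x = cx; x < width - cx; x++) {
            long sum = 0;
            for (ky = 0; ky < krows; ky++)
                for (kx = 0; kx < kcols; kx++)
                    sum += (long)in[(y + ky - cy) * width + (x + kx - cx)] *
                           kernel[ky * kcols + kx];
            sum >>= scale;                        /* divide by a power of two */
            if (sum < 0)   sum = 0;               /* clamp to the sample range */
            if (sum > 255) sum = 255;
            out[y * width + x] = (unsigned char)sum;
        }
}

int main(void)
{
    static const short smooth[9] = { 1, 1, 1, 1, 1, 1, 1, 1, 1 };  /* 3x3 box kernel */
    unsigned char in[5 * 5], out[5 * 5];
    int i;
    for (i = 0; i < 25; i++) { in[i] = (unsigned char)(i * 10); out[i] = in[i]; }
    convolve(in, out, 5, 5, smooth, 3, 3, 3);     /* scale 3 -> divide by 8 */
    printf("centre pixel: %u -> %u\n", in[12], out[12]);
    return 0;
}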
/* ARPROCES.C                                   */
/* Image Processing Code - Area Processing Functions */
/* written in Turbo C 2.0                       */

#include <stdio.h>
#include <stdlib.h>
#include <conio.h>
#include <dos.h>
#include <alloc.h>
#include <process.h>
#include <math.h>
#include <graphics.h>
#include "misc.h"
#include "pcx.h"
#include "vga.h"
#include "imagesup.h"
#include "arproces.h"

/* Integer Convolution Function */
CompletionCode Convolution(BYTE huge *InImage, unsigned Col, unsigned Row,
                           unsigned Width, unsigned Height, short *Kernel, unsigned KernelCols,
unsigned KernelRows, unsigned Scale.
unsigned Absolute. BYTE huge OutlmageBufPtr) *register unsigned ColExtent. RowExtent.
register unsigned ImageCol. ImageRow. Kem-Col. KernRow: unsigned ColOffset. RowOffset. TempCol. TempRow: BYTE huge *OutputlmageBuffer: long Sum: short *KernclPtr: if (ParameterCheckOK(Col. Row, Col Width. Row+ Height. 'Convolut ion')) Image must be at least the same size as the kernel/ *:if (Width KernelCols Height =KernelRows) allocate far memory buffer for output image/ OutpuhlmageBuffer (BYTE huge farcalloc(RASTERSIZE unsigned long)sizeof(BYTE)): if (OurputImageBuffer
NULL)
restorecrtmodeo: printf("Error Not enough memory for convolution output buffer\n*): return (ENoMemory); Store address of output image buffer *OutlnugeBufPtr OutputlmageBuffer: Clearing the output buffer to white will show the boarder areas not touched by the convolution. It also SUBSTTUTE SHEET (RULE 26) provides a nice white frame for the output image.
MAXCOLS.MAXROWS.WHITE):
Col~ffset KernelCols/2; RowOffset KernelRows/2: I* Compensate for edge effects Col ColOffset;- Row RowOffset: Width (KernelCols. Height ~=(KernelRows 1); Calculate new range of pixels to act upon ColExtent =Col Width; RowExtent =Row Height; for (ImageRow Row; IrnageRow RowExtent; ImageRow TempRow =IrnageRow RowOffset: for (ImageCol Col: ImageCol ColExtent: ImageCol 9 eas TempCol ImageCol ColOffset.
Sum =OL; KernelPtr Kernel: :*for (KernCol KernCol KernelCols:. KernCol for (Kern.Row 0: KernRow KernelRows: KernRow Sum (GetPixelFromlmagc(lnlmage.
TempCol+KernCol. TempRow+KernRow)* 1* If absolute value is requested/ if (Absolute) Sum =labs(Sum); I* Summation performed. Scale and range Sumn/ Sum (long) Scale; Sum (Sum MINSAMPLEVAL) MINSAMPLEVAL:Sum: Sum (Sum MAXSAMPLEVAL) MAXSAMPLEVAL:Sum: PutPixellnlmage(outputlmagenuffer imageCol.ImageRow.(BYTE)Sum): return(EKernelSize).
return(NoError);
}

/* Real Number Convolution Function. This convolution function is only used
   when the kernel entries are floating point numbers instead of integers.
   Because of the floating point operations involved, this function is
   substantially slower than the integer convolution above. */
CompletionCode RealConvolution(BYTE huge *InImage,
unsigned Col. unsigned Row.
unsigned Width, unsigned Height, double *Kernel. unsigned KernelCols.
SUBSTITUTE SHEET (RULE 26) unsiened KernelRows. unsigned Scale.
unsigned Absolute. BYTE huge OutlniageBufPtr) register unsigned ColExtent. RowExtent; register unsigned lInageCol. ImageRow. KernCol. KernRow; unsigned ColOffset. RowOffset. TcmpCol. TempRow; BYTE huge *OutputllnageBuffer: doublc Sum, double *Kemelhr; if (ParamcterChcckOK(Col .Row.Col +Width.Row +Height. "Convolution')) Image must be at least the same size as the kernel if (Width KernelCols Height KernelRows) /I allocate far memory buffer for output image OutputlmageBuffer (BYTE huge farcalloc(RASTERSIZE. (unsigned long)sizeof(BYTE)): if (OutputlrnageBuffer
==NULL)
restorecrtmodeQ: 0 printfU"Error Not enough memory for convolution output buffer\n"): :return (ENoMemory); P* Store address of output image buffer 0 OutlmageflufPtr =OutputlmagcBuffer, Clearing the output buffer to white will show the too:$*boarder areas not touched by the convolution. It also provides a nice white frame for the output image.
ClalaeraOtulm ~fe.ICLU .MINROWNUM.
MAXCOLS .MAXROWS. WHITE): ColOffset KcrnelCols/2: 6000 RowOffset =KernelRows/2: P* Compensate for cdge effects Col ColOffset: Row RowOffset: 0 Width -=(KernelCols
I);
Height -~(KernelRows 1); P* Calculate new range of pixels to act upon/ ColExtent Col Width: RowExtent =Row Height; for (IniageRow Row: ImageRow RowExtent: ImageRow TempRow ImageRow RowOffset.
for (ImageCol Col: IrnageCol CalExtent: ImageCol TempCol ImageCol ColOffset, Sum 0.0; KernelPtr Kernel; for (KernCol 0: KernCol KernelCols: KernCol for (KernRow 0: KernRow KernelRows: KernRow Sum (GetfixelFromlmage(Inlnuge.
TemnpCol +KcrnCoI. TempRow +KernRow) SUBSTITUTE SHEET (RULE 26) 49 (*KernenPr+ If absolute value is requested if (Absolute) Sum fabs(Surn): Summation performed. Scale and range Sum Sum 1=(double)(l Scale); Sum =(Sum MINSAMPLEVAL) MINSAMPLEVAL:Sum: Sum =(Sum MAXSAMPLEVAL)? MAXSAMPLEVAL:Sum; PutPixellnlmage(OUEPtputiaeBuffer.1maeCol.ImageRow,(BYTE.)Sum); else rerurn(EKernelSize): return(NoError): Byte compare for use with the qsort library function call in the Median filter function.
int £lytetompare(BYTE *Entryl, BYTE *Entry2) *if (Entryl *nr2 return(. 1); else if (Entryl *Entry2) return(l); else return(O).
CompletionCode MedianFilter(BYTE huge *Ilae unsigned Col. unsigned Row.
unsigned Width, unsigned Height.
unsigned NehborhoodCols. unsigned NeighborhoodRows.
BYTE huge *OutlmageBuf~tr) register unsigned ColExtent. RowExtent; register unsigned ImageCol. ImageRow. NeighborCol. NeighborRow: unsigned ColOffset. RowOffset. TempCol. TempRow. PixelIndex: unsigned TotalPixels. MedianIndex; BYTE huge *OutputlmageBuffer; BYTE *Pixel Values; if (ParameterCheckOK(Col.Row.Col +Width.Row +Height. "Median Filter")) Image must be at least the same size as the neighborhood if (Width NeighborhoodCols Height NeighborhoodRows) allocate far memory buffer for output imageI OutputlmageBuffer (BYTE huge farcalloc(RASTERSIZE.(unsigned long)sizeof(BYTE)); if (OutpuulmageBuffer
=NULL)
SUBSTITUTE SHEET (RULE 26) restorecrunodeQ: printf("Error Not cnough memory for median filter output buffer\n"); return (ENoMeinory);.
I' Store address of output image buffer *OutlnageBufPir OutputlmageBuffer; Clearing the output buffer to white will show the boarder areas not touched by the median filter. It also provides a nice white frame for the output image.
ClearlmageArea(OuEpuIMaeBuffer.MINCOLNUMMINROWN-tj.
MAXCOLS.MAXROWS, WHITE); I' Calculate border pixel to miss I/ ColOffset =NeighborhoodCols/2; RowOffset =NeighborhoodRows/2: I* Compensate for edge effects Col ColOffset: Row =RowOffset: *:Width -=(NeighborhoodCols 1): :Height -=(Neighborhood Rows 1).
/0 Calculate new range of pixels to act upon/ ColExtent =Col Width.
RowExtent =Row Height; TotalPixels =(Neighborhood Cols *Ne ighbo rhood Rows): MedianIndex (NeighborhoodCols *Neighborhood Rows)/2: P allocate memory for pixel buffer *1 PixelValues (BYTE calloc(TotalPixels.(unsigned)sizeof(BYTE)): if (PixelValues ==NULL) restorecrimodeO: printfU"Error Not enough memory for median filter pixel buffer\n"): return (ENoMemory); *:for (ImageRow =Row: ImageRow RowExtent: ImageRow -I) TempRow ImageRow RowOffset: for (ImageCol Col. ImageCol ColExtent: ImageCol TemnpCol ImageCol ColOffset; PixelIndex 0: for (NeighborCol 0: NeighborCol NeighborhoodCols: NeiehborCol for (NeighborRow 0: Neighbor-Row Neig2hborhood Rows: NeighborRow Pixel Values [PixelIndex +j GetPixelFromimage(Wnmage.TempCol Neighboi-Col.
TempRow +NeiighborRow): Quick sort the bright* ness values into ascending order and then pick out the median or middle value as that for the pixel.
qsort(PixelVal ues. Total Pixels.sizeof(BYTE).ByteCompare): SUSTITUTE SHEET (RULE 26) Pixel Valuesf~vedianlndex]): else return(EKernelSize): frec(Pixel Values); I* give up the pixel value buffer return(NoError); Sobel Edgze Detection Function CompletionCode SobclEdgeDet(BYTE huge *frdrnsge, unsigned Col. unsigned Row, unsigned Width. unsigned Height, unsigned Threshold, unsigned Overlay, BYTE huge OutlmaeeBufPtr) register unsigned Col~xtent. RowExtent: :register unsigned ImageCol. ImageRow.
unsigned PtA. PtB, PtC. PtD, PLE. PtF, PtG, PU-I. PtI: unsigned LineAElAveAbove. LineAElAveBclow. LineAElMaxDif: **unsigned LineBEHAveAbove. LineBEHAveBelow, LineBEHMaxDif-.
unsigned LineCEGAveAbove, LineCEGAveBelow. LineCEGMaxDjf; unsigned LineDEFAveAbove. LineDEFAveBelow. LineDEFMaxDif: unsigned MaxDif: BYTE huge 'OurputimageBuffer: if (ParameterCheckOK(Col, Row. Col Width, Row Heigiht. Sobel Edge Detector")) allocate far memory buffer for output image OutputlmagcBuffer (BYTE huge farcalloc(RASTERSIZE (unsigned long)stzeof(BYTE)): if (OutputimagieBuffer
==NULL)
restorecrtmodC(), printf("Error Not enough memory for Sobel output bufferfn"); return (ENoMemory);.
Store address of output image buffer/ *Qut~mageBufPtr OutputimageBuffer: Clearing the output buffer
MAXCOLS.MAXROWS.BLACK):
I' Compensate for edge effects of 3x3 pixel neighborhood/ Col+ =1; Row 1: Width Height Calculate new range of pixels to act upon1 S UBSSTITUTE S HEET (R ULE 26) ColExtent =Col Width: RowExtent =Row Height: for (ImageRow =Row: IrnageRow RowExtent: ImageRow for (ImageCol =Col. IrnageCol ColExtent: IrnageCol Get each pixel in 3x3 neighborhood PEA GetPixelFromlmage(Inlmge.imageCol I .ImageRow- I), PEB Ger.PixelFromimgc(Inimage.imageCoI .ImagcRow-l); PtC GetPixelFromIrage(In-nage.mgeCoI I .lmageRow- I); PtD GetPixeFromxgeqlnmagearageColIInigeRow PtE GetPixelFroilmage(Ipinuee,inmageCoI .lmageRow PEF -GetPixelFromlmage(Inrrnug.lmageCol+.ImgRow PtG GetPixelFromlnage(lnixnagelimageCol-.I.mageRow PtH GetPixelFrommage(InlageiageCoI .ImageRow Ptl GetPixelFromlmage(InImage IntageCol ImageRow Calculate average above and below the line.
Take the absolute value of the difference.
LineAEIAveBelow =(PID+PtG+PaH)/3: LineAElAveAbove (PtB+PtC PtF)/3, LineAElMaxDif abs(LineAflL~veBelow-LineAElAveAtbove); LineBEHAveBelow (PtA +PtD+PtG)/3; **LineBEHAveAbove (P(C+PtF+PdmI3; LineBEHMaxDif abs(LineBEHAveBelow-LineBEHAveAbove), LineCEGAveBelow (PEF+PtII+PtD)/3; LineCEGAveAbove (PtA +PtB+PtD)/3: LineCEGMaxDif abs(LineCEGAveBelow-LineCEGAveAbove); LineDEFAveBelow (PtG+PtH+PtW/3: LineDEFAveAbove (PtA+PtB+PtC)/3, LineDEFMaxDif abs(LineDEFAveBelow-LineDEFAveAbove); Find the maximum value of the absolute differences from tefour possibilities.
a'..MaxDif MAX(LineAEIMaxDif.LincBEHMaxDiD.
MaxDif =MAX(LineCEGMaxDifMaxif) MaxDif MAX(LineDEFMaxDif.Max~i0 the pixel of interest (center pixel) to white. If below the threshold optionally copy the input image tthe output image. This copying is controlled by the parameter Overlay.
if (MaxDif Threshold) PlutPixellnmage(outputlmageBuffer~imag~eCol.ImageRow.
WHIT):
else if (Overlay) PutPixelInlmage(OutputimageBuffer.ImageColxmaeR 0 w.Pt); rturM IN oEr or); SUBSTITUTE SHEET (RULE 26) pFRPOCES.H Image Procesing Header File Framec Processing Functitins P written in Turbo C 2.0 User defined image combination type typedef enum (And. Or.Xor.Add.Sub.Mult.Div. Min Max.Ave.Overlay) BitFunction; Frame Process Function Prototypes void Combinemages(BYTE huge *Slmage, unsigned SCol. unsigned SRow.
unsigned SWidth. unsigned SHeight.
BYTE huge *Dlmage, unsigned DCoI. unsigned DRow, enum BitFunction CombineType.
short Scale), a 5 5 SUBSTITUTE SHEET (RULE 26)
FPROCES.C
i Image Processing Code '~Frame Process Functions P written in Turbo C 2.0 #include stdio.h #include stdlib.h #includc conio.h #include dos.h A/nclude alloc.h A/nclude process.h include graphics.h A/nclude "misc.h' #include "pcx.h" #/include "vga.h" #include "imagesup.h" #/include "frprocess.h" Single function performs ail image combinations/ void Combinelmages(BYTE huge *Slmage.
unsigned SCol. unsigned SRow.
unsigned SWidth. unsigned SHeight.
**BYTE huge *Dlmage.
unsigned DCoI. unsigned DRow, enurn BitFunction CombineType, short Scale) register unsigned SlmageCol. SImageRow. DestCol; short SData. DData; unsigned SColExtent. SRowExtent: if (ParameterCheckOK(SCol.SRow.SCol SWidth.SRow SHeight."Combinelmages") ParameterCheckOK(DCol.DRow.DCol SWidth.DRow +SHeight."Combinelmages')) SColExtent =SCol+SWidth: *SRowExtent =SRow+SHeight: for (SImageRow SRow: SimageRow SRowExtent: SlmageRow Reset the destination Column count every row/ :DestCol DCoI; for (SImageCol SCol: SimageCol SColExtent: SlmageCol 1* Get a byte of the source and dest image data SData GetPixelFromlmage(Slrnage.SlmageCol.SlmageRow).
DData GctPixelFromlmage(Dlmage.DestCol.DRow): 1* Combine source and dest data according to parameter/ switch(CombincType) case And: DData SData; break, case Or: DData I=SData; break: case Xor: DData SData: break; SUBSTITUTE SHEET (RULE 26) case Add: DData SData; break, case Sub: DData -=SData; break; case Mult: DData SData; break; case Div: if (SData 0) DData 1=SData, break: case Min: DData MIN(SData.DData); break: case Max: DData MAX(SData.DData): break: case Aye: DData (SData+DData)/2: break, Case Overlav: DData* SData: *break: Scale the resultant data if requested to. A positive Scale value shifts the destination data to the right thereby dividing it by a power of two. A zero Scale value leaves the data untouched. A negative Scale value shifts the data left thereby multiplying it by a power of two.
if (Scale 0) DData abs(Scale): else if (Scale 0) DDa~a Scale: 1* Don't let the pixel data get out of range1 DData (DData MINSAMPLEVAL) MINSAMPLEVAL:DData: DData (DData MAXSAMPLEVAL) MAXSAMPLEVAL:DData: PutPixellnlmage(DlmageDestCol .DRow.DData): Bump to next row in the destination image DRow SUBSITrUTE SHEET (RULE 26)
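As a compact stand-alone illustration of the same idea, the fragment below combines two row-major 8-bit buffers pixel by pixel and clamps the result. The operation names and the small test data are illustrative; they are not the library's own enum values.

#include <stdio.h>

typedef enum { OpAdd, OpSub, OpMin, OpMax, OpAve } CombineOp;

/* Combine source into destination pixel by pixel, then clamp to 0..255. */
static void combine(const unsigned char *src, unsigned char *dst, int count,
                    CombineOp op)
{
    int i;
    for (i = 0; i < count; i++) {
        int s = src[i], d = dst[i], r;
        switch (op) {
            case OpAdd: r = d + s;            break;
            case OpSub: r = d - s;            break;
            case OpMin: r = (d < s) ? d : s;  break;
            case OpMax: r = (d > s) ? d : s;  break;
            default:    r = (d + s) / 2;      break;   /* OpAve */
        }
        if (r < 0)   r = 0;
        if (r > 255) r = 255;
        dst[i] = (unsigned char)r;
    }
}

int main(void)
{
    unsigned char a[4] = { 10, 200, 30, 250 };
    unsigned char b[4] = { 20, 100, 40, 20 };
    combine(a, b, 4, OpAdd);
    printf("%u %u %u %u\n", b[0], b[1], b[2], b[3]);   /* prints 30 255 70 255 */
    return 0;
}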
GEPRQCES.H
Image Processing Header File I' Geometric Processing Functions /~written in Turbo C 2.0 1 Misc user defined types/ typedef cnum (HorizMirror.Verd~irror) MirrorType: Geometric processes function prototypes 1/ void Scalelmage(BYTE huge *Inbnage. unsigned SCol. unsigned SRow.
unsigned SWidth. unsigned SHeight.
double ScalcH. double ScaleV, BYTE huge *Outimage, unsigned DCoI. unsigned DRow, unsigned Interpolate), void Sizelmage(BYTE huge *lnmage. unsigned SCoI. unsigned SRow.
unsigned SWidib. unsigned SHeight.
BYTE huge *Outlrnage.
unsigned DCoI. unsigned DRow.
:unsigned DWidr~h. unsigned DHeight.
unsigned Interpolate), void Rotatelmage(BYTE huge lInlmage. unsigned Col. unsigned Row, S S unsigned Width, unsigned Height. double Angle, BYTE huge *Outlmage. unsigned Interpolate); void Transiatclmage(BY'jE huge lInlmage, unsigned SCoI. unsigned SRow, unsigned SWidth. unsigned SHeight.
BYTE huge *Outlmage, unsigned DCoI. unsigned DRow, unsigned EraseFlag); void Mirrorlmage(BYTE huge lInlmage, unsigned SCoI. unsigned SRow.
unsigned SWidth. unsigned SHeight.
enum MirrorType WhichMirror.
BYTE huge *Outlmage, 4 unsigned DCol. unsigned DRow); SUBSTITUTE SHEET (RULE 26) 57 GEPROCF-s.c Image Processing Code Geometric Processing Functions 1 written in Turbo C 2.0 #include stdio.h #include conio.h A/nclude dos.h #include <alloc.h> Iinclude process.h I/nclude math.h A/nclude <graphics.h> #/include "misc.h* #include "pcx.h* #include "vga.h" #/include magesup.h" void SCalelmage(BYTE huge lInlmage. unsigned SCol. unsigned SRow.
unsigned SWidth. unsigned SHeight.
double Scalel-. double ScaleV.
BYTE huge 5 Outlmage.
:unsigned DCol. unsigned DRow.
unsigned Interpolate) unsigned DestWidth. DestHeight; unsigned Pt.A. PtB. Pic, PtD. PixelValue; register unsigned SPixelColNum. SPixelRowNum. DestCol. DestRow: double SPixelColAddr. SPixelRowAddr, double ColDelta. RowDelta; double ContribFromAandB. ContribFromCandD; DestWidth =ScalelH SWidth *DestHeight =ScaleV *SHeight+ :if (ParameterChreckOK(SCol.SRow.SCoI SWidth. SRow+ .SHeight."Scalelmage") ParameterCheckOK(DCol.DRow.DCo Dest Width.DRow +DesiHeight.-Scalelmage")) Calculations from destination perspective/ for (DestRow 0: DestRow DestHeicht; DestRow SPixelRowAddr =DestRow/ScaleV: SPixelRowNum =(unsigned) SPixelRowAddr; RowDelta SPixelRowAddr SPixelRowNurn- SPixeiRowNum SRow, for (DestCol 0. DestCol DestWidth: DestCol SPixelColAddr =DestCol/ScaleH: SPixelColNum (unsigned) SPixelColAddr: ColDelta SPixelColAddr SPixelColNum: SPixelColNum =SCoI; if (Interpolate) SPixelColNurn and SPixelRowNum now contain the pixel coordinates of the upper left pixel of the targetted pixel's (point X) neighborhood. This is point A below: A B x SUBSTITUTE SHEET (RULE 26) C D We must retrieve the brightness level of each of the four pixels to calculate the value of the pixel put into the destination image.
Get point A brightness as it will always lie within the input image area. Check to make sure the other points are within also. If so use their values for the calculations.
If not. set them all equal to point A's value. This induces an error but only at the cdges on an image.
PtA GePxlrmmiellaeS~xlo~mSie~wu) if (((SPixelColNum+l1) MAXCOLS) ((SPixelRowNum+l1)
MAXROWS))
PtB Ge~xlrmmg~nlaeSie~lu I .SPixelRowNum); PtC Ge~xlrmmg~amg.~xlo~mSie~wu 1): PtD =GeLPixelFromImage(IplmageSPixe~oj~um L SPixelRowNum 1); else All points have equal brightness PtB =PtC PtD PtA: Interpolate to find brightness contribution of each pixel in neighborhood. Done in bath the horizontal and vertical directions.
Cotibrmad ocla(dul)t- t)+PA ContribFrom~andB ColDelta*((double)PtD PtA) PtA: Pixel Value 0.5 ContribFromAandB (ContribFromCandD ContribFrornAandB)sRowDelta: Pixel Value GetPixel Frommage(InImaeSPixelCm olumwpim): 1* Put the pixel into the destination buffer *1 Pu~xlnmg(u~ae eEo DCol.DestRow DRow. Pixel Value): void Sizelmage(BYmE huge *lrdmage. unsigned SCoI. unsigned SRow.
unsigned SWidth. unsigned SHeight, BYTE huge *Outlmage.
unsigned DCoI. unsigned DRaw.
unsigned DWidth. unsigned DHeight, unsigned Interpolate) double HScale, VScale: Check for parameters out of range1 if (ParameterCheckOK(S Col. SRow. SCol SWidth.S Row SHeigiht Siemge)& ParameterCheckOK(DCol.DRow.DCol DWidth.DRow DHeight. Sizelmage")) Calculate horizontal and vertical scale factors required to fit specified portion of input image into specified portion of output image.
SUBSTITUTE SHEET (RULE 26) HScalc (double)DWidth/(double)SWidth, VScale (double)DHejghtI(double)SHeight; Call Scalelnuge to do the actual work sraemg~nmg.~lSo.~dhSegtHcl.~ae Qutlrmage,DCol.DRowjinerpolate); void Rotatelmage(BYTE, huge *lrdmage, unsigned Col. unsigned Row, unsigned Width. unsigned Height, double Angle, BYTE huge *Outlmage. unsigned Interpolate) register unsigned ImageCol. IntageRow: unsigned CenterCol, CenterRow, SPixelColNum. SPixelRowNuni: unsigned ColExtent. RowExtent. Pixel Value: unsigned PtA. PtB. PtC. PtD: double DPixelRelativeColNum. DPixelRelativeRowNum: double CosAngle, SinAngle. SPixelColAddr. SPixelRowAddr: double ColDelta. RowDelta; double ContribFronlAandB. ContribFromCandD; if (ParameterCheckOK(Col .Row, Col Width. Row Height. Rotatelmage")) Angie must be in 0..359.9 11 while (Angle 360.0) Angle 360.0: 1* Convert angle from degrees to radians/ Angle ((double) 3 .1 4 159/(double) 180.0); Calculate angle values for rotation I/ CosAngle =cos(Angic): *SinAngle sin(Angic); Center of rotation/ CenterCol =Cal Width/2: ~.CenterRow =Row Height/2; CalExtent Cal Width: RowEXtent Row Height: All calculations are performed from the destination image perspective. Absolute pixel values must be convented into inches of display distance to keep the aspect value correct when image is rotated. After rotation, the calculated display distance is converted back to real pixel values.
for (ImageRow Row: ImageRow RowExtent: ImageRow DPixelRclativeRowNum (doubleilmageRow Center-Row: I' Convert row value to display distance from image center/ DPixelRelativeRowNum
LRINCHESPERPLXELVERT:
for (ImageCol Col: ImageCol ColExtent: ImageCol DPixelRelativeCo]Num (double)lmageCol CenterCol: Convert col value to display distance from image center DPixelRelativeColNum LR JCHESPERPDXELHORIZ: Calculate source pixel address from destination SUBSTITUTE SHEET (RULE 26) pixels position.
SPixelColAddr =DPixclRelativeColNum*CosAngle- DPixelRelativeRowNum*SinAngle; SPixelRowAddr =DPixclRelativeColNum*SinAngle+ DPixelRelativeRowNum*Cos~ngle; Convert from coordinates relative to imtage center back into absolute coordinates.
Convert display distance to pixel location SPixelColAddr
LRPIXELSPERINCHHORJZ:
SPixelColAddr =CenterCol: SPixelRowAddr
LRPIXELSPERINCHVERT:
SPixelRowAddr CenterRow; SPixelColNum =(unsigned) SPixelColAddr: SPixelRowNum =(unsigned) SPixelRowAddr.
CalDelta SPixelColAddr SPixelColNum: RowDelta SPixelRowAddr SPixelRowNum: if (Interpolate) a. SPixelColNum and SPixelRowNum now contain the pixel coordinates of the upper left pixel of the targetted pixel's (point X) neighborhood. This is point A below: *A
B
C D We must retrieve the brightness level of each of the four pixels to calculate the value of the pixel put into the destination image.
Get point A brightness as it will always lie within the input image area. Check to make sure the other points are .within also, If so use their values for the calculations.
If not, set them all equal to point A's value. This induces an error but only at the edges on an image.
0/ PEA GetPixelFromlmage(lnmag.SPixelColNumSPixel~ow~um).
if (((SPixelColNum 1) MAXCOLS) ((SPixelRowNum 1) MAXROWS)) PtB GctPixelFromlmage(Inlmage.SPixelColNu I.SPixelRowNum): PtC Gc~xlrmmg~llaeSi ~]NmSie~ u+ I); PtD GetPixelFromlmage(IDmage.SPixelCol~um I.SPixelRowNum else All points have equal brightness/ PtB=PtC=PtD-PtA: Interpolate to find hrighfnPcc rnnf,'bution of ec i in neighborhood. Done in both the horizontal and vertical directions.
Cotia' nB=Clet*(dul)t
E)+PA
Contrib~rom~andB ColDelta((double)PtD PtA) PtA; Pixel Value =0.5 Contrib~romAandB (ContribFromCandD -ContribFromAandfl)*RowDclta.
SUBSTITUTE SHEET (RULE 26) else Pixel Value Ge~xlrmmg~nmg.~xlo~mSie~wu) Put the pixel into the destination buffer *I Caution: images must not overlap void Transl atelmage (BYTE huge *Ir~mage, unsigned SCol. unsigned SRow, unsigned SWidth. unsigned SHeight, BYTE huge *Outlmage, unsigned DCoI. unsigned DRow, unsigned EraseFlag) register unsigned SlmageCol. SImageRow. DestCol: :unsigned SColExtent. SRowExtent: P Check for parameters out of range/ if (ParametcrCheckOK(SCol.SRow.SCoI SWidth.SRow +SHeight."Transiatelmage") ParameterCheckOK(DCoIDRow.DCoI +SWidth.DRow +SHeight, Translatelmage SColExtent SCol+SWidth: SRowExtent =SRow+SHeight; for (SlmageRow SRow; SlmageRow SRowExtent: SlmageRow P* Reset the destination Column count every row DestCol DCol: *.for (SlmageCol SCoI: SimageCol SColExtent: SlmageCol 0**U P Transfer byte of the image data between buffers1 D~w ;~w *0I rsreseiid blot out original image void Mirrorlmage(BYTE huge *Inlmage.
unsigned SCol. unsigned SRow.
unsigned SWidth. unsigned SHeight.
enum MirrorType WhichMirror.
B ugv v 'u u M-3ge.
unsigned DCoI. unsigned DRow) register unsigned SlmageCol. SlmageRow. DestCbl:, unsigned SColExtent. SRowExtent; Check for parameters out of range if (ParameterCheckOK(SCol.SRow.SCoI +SWidth.SRow +SHeight. "MirrorImage") SUBSTITUTE SHEET (RULE 26) 62 ParalneterCheckOK(DCoDRow.DCol +SWidth.DRow SHeight. "Mirrorlmage")) SColExtern SCoI+SWidt: SRowExtent SRow+SHcight: switch(WhichMjrror) case HorizMirror: for (SImageRow =SRow: SlmageRow SRowExtcnt: SlmageRow+ P* Reset the destination Column count every row/ DestCoi DCol SWidth: for (SImageCol SCol: SImageCol SColExtent. SlmageCol Transfer byte of the image data between buffers1 PutPixellnmage(Outmage.-DestCo.D~ow GctPixelFromlmage(1nlmage.SlmageCol.Slmag~eRow)); Bump to next row in the destination image 0/ DRaw break: case VertMirror: DRow (SHeight-l): for (SlmageRow =SRow: SImageRow SRowExtent: SlmageRow P* Reset the destination Column count every row DestCol =DCoI: for (SImageCol SCol: SlmageCol SColExtent: SimageCol P* Transfer byte of the image data between buffers/ PutPixeflnlmage(outlmage.DestCoI +4 .DRow.
GetPixelFromimage(1nlmage.Slmage~ 0 o .SlmageRow)); f* Bump to next row in the destination image DRowbreak: a a a a a a a. a. a a.
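The A/B/C/D neighbourhood weighting used by both ScaleImage and RotateImage is ordinary bilinear interpolation. A compact stand-alone form of that step, written against a plain row-major 8-bit buffer rather than the library's image helpers, is sketched below; the function name is illustrative.

#include <stdio.h>

/* Bilinear interpolation at fractional coordinates (x, y):  A and B are the  */
/* upper pair of neighbours, C and D the lower pair; the two horizontal       */
/* blends are themselves blended vertically.  The caller must keep (x, y)     */
/* at least one pixel inside the image so the four neighbours exist.          */
static double bilinear(const unsigned char *img, int width, double x, double y)
{
    int    col = (int)x,  row = (int)y;
    double dx  = x - col, dy  = y - row;
    double a = img[row * width + col];
    double b = img[row * width + col + 1];
    double c = img[(row + 1) * width + col];
    double d = img[(row + 1) * width + col + 1];
    double top    = a + dx * (b - a);
    double bottom = c + dx * (d - c);
    return top + dy * (bottom - top);
}

int main(void)
{
    unsigned char img[2 * 2] = { 0, 100, 50, 150 };
    printf("%.1f\n", bilinear(img, 2, 0.5, 0.5));   /* prints 75.0 */
    return 0;
}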
SUBSTITUTE SHEET (RULE 26) 1* IMAGESUP.H1 Image Processing Header-File/ Image Processing Support Functions 1 written in Turbo C 2.0 This file includes the general equates used for all of the image processing code in part two of this book. Throughout these equates, a 320x.200 256 color image is assumed. If the resolution of the processed pictures change, the equates MAXCOLS and MAXROWS must changze accordingly.
Pixel Sample Information and Equates/ #dcfine MAXSAMPLEBITS 6 6 bits from disitizer/ #define MINSAMPLEVAL 0 Min sample value 0 Max num of sample values/ #define MAXQUANTLEVELS (I MAXSAMPLEBITS) Max sample value 63 #define MAXSAMPLEVAL (MAXQUANTLEVELS-l) Imagze Resolution Equates/ *#define MINCOLNIJM 0 Column 01 #dcfine MAXCOLS LRMAXCOLS 320 total columns *#define MAXCOLNUM (MAXCOLS-l) Last column is 319 #define MINROWNUM 0 1* Row 0 a 0 #define MAXROWS LRMAXROWS 200 total rows/ #define MAXROWNUM (MAXROWS-l) P Last row is 199*/ :#define RASTERSIZE ((long)MAXCOLS *MAXROWS) #define MAXNUMGRAYCOLORS
MAXQUANTLEVELS
P* histogram equates/ #define HISTOCOL 0 #define HISTOWIDTH 134 #define HISTOHEIGHT 84 #define BLACK 0 #define WHITE 63 *#define AXISCOL (HISTOCOL+3) #/define AXISROW (H-ISTOROW #/define AXISLENGTH MAXQUANTLEVELS*2-l #define DATACOL AXISCOL #define DATAROW AXISROW-l #define MAXDEFLECTION (HISTOHEIGHT-lO) External Function Declarations and Prototypes/ void Copylmagc(BYTE huge *SourceBuf. BYTE huge 'DestBuf): B YTE Ge~xirnljg( E huge -image. unsigned Col. unsigned Row): CompletionCode PutPixellnlmage(BYTE huge *Image, unsigned Cal.
unsigned Row. unsigned Color): CompleuionCode DrawHLine(BYTE huge *Image. unsigned Col. unsigned Row, unsigned Length. unsigned Color): SUBSTITUTE SHEET (RULE 26) 64 CompletionCode Draw VLine(BYTE huge *Image. unsigned Col. unsigned Row.
unsigned Lengih. unsigned Color); void ReadimageAreaToBuf (BYTE hugc *Image. unsigned Col. unsigned Row, unsigned Width, unsigned Height, BYTE huge *Buffer); void WritelmageAreaFromBuf (BYTE huge *Buffer. unsigned BufWidth.
unsigned ButHeight. BYTE huge *Image, unsigned ImageCol, unsigned ImageRow): void ClearlmageArea(BYTE huge lImage.unsigned Col. unsigned Row, unsigned Width, unsigned Height.
unsigned PixelValue); CompletionCode ParamcterCheckoK(unsigned Col. unsigned Row, unsigned ColExtent. unsigned RowExtent.
char *ErrorStr): *ego* 0.4 040 SUBSTITUTE SHEET (RULE 26) 1* IMAGESUP.C
S
Image Processing Support Funictions /~written in Turbo C 2.0 #include stdio.h #include process.hIt> #include conio.h #include dos.h #include alloc.h #include mem.h #include graph ics.h #include "misc.h" #include "pcx.h" #include "vga.h" #include 'imagcsup.h* extern struct PCX-File PCXData: extern unsigned ImageWidth: unsigned ImageHeight: Image Processing Support Functions See text for details.
Copy a complete image from source buffer to destination buffer Os:.void Copylmage(BYTE huge *SourceBuf. BYTE huge *DestBuf) movedata(FPSEG(SourceBuf).FPOFF(SourceBuf), 00 0 ~FP SEG(DestBuf)FP-OFF(DesLuf) (unsigned) RASTERSIZE): .00* 0 NOTE: to index into the image memory like an array, the index value must be a long variable type, NOT just cast to long.
0. BYTE GetPixelFromimage(BYTE huge 'Image, unsigned Col. unsigned Row) unsigned long PixelBufOffset: if((Col ImageWidth) (Row Imagel-eight)) PixelBufOffset Row; I* done to prevent overflow PixelBufOffset ImageWidth: PixclBufOffset =Col: return(Inlage[PixelBufOffset]); pritf~e~ielrolmgeError: Coordinate out of range nn printf( Col Row =%d\n".Col.Row):.
return(FAL-SE); CompletionCode PutPixellnlmage(BYTE huge *Image. unsigned Cal, unsigned Row. unsigned Color) SUBSTITUTE SHEET (RULE 26) unsigned long PixelBufoffset: iW(Col ImnageWidth) (Row Imragel-eigh) PixelBufOffset =Row; done to prevent overflow PixelBufOffsct Image Width; PixclBufOffset Col: Imagec[Pixelnufoffsetc Color; rccurni(TRUE); else printf("PutPixelinimage Error: Coordinate out of rangc\n"): printf(" Col Row %d\n".Col.Row); return(FALSE): NOTE: A length of 0 is one pixel on. A length of I is two pixels on. That is why length is incremented before being used.
.CompletionCode DrawHLine(BYTh- huee 'Image, unsigned Col. unsigned Row.
:unsigned Length. unsigned Color) if ((ol ImageWidth) ((Col Length) lmageWidth) (Row Imagel-eight)) Length while(Length..) PU*ie* *ae maeo .Row.Color); return(TRUE); else a aprintf("DrawHLine Error: Coordinate out of range\n-).
printfU- Col Row Length %d\nf.Col.Row.Length): rczum(FALSE).
**CompletionCode DrawVLine(BYTE hugce *Image. unsigned Col. unsigned Row.
unsigned Length. unsigned Color) if ((Row Imagel-eight) ((Row+Length) Imagel-eight) (Cal ImageWidth)) Length while(Length-) PutPixcllnlmage(Image. Col, Row Color): retumn(TRUE); else printfC'DrawVLine Error: Coordinate out of rangze\n'): printf(" Cal %d Row %d Length d\n" .Col. Row, Length): rerurn(FALSE); void ReadlrnageAreaToBuf (BYTE huge *Image, unsigned Col. unsigned Row.
SUBSTITUTE SHEET (RULE 26) unsigned Width, unsigned Height. BYTE huge *Buffer) unsigned long PixelBufoffset
OL:
register unsigned ImageCol. ImageRow; for (ImageRow =Row: ImageRow Row Height. ImageRow for (IntagcCo Col: ImageCol Col +Width: IrnageCol+ Buffer[PixelBufOffset+ +1 GetPixeiFromimagecimaelrgecolImageRow); void WritelmageAreaFrom~uf (BYTE huge *Buffer. unsigned BufWidth, unsigned Buffleight. BYTE huge *Image, unsigned ImageCol. unsigned ImageRow) unsigned long PixelBufOffset: register unsigned BufCol. BufRow. CurrentImageCol: for (BufRow 0: BufRow BufHeight: BufRow CurrenimageCol ImageCol; for (BufCol 0: BufCol BufWidth: BufCol PixelBufOffset (unsigned long)BufRow*BufWiclth +BufCol **PutPixclnmage(ImageCurrentmaeColzmageRowBuff[ Pix ~f~ffej); Currentimagecol IrageRow void ClearlmageArea(BYTE huge lImage,unsigned Col. unsigned Row.
unsigned Width. unsigned Height.
unsigned PixelValue) :register unsigned BufCol. BufRow-: (BufRow BufRow Heieht: BufRow+ for (BufCol 0: BufCol Width: BufCol± I PutPixelInImaee(lmaqe.Buf~of +Col, BufRow Row. Pixel Value This function checks to make sure the parameters passed to the image processing functions are all within range. If so a TRUE is returned. If not, an error message is output and the calling program is terminated.
CompletionCode ParameterCheckoK(unsigned Col, unsigned Row.
unsigned ColExtent. unsigned RowExtent.
char *FunctionName) if ((Col MAXCOLNUM) ii(Row MAXROWNUM)II (CalExtent MAXCQLS) I (RowExtent MAXROWS9)) restarecrtmodeQ): printf(Tarameter(s) out of range in function: %s\n".FunctionName).
printf(* Col Row ColExtent %d RowExtent Col. Row. CalExtent. RawExtent): SUBSTITUTE SHEET (RULE 26) exit(EBadPai-ms): return(TRUE); S. S S
S
SUBSTITUTE SHEET (RULE 26) PTPROCES.H Image Processing Header File Point Processing Functions written in Turbo C 2.0 extern unsigned HistogramNMAXQUANTLEVEL-S]; f* Function Prototypes for support and histogram functions void InitializeLUT(BYTE *Lo:okUpTable); void PiTransformn(BYTE huge *lmageData, unsigned Col.
unsigned Row, unsigned Width.
unsigned Height. BYTE *LookUpTable); void GenHistogram(BYTE huge *jmagc~ata unsigned Col, unsigned Row, unsigned Width, unsigned Height); void DisplayHist(BYTE huge *ImaeData unsigned Col.
unsigned Row. unsigned Width.
unsigned Height); P Point transform functions/ void AdjlmageBrightness(BYTE huge *ImageData. short BrightnessFactor.
unsigned Col. unsigned Row, unsigned Width, unsigned Height); void Negatclmage(BYTE huge *lmageData. unsigned Threshold.
unsigned Col. unsigned Row, unsigned Width, unsigned Height); *void Threshoidlmage(BYTE huge *ImageData. unsigned Threshold, unsigned Col. unsigned Row, unsigned Width, unsigned Height).
void StretchlmageContrastBYT huge lImageData. unsigned *HistoData.
unsigzned Threshold.
unsigned Col. unsigned Row.
unsigned Width, unsigned Height): SUBSTITUTE SHEET (RULE 26)
PTPROCES.C
Image Processing Code S Point Process Functions written in Turbo C 2.0 #includ 5* 55sSS SS S 5= S *5 55S5 5 #include stdlio.h #include <stdlib.h #include <coni.h> #include <alloc.h> #include process.h #include <cgraphics.h #include 'misc.h" #include *pcx.h" #include "vga.h" #include "imagesup.h" Histogr am storage location unsigned HisbogramfMAXQUANTLEVELSI: Look Up Table (LUT) Functions Initialize the Look Up Table (LUT) for straight through mapping. If a point transform is performed on an initialized :LUT. output data will equal input data. This function is usually called in preparation for modification to a LUT.
void Initial izeLUT(BYTE *LookUpTable) register unsigned Index: for (Index 0: Index MAXQUANTLEVELS: Index++) LookUpTable[lndex] Index: This function performs a point transform on the portion of the image specified by Cal. Row. Width and Height. The actual transform is contained in the Look Up Table who address is passed as a parameter.
void PtTransform(BYTE huge *lmageData. unsigned Col. unsigned Row.
unsigned Width, unsigned Height. BYTE *LookUpTable) register unsigned ImageCol. ImageRow: register unsigned ColExtent. RowExtent; COlExtent Col +Width: SLJ1STTTUTE '-HEFT (ROLF 26) A 71 RowExtent Row+Height: if (ParamcterCheckOK(Col.Row.ColFxtent.RowExtent TtTransforng)) for (IrnageRow Row: ImageRow RowExtent: IniageRow for (ImageCol =Col. IrnageCol ColExtent: lInageCol PutPixellalmage(ImageData.ImageCoi .liageRow, L~o~~beGticFohg~mg~t.m-eo~m-eo)) I* start of histogram functions This function calculates the histogram of any portion of an image.
void GenI-istogram(BYTE huge *ImageDa~a unsigned Col. unsigned Row.
unsigned Width. unsigned Height) register unsigned ImageRow. ImageCol. RowExtent. ColExtent; register unsigned Index: clear the histogram array' for (Index Index MAXQUANTLEVELS: Index+ Histogram[Indexj 0: RowExtent =Row+Height; CalExtent =Col+Width: if (ParameterCheckOK(Col.Row ColExtent.RowExtent "GenHistogram")) calculate the histogram/ *for (ImageRow Row: ImageRow RowExtent: ImageRow for (IniageCol Gol: ImageCol CalExtent: ImageCol Histogram(GetPixelFromimage(Ima2eDaaImageCol.Image~ow)I 1: This function calculates and displays the histogram of an image or partial image. When called it assumes the VGA is already tn mode 13 hex.
void DisplayHist(BYTE huge lImageData. unsigned Col. unsigned Row.
unsigned Width, unsigned Height) BYTE huge *Buffer.
register unsigned Index. LineL-ength. XPos, YPos: unsigned MaxRepeat: Allocate enough memory to save image under histogram1 Buffer -(BYTE huge farcalloc(QIon)HISTOWIDTHsHSTOHEIGHTsjzeof(BYTE)); if (Rtjffr hTTT printfC"No buffer memory\n"); exit(ENoMemory); 1* Save a copy of the image S Buffer); SUBSTITUTE SHEET (RULE 26) Set VGA color register 65 to red. 66 to green an~d 67 to blue so the histogram can be visually separated from the continuous tone image.
SeLAColorReg(65,63.0,O); SetAColorReg(66.0.63 SetAColorReg(67,0.0,63); Calculate the histogram for the image GcnHistogram(ImageDar2. Col, Row. Width. Height); MaxRepeat 0; Find the pixel value repeated the most. It will be used for scaling.
for (Index Index MAXQUANTLEVELS: Index MaxRepeat (Histogram[lndexj MaxRepeat) .*Histogram[ Index): MaxRepeat: P/ Fill background area of histogram graph ClalaeraIaeaaHSOO.ISOO ,ITWDHHSOEGT6) Draw the bounding box for the histoeram Drw*n Iae~t.ISOO.HSO1
HSOHIH-.BLACK);
~DrawVLinc(ImageData.HISTOCOL+ HISTOWIDTH- I.HISTQROW.HISTOHEIGHT..I
.BLACK);
DrawHLine(ImaeData.HISTOCQL.HISTOROW HISTOHEIGHT- I-ISTOWIDTH- 1. BLACK): Drw~n(mg~~,ITCL ITRW IITWDH 1.BLACK): P/ Data base line DrawHLine(ImaeDaa.AXISCOL.AXISROW.AXISLENGTH.WHITE): DrawHLine(ImageData.AXISCOL.AXISROW 1. AXIS LENGTH. WHITE): Now do the actual histogram renderin2 into the 4 image buffer.
for (Index Index MAXQUANTLEVELS: Index++i) Linel-ength (unsigned)(((long) Histogram[Index] MAXDEFLECTION)/ (long) MaxRepeat); XPos DATACOL Index *2; YPos DATAROW LineLength: DrawVLine(ImageData.XPos.YPos.LineLength.6): Display the image overlayed with the histogram IJispiayimage~nBuf(imageData.Nu VtAUNL 1WAd- FOR-K±Y): After display, restore image data under histogram/ WritelmageAreaFromBuf(Buffer.HISTOWIDTH.HISTOHEIGHT ImageData,
HISTOCOL.HISTORQW);
farfrce((BTE far *)Buffer)- SUBSTTUTE SHEET (RULE 26) Various Point Transformation Functions void Adj ImgBrighss YTE huge *ImageData. short B rightness Factorunsigned Col. unsigned Row, unsigned Width. unsigned Height) register unsigned Index; register short NewLevel: BYTE LookUpTableMAXQUANTLEELS) for (Index =MINSAMPLEVAL. Index MAXQUANTLEVELS: Index++) NewLevel Index BrightnessFactor: NewL-evel (NewLevel MINSAMPLEVAL) MINSAMPLEVAL;New~evel: NewLevel (NewL-evel MAXSAMPLEVAL)? MAXSAMPLEVAL:NewLevel; LookUpTable [Index I Newl-evel; PtTransform(imageData. Col, Row, Width. He ight, Look UTable Thi fu ci n wl.e aea m g p x lb i e .T r s od i the vau*fi a ed t h r h e aainbgn
.I
Tis fneacinwil egtE e nimage a ixi nyie Threshold.
unige Co.usgndRw becomes63and igvled 63 th becom e IHresgholtsgrae void regte unsgedB Index:eat.unind hesod BYTE LookUpTabe[MAXQUANTLEVEL-S) 0 0 0 P Straight through mapping initially/ Initial izeLUT(LookUpTable); P* from Threshold onward, negate entry in LUT for (Index Threshold. Index MAXQUANTLEVELS. Index LookUpTable[Index] =MAXSAMPLEVAL Index: This function converts a gray scale image to a him-~ image -with each pixel either on (WHMr) or off (BLACK). The pixel level at which the cut off is made is controlled by Threshold. Pixels in the range 0..Threshold-l become black while pixel values between Threshold. .63 become white.
SUBSTITUTE SHEET (RULE 26) 74 void Thresholdlmage(BYTE huge lrnageDaia. unsigned Threshold.
unsigned Col. unsigned Row, unsigned Width, unsigned Height) register unsigned Index: BYTE LookUpTabe[MAQUANT~EE].) for (index MINSAMPLEVAL: Index Threshold: Index+ LookUpTable(Index)
BLACK:
for (index Threshold: Index MAXQUANTLEVELS; Index+ LookUpTable(Index]
WHITE:
PtTransform(ImageData, Col. Row, Width. Height. Look UpTable): void StretchlxnageConcrast (BYTE huge *ImageData unsigned *HistoData unsigned Threshold.
unsigned Col. unsigned Row, A egiter nsinedunsigned Width, unsigned Height) reitrusge Index. NewMin. NewMax: double StepSiz. Step Val: BYTE LookUpTabIc[MAXQUANTEVELS].
Search from the low bin towards the high bin for the first one that exceeds the threshold for (Index=Q0: Index MAXQUANTLEVELS: Index++) if (HistoData([index) Threshold) break: :NewMin Index: Search from the high bin towards the low bin for the first one that exceeds the threshold for (Index =MAXSAMPLEVAL: Index NewMin: Index--) if (HistoData (Index) Threshold) break: NewMax Index: StepSiz (doube)MAXQUANTLEVELS/(double)(NewMax-NwMin StepVal =0.0: values below new minimum are assigned zero in the LUT1 for (Index= 0: Index NewMin: Index++) LookUpTable(Index]
MINSAMPLEVAL:
/4 values above new maximum are assigned the max sample value for (Index NewMax 1: Index MAXQUANTLEVELS: Index LookUpTablef Index] MAXSAMPLEVAL: values between the new minimum and new maximum are stretched1 for (Index -NewMin: Index NewMax: Index++) SUBSTITUTE SHEET (RULE 26) Look UpTable (Index]I StepVal.
StepVal StepS iz; Look Up Table is now prepared to paint transform the image data.
PtTransform(ImageData, Col, Row, Width, Height, LookUpTable);
}
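The stretch itself amounts to building a linear look-up table between the first and last histogram bins that exceed the threshold. A minimal sketch of that table construction, using plain integer types rather than the library's, is given below; the names and test data are illustrative.

#include <stdio.h>

#define LEVELS 64   /* matches MAXQUANTLEVELS for 6-bit samples */

/* Build a LUT that maps [new_min..new_max] linearly onto the full range,  */
/* where new_min and new_max are the first and last bins whose counts      */
/* exceed the threshold.                                                   */
static void build_stretch_lut(const unsigned *hist, unsigned threshold,
                              unsigned char *lut)
{
    int i, new_min = 0, new_max = LEVELS - 1;
    double step, val = 0.0;
    while (new_min < LEVELS - 1 && hist[new_min] <= threshold) new_min++;
    while (new_max > new_min   && hist[new_max] <= threshold) new_max--;
    step = (double)LEVELS / (double)(new_max - new_min + 1);
    for (i = 0; i < new_min; i++)          lut[i] = 0;
    for (i = new_max + 1; i < LEVELS; i++) lut[i] = LEVELS - 1;
    for (i = new_min; i <= new_max; i++) {
        lut[i] = (unsigned char)val;
        val += step;
    }
}

int main(void)
{
    unsigned hist[LEVELS] = { 0 };
    unsigned char lut[LEVELS];
    int i;
    for (i = 20; i <= 40; i++) hist[i] = 50;   /* pixel values bunched in 20..40 */
    build_stretch_lut(hist, 10, lut);
    printf("lut[20]=%u lut[30]=%u lut[40]=%u\n", lut[20], lut[30], lut[40]);
    return 0;
}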

Claims (4)

1. A method of encoding a television broadcast signal comprising the steps of generating a depth signal for each pixel and adding the depth signal as a component of the broadcast signal.
2. A method of decoding a television broadcast signal encoded according to claim 1 comprising the step of extracting the depth signal component.

3. A method of encoding a television broadcast signal as claimed in claim 1 in which the step of generating the depth signal comprises a triangulation technique using two spaced cameras.

4. A method of encoding a television broadcast signal as claimed in claim 1 in which the step of generating the depth signal comprises the use of non-optical depth using two spaced cameras.

A method of encoding a television broadcast signal substantially as herein described with reference to any one of the embodiments as that embodiment is illustrated in the drawings.

DATED this Twenty-fourth Day of May, 1999
Visualabs Inc.
Patent Attorneys for the Applicant
SPRUSON FERGUSON
R:\LIBL\00156.doc
AU31221/99A 1995-01-04 1999-05-24 3-D imaging system Abandoned AU3122199A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US368644 1995-01-04
US08/368,644 US5790086A (en) 1995-01-04 1995-01-04 3-D imaging system
AU42953/96A AU702635B2 (en) 1995-01-04 1995-12-28 3-D imaging system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
AU42953/96A Division AU702635B2 (en) 1995-01-04 1995-12-28 3-D imaging system

Publications (1)

Publication Number Publication Date
AU3122199A true AU3122199A (en) 1999-08-05

Family

ID=25626205

Family Applications (2)

Application Number Title Priority Date Filing Date
AU31222/99A Abandoned AU3122299A (en) 1995-01-04 1999-05-24 3-D imaging system
AU31221/99A Abandoned AU3122199A (en) 1995-01-04 1999-05-24 3-D imaging system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
AU31222/99A Abandoned AU3122299A (en) 1995-01-04 1999-05-24 3-D imaging system

Country Status (1)

Country Link
AU (2) AU3122299A (en)

Also Published As

Publication number Publication date
AU3122299A (en) 1999-07-22

Similar Documents

Publication Publication Date Title
AU702635B2 (en) 3-D imaging system
EP1057070B1 (en) A multi-layer display and a method for displaying images on such a display
EP0739497B1 (en) Multi-image compositing
US6750904B1 (en) Camera system for three dimensional images and video
US6477267B1 (en) Image conversion and encoding techniques
US7054478B2 (en) Image conversion and encoding techniques
US4925294A (en) Method to convert two dimensional motion pictures for three-dimensional systems
US5715383A (en) Compound depth image display system
JPH05210181A (en) Method and apparatus for integral photographic recording and reproduction by means of electronic interpolation
US20030146883A1 (en) 3-D imaging system
CN1231071C (en) Stereoscopic system
KR100351805B1 (en) 3d integral image display system
CN116990983A (en) Stereoscopic display device based on viewpoint morphology record
AU3122199A (en) 3-D imaging system
CN112949801A (en) Three-dimensional code based on micro-lens array, three-dimensional code generation method and three-dimensional code identification method
CN1113546C (en) Time division computerized stereo image display system
JPH0447377A (en) Computer three-dimensional graphic system
Chamberlin et al. Electronic 3-D imaging

Legal Events

Date Code Title Description
MK4 Application lapsed section 142(2)(d) - no continuation fee paid for the application