AU738534B2 - Computer stereo vision system and method - Google Patents

Computer stereo vision system and method

Info

Publication number
AU738534B2
AU738534B2 AU73316/96A AU7331696A
Authority
AU
Australia
Prior art keywords
data
pictures
stored
shapes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU73316/96A
Other versions
AU7331696A (en)
Inventor
Moshe Razon
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Publication of AU7331696A
Application granted
Publication of AU738534B2
Anticipated expiration
Legal status: Ceased (current)

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0242Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/239Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00Measuring distances in line of sight; Optical rangefinders
    • G01C3/10Measuring distances in line of sight; Optical rangefinders using a parallactic triangle with variable angles and a base of fixed length in the observation station, e.g. in the instrument
    • G01C3/20Measuring distances in line of sight; Optical rangefinders using a parallactic triangle with variable angles and a base of fixed length in the observation station, e.g. in the instrument with adaptation to the measurement of the height of an object
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0251Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0268Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D1/0274Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means using mapping information stored in a memory device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/15Processing image signals for colour aspects of image signals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • G06T2207/10021Stereoscopic video; Stereoscopic image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/189Recording image signals; Reproducing recorded image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0077Colour aspects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0081Depth or disparity estimation from stereoscopic image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0085Motion estimation from stereoscopic image signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Theoretical Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Automation & Control Theory (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Measurement Of Optical Distance (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Description

COMPUTER STEREO VISION SYSTEM AND METHOD

Field of the Invention

This invention relates to computer vision and identification of what is seen in real time by means of cameras, a computer, etc. in general, and to 3-D computer vision by means of two or more "stereo" cameras in particular.
Background of the Invention

The need for computer vision is an everyday necessity for resolving problems and for use by man in practically all fields: research, astronomy, robotics, industry, agriculture, manufacture, services, driving security, assistance to the blind, detection of phenomena, etc.
Today, as the volume of both internal and external computer memory grows larger, the physical dimension of computer components is continuously being reduced, and various types of high-speed processors exist. Consequently, if one could create a system and a method for computer vision that would identify anything it sees in real time, then everything would become easier and simpler.
Present Situation

Present computer vision, which makes use of one or two cameras, focuses on the vision of individual, defined, known and mainly static object(s). Their identification is lengthy, comparative (picture against picture), partial and narrowly focused, and does not analyze and identify everything that is seen at the speed of shooting. It requires and uses many devices, such as sensors, lighting, measuring gauges, etc., and is cumbersome, limited, insufficiently efficient and does not provide satisfactory solutions.
Objectives of the Invention

The purpose of this invention is to provide a system and a method enabling analysis and identification of all the forms viewed, contemporaneously and at the pace of filming, by means of cameras connected to a computer, with means for computing dimensions, motion and other perceptible features.
Furthermore, the purpose of this invention is to enable any automated device (robots, tools, computers, etc.) to see, by means of varied and appropriate means of vision, any tangible thing and phenomenon, similarly to the way man can see and identify them; to teach them the basic terms of artificial intelligence they need in order to communicate with their environment the way man communicates with language; and to perform almost any action, task and work that man does, but more accurately, efficiently, faster, better, etc., around the clock, anywhere, and in places which are physically difficult to access, dangerous, inaccessible, boring, etc.
Furthermore, the purpose of this invention is to allow storage, broadcasting and/or transmission, via normal data transmission lines, of real-time 3-D pictures, so that a person can "see" that picture from a remote location, by means of a decoding software program and/or multimedia, glasses and/or special devices designed for 3-D vision, projection, presentation and even enlargement.
The objective of this invention is to allow for the construction of devices according to it, while using equipment, systems, circuits and electronic components, basic software, etc., existing on the market in one form or another, so that by their connection, combination, adaptation, extension, etc., devices can be created and/or assembled according to this invention, including the ability of adaptation to circumstances. Any average electronics engineer, computer engineer, systems analyst, etc., will be able to design, assemble and construct devices according to this invention.
One of the advantages of this invention is its simplicity, allowing it to be used with minimal limitations; only this invention can fulfill all the objectives.
The Invention

The invention consists of an innovative system and method for stereo computer vision:

1. Any material written as explanation to the software, and which does not form part of the invention process as a whole, is a sample proposal only. Its purpose is to illustrate the process of operation of computer vision, and it does not derogate from the essence of the invention.

2. The explanation given below is general, since there will be various types, models and sizes of computer vision systems (large, small, medium, Phillips, etc.), in accordance with the data that the computer vision will have to transmit to the customer it serves, but the basic operating principle in all of them is identical.
3. Computer vision is a means of offering a service of "viewing". Its task is to see a section in space, to collect data, definitions and terms in connection with it, while preserving the picture of the view and/or while changing the viewed picture into codes and data, to decipher it as necessary, to store in a register whatever is needed, and to transmit onwards the object it has seen or the relevant data.
3-D data and multimedia may be stored by storing the changing data at any shooting pace.
4. There is a difference between the two systems and methods in the scanning and treatment, but in principle they are identical.
In the system and method of picture storage, the comparison picture must be a spatial picture with part of the collected data referring to each and every dot (such as color, Wr - the difference between the two cameras with regard to the same dot, etc.). The pictures of the cameras are updated with the photograph at coordinates, and the scanning of the camera picture against the space picture relates to correlation and relies on the location of the space picture coordinates as compared to the camera picture.
With this method, the pixels of one of the cameras are matched by means of the devices, with adjustment of the coordinates, in an orderly manner, at the pace of shooting and/or other, against the pixels of a previous picture stored in the spatial memory register. As long as there is identity and correspondence, everything continues; any new movement and/or shape in the video-registered picture creates lack of correspondence and updating, and then the pixels of the two cameras are compared, pixel against pixel. The data of the identical pixels are recorded in the spatial memory for each pixel - color, Wr difference, etc.; the dots of non-correspondence in the picture stored in the spatial memory are registered as depth lines in the spatial picture, and the dots of the second picture are registered separately, in connection; and then matching against the spatial memory begins again, etc.
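A minimal Python sketch of this update loop, assuming 8-bit grayscale frames held as 2-D lists; all identifiers are illustrative, and the horizontal correspondence search that would yield Wr is omitted:

    def update_spatial_memory(spatial, left, right, tolerance=2):
        """Compare the new left frame against the stored spatial picture;
        where they disagree, re-match left against right pixel by pixel."""
        for y, row in enumerate(left):
            for x, value in enumerate(row):
                if abs(value - spatial[y][x]["color"]) <= tolerance:
                    continue  # identity/correspondence: nothing to update
                # lack of correspondence: compare the two cameras directly
                if abs(value - right[y][x]) <= tolerance:
                    # identical pixels: record the color in the spatial memory
                    spatial[y][x] = {"color": value, "depth_line": False}
                else:
                    # non-corresponding dot: register as a depth-line dot
                    spatial[y][x] = {"color": value, "depth_line": True}

    spatial = [[{"color": 0, "depth_line": False} for _ in range(4)] for _ in range(2)]
    left = [[0, 0, 9, 9], [0, 0, 0, 0]]
    right = [[0, 0, 9, 5], [0, 0, 0, 0]]
    update_spatial_memory(spatial, left, right)
    print(spatial[0][2], spatial[0][3])  # matched dot, then a depth-line dot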
Further treatment is similar and continues according to the same principle of encoded data detection and comparison as detailed in the above preferred example, but with a different approach, adjusted to the manner of scanning and data collection.

5. The data collection process takes place in two ways: A. Comparison of the dots of one picture against the dots of the other picture of the same pace, and/or of a datum against a datum among the data of two consecutive pictures, for the purpose of detection of the Wr, matching of coordinates, motion, correspondence, identification, etc. B. Collection/detection of data among the existing data of the same picture, such as frames, geometrical lines, geometrical shapes (frames), etc.; and based on all the data and/or a part thereof, conclusions are drawn, as well as definitions, etc., for further treatment, for artificial intelligence, for a matching and identification key, and for communication.
6. If needed, part of the vision phases may be used for data transmission, for example in 3-D broadcasting: by dividing each type of data into angle, movement and data connection, the transmission bandwidth and/or the like may be reduced, and in another place the codes and the data belonging to them may be translated/deciphered and displayed and projected in various ways so as to obtain a 3-D picture, which may even be enlarged.
7. Since computer vision is supposed to serve a customer, and the customer is supposed to perform limited operations, his work space is usually confined; therefore, the computer vision system adapted for that customer will have to provide him with the data he needs for fulfilling his task.

8. Computer vision and/or the robot (preferably) may use additional external aids (also for the purpose of deciphering), such as the memory system, the customer's measurements, audio, touch (heat, strength, pressure, wind blow, etc.), taste, smell, radiation, etc., which may be external to the system or to the customer.

Between the customer's computer vision system and the aids there will be reciprocal relations, compatibility, reference, consideration, reciprocity, data transmission, etc., as may be required.
9. The system can be combined with a robot and its own and/or a separate computing device which provides the robot with data, etc.

10. During data collection, basic terms in artificial intelligence are detected. These basic terms are intended to assist the deciphering system and the robot computer to decipher pictures, phenomena, etc., as well as to inform and communicate with the environment in which the robot operates, such as other similar robots and/or human beings.
List of Figures

Fig.-1 is a schematic sketch of a preferred system, including possible connections, in the form of a column diagram;

Fig.-2 is a flow chart of a preferred system for comparative treatment of data received from the cameras and their transmission to the memories, encoded with the detected data;

Fig.-3 is a horizontal cross section that illustrates schematically the central viewing axes, the fields of sight and the parallel confining lines of the fields of sight, and the optic images of the system described in Fig.-1;

Fig.-A/3 (x4 enlargement) shows the parallel optic images of Fig.-3;

Fig.-4 is a horizontal cross section that shows schematically the fields of sight and the shapes viewed (of the system described in Fig.-1) in various sizes and at various distances;

Fig.-A/4 (x4 enlargement) represents pictures of the shapes viewed in Fig.-4, viewed simultaneously by both the right and the left cameras of the system described in Fig.-1.
Photographing (Filming) and the Cameras

1. A pair of identical cameras 4 (Fig.-1), aligned, coordinated in shooting angle (including variable angle and enlargement/reduction), creating for the cameras optical parallel fields of sight (Fig.-3) concurrent on a common plane, hereinafter referred to as the horizontal plane. In other words, the horizontal distance with regard to each field-of-sight line is always identical in both cameras from (0:0) onwards, and is a fixed distance of deviation M (Fig.-3, A/3) at any photographing distance. The vertical movement (perpendicular to the horizontal plane) of the cameras is constant and identical.

The photographs taken by the cameras are received in an input memory P-0 and P-1 (Fig.-1) in accordance with the speed of photographing (and/or other), in the form of digital data translated either by the cameras or by any other equipment (Fig.-1). The alignment will be physical, upon manufacturing installation and/or at any other given moment (preferably by the robot).

2. Said two or more cameras, in a single enclosure or in separate packaging, are similar to video cameras, including CCD cameras and including cameras with integrated means for converting the data received in the pictures to digital data, and including one or more of the following:

a) Adaptation to color photographing at various speeds and at any light, such as IR or visible, and at any lighting conditions, such as poor, by means of light amplification;

b) Enlargement or reduction devices, including telescopic or microscopic means;

c) Optic image (Fig.-3, A/3) including the desired resolution, whether straight, convex, or concave.
Calculation of Distances, Dimensions

1. Based on the datum Wr - the difference between the locations of a corresponding dot in the pictures of the two cameras on the X axis (Wr=Xb-Xa) (Fig.-2) - and based on the fixed datum M - the parallel deviation [the fixed distance (Fig.-3) between the parallel lines at any shooting distance] - the size of the pixel representation can be calculated: Q=M/Wr.
2. The calculation of the distance L between the cameras and any dot is based on the identity of the two cameras [identical shooting angle in the cameras (in the normal situation horizontal ao, vertical po)], on the physical size (the width) of the optic image, the fixed resolution of the optic image X:Y, on Z (enlargement/reduction), on M and Wr, and on the constants K0, K1, K2, K3 of every viewing system.

Basic formulas: a is the angle for a single pixel, U the perimeter of the circle in pixels (optic image), Qm the size of pixel representation in the congruence plane(1), Qd the size of pixel representation for a standard measurement d in form A with Xd pixels in d:

1. a = ao/(X*Z)
2. U = Z*(X*360/ao)
3. Qm = 2M
4. Qd = d/Xd

Distance calculation formulas:

ro - the distance from the circle radius up to the optic image:

5. ro = Z*K1, where K0 = 360/(2*pi*ao) and K1 = 4*K0

r - the radius up to any point:

6. r = U*Q/(2*pi) = (Z/Wr)*(M*X*360/(2*pi*ao)) = (Z/Wr)*K3, where K3 = M*X*360/(2*pi*ao) = M*K2

L - the distance from the optic image up to any point:

7. L = r - ro = (Z/Wr)*K3 - Z*K1 = Z*[(K3/Wr) - K1]

La - the distance up to the identified shape A (indirect calculation):

8. La = (Qd*U)/(2*pi) - ro = (d/Xd)*Z*(X*360/(2*pi*ao)) - Z*K1 = Z*[K2*(d/Xd) - K1], where K2 = X*360/(2*pi*ao) = X*K0

(1) The congruence plane is the distance at which, in the two cameras, the parallel matching dot is situated in the same location on the X axis.

If Wr=0, then Q=2M (the computer will know), and the distance calculation will be according to: L = Z*(K3*M - K1).
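Since the formulas above reduce distance recovery to the constants K0..K3 and the disparity Wr, a small illustrative sketch can be given. This is a hedged Python reconstruction, not the patent's implementation; all names and the sample values (ao = 40 degrees, X = 640 pixels, M = 0.06) are assumptions for illustration only:

    import math

    def viewing_constants(ao, X, M):
        """Constants K0..K3 from the formulas above (ao in degrees,
        X = horizontal resolution, M = fixed parallel deviation)."""
        K0 = 360.0 / (2 * math.pi * ao)
        K1 = 4 * K0
        K2 = X * K0
        K3 = M * K2
        return K0, K1, K2, K3

    def distance_L(Wr, Z, K1, K3):
        """Formula 7: L = Z*(K3/Wr - K1) for a dot with disparity Wr."""
        return Z * (K3 / Wr - K1)

    K0, K1, K2, K3 = viewing_constants(ao=40.0, X=640, M=0.06)
    print(distance_L(Wr=4, Z=1.0, K1=K1, K3=K3))

As in the formulas, a smaller disparity Wr yields a larger computed distance, and the enlargement factor Z scales the result linearly.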
3. The calculation of the size of dot representation allows calculation of size, length, width, height, surface area, etc.

4. The order of dots within any given frame and/or between adjacent frames, etc., and the ratio of change of their location between the X and Y axes, allow detection of the angle, and of geometrical lines and shapes. For example: a straight line is a constant periodicity in the change of X and Y, and their signs do not change.

The angle of the line: the ratio of change determines the (tangent/cotangent) angle of the line; for example, when at every step in the periodicity the dots change by equal fixed steps, the angle is 45°, etc.
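For illustration, the angle of a dot run can be derived from this ratio of change; a hedged Python sketch in which the function name and the dot representation are assumptions, using the patent's convention of up and right being positive:

    import math

    def line_angle(dots):
        """Estimate the angle of a run of dots from the ratio of change
        between X and Y: a constant step ratio means a straight line,
        and equal unit steps give 45 degrees."""
        (x0, y0), (x1, y1) = dots[0], dots[-1]
        return math.degrees(math.atan2(y1 - y0, x1 - x0))

    print(line_angle([(0, 0), (1, 1), (2, 2)]))  # 45.0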
5. The calculation of the movement velocity V is based on F (the shooting frequency) and T [T is the duration of a single frequency (Fig.-2), for which a term of time exists in the computer]; a sketch follows after this list.

6. Wr will also help in reproducing 3-D pictures, and in time.

7. In flow chart 101 (Fig.-2) there is a factor U (with the number 5 that I have chosen registered next to it); this is the logical ratio of change for a change between two adjacent points (in the chart it is 1.25).
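A small sketch of the velocity calculation of section 5 above, where T is the duration of a single frame; the function name, units and sample values are assumptions:

    def movement_velocity(displacement, frames, T):
        """Velocity V from a measured change of location `displacement`
        (e.g. in pixel-representation units Q) observed over `frames`
        shooting cycles, each lasting T seconds."""
        return displacement / (frames * T)

    # e.g. 0.5 units of displacement over 25 frames at T = 0.04 s each
    print(movement_velocity(displacement=0.5, frames=25, T=0.04))  # 0.5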
Sample of a System and Preferred Method Operating According to the Invention

All the samples, video clips, flow charts, etc., are only an example which, although an excellent one, may include errors (as in typing, translation, etc.), inaccuracies, corrections, adjustments to circumstances, etc. Where it is emphasized in the process that every processor performs a certain operation(s), this is an example of the process phases and depends upon the processing speed (processors); in the order of the process phases there may also occur certain modifications.
The Process of Picture Input

1. Any picture (including a color picture) received from any camera, translated into computer language, will enter one of the memories P-0;P-1 (Fig.-1) at its designated place, once to line 0 and once to line 1 (saving of space), and so on. The line will be adapted to the length of X in the cameras' resolution.
2. Each codes table will be given general data, like a header, part of which is received from the robot computer and part from the viewing computer. For example: in the cameras, Z (enlargement/reduction) and C (the angle of inclination of the axes towards the horizon) will be given by the robot (which controls the movement and the cameras). It is possible that all four corner dots of the optical image frame (resolution) in every picture will be matched to the robot movement coordinates and to the cameras; namely, there will be data defining their location in space as a basis for matching and updating of the coordinates further on. Likewise, there will be a datum of the system - the number of the beat t [flow chart].

3. In the flow chart (Fig.-2), the P.C.U-1 (Fig.-1) processor handles the comparison of data (such as pixel a against pixel b (Fig.-2)) received from the cameras, detects from among them new data that can be deduced from the cameras' data, and registers them in a matching table for the codes Q1;Q2;Q3, with each such code being a number (as registered next to it) in a 4-bit byte. The codes are arranged according to their appearance in the Y column and the X row (camera a), and the data for each code are arranged in the matching table in the same order and/or in any other familiar order.

The P.C.U-1 processor handles the P-0;P-1 every time in the free line and at a pace adjusted to the pace of shooting, and it inserts the data, including the codes, into PP-0;PP-1 (Fig.-1), every shooting cycle to another location.
a. In the flow chart (Fig.-2) rectangle 1 "start", rectangle 2 "read Ya, Yb", parallelogram 3 "go back".
b. Circle [101 detects the first identical dot in the data of the two cameras.
d Circle [103 detects on flu X axis flu code Qi flu depth line and data in relation to it: vfut: (Y:Xn) is flu rynurninpout, flu number of dots on, flu X axis (negative:. Xa carnura A, positive: Xb canuxa B, 3-D data), flu rrnnus sign indicates "eeping away', flu plus sign indicates "approaching' [basic temEs in artificial itelligence], Y:Xx flu nuximnum points (due to lack of space in flu chart, flu depth line have not been idertified).
e. Circle [104 illustrates flu correspondence of flu processor with flue carnuras.
f, Circle [105 illustrates flu distribution of flu data to flu various memoies for flurfiur handling.
4. The data of each pace, in whole or in part, such as: Q2:(Wr);p; Qi (and flu shooting pulses) ame given by mnoving the angie of each datumn arnd composition with flu code designated to be tanferred/broadcast, for examnple in TV channels 3-D broadcasting ari saving of tranmission bands, etc. In another place flu transmission will be received, separated, translated and it will be possible to enlarge it several tirms by rrnultiplying, all flu f data on flu X ami by any number, and to return on each columnn (row) Y flu same number of timus, thus flu picture will be projected and/or represerted in any other way as a 3-D picture and in tim.
Likewise, it will be possible to store these data and project/presert them in another timne or place, in any may whatsoever.
Sample of Frame Points Detection Method: 1. The processor(s) P.C.U-2 (Fig-i) handles the detection of data found in one of 3. RA4, the numnones PPOPP1 the data for wich have been comipleted, and its task is to detect in a fixed order flu fi-re dots for every color, depth line arid area, AMENDED SHEET PC17IL 96/00 145 WPANu 0-6FEB 199( based on dou encoding and flu existing encoding data, and to record a loader for tdo fmr data and the fi-mm dots location, flu co after thlu other. Frm previous scanrnpgJnaclirg we know tlu ninimai point on tlu X axis, fle number of points and flu nuximuirn pout. vi~e withi regard to axis Y tluse data are not available, therefore, during detection of the cortour points, wiuruver possible, these data must be indicated.
The possible contour points are fle follom l)YXxyT Yx, XY. 3).YxtXT~cn)f(q); 4).Y,Xr~cn)4f(q); 5).Y,X14.- 6).Yi~cn),X 7).Yincn),Xx; 8).Yr~cn),XT~cn)f(q) J7 2. Assurming that scanning is performed from left to rigit and from flu top downwards, and the direction of cortou identification. is arti-clockwise, the processor passes through flu codes table to colors, depth lines and areas or according to flu order of fleir location or according to type and/or in, any other way vhmtsoever. Every code encourtered fbr the first time is treated and flu contour of identical colors is detected, as well as identical codes and/or adacent codes in flu picture, and a provisional header and namne are registered for it (AAnABnACA), for examnple: for the color AA~Wr,S, for the depth line ABn;S, for flu area ACr;Wr,S. Each header will have a kind of colium in wich all flu adjacert contour dots will be registered, flu one after flu other, P-09 including flu datum for axis X and axis Y up to flu initial point (closing of the contour), with regards to fle depth lines or an area found within flu depth lines, fle point of teriniio (closing of flu contour) can also be the extreme axes of flu picture contour (camera In principle, the registrationt for each dot will be carried out after idertification of flu adjacent dot.
3. WPhenever flu X is mninrnum the nurnber of dots f(q) is copied/registered next to it until the maximum on that axis, with regards to that contour. Wheonever Y is xmaxrnini next to it wvill be registered (simultaneously or later when the Y of that X is inirruim), the rumber of dots up to the niixnin on that same axis, concerning that same fi-die.
4. In order to avoid that when fluther scanning the line and/or wn scanning other 4 lines the same code is not treated twice, whenever there is need to treat codes (the treatm~ent makes use of any identification code), in order to avoid AMENDED SHEET PCT71L 96/00 145 IPEAUS 0 6 FEB 1998 -11 disturbance during treatnent, the code will be changed into the consecutive code (as a loop), accordingly change to): QWID >>Q5 iu-1 >>Q1 l Qp I >>Q6 >>Q14- Q >>Q7 >>Q1I This nethod allows several processors woxk on the same data base in common, each of them perfoming its part, and when it finishes the next one continues, etc., and thus, by treatment according to the size of Wr two processors can process simultaneously, the one from the largest and the other from the smallest.
When scanning for purpose of detection of a contour for a color, the adjacent dot of the same color must be detected in the data base of that code, or the consecutive code. When scanning for purpose of detection of a contour of a depth line or for an area, the adjacent dot of the same code or of the consecutive code must be detected.
6. Whenever the dot is Xn(cn) the f(q) number of dots in the X axis is added to S [S is the total number of dots in that contour and the code changes in accordance with the list of codes and their consecutive codes.
Rules for Contour Detection: 1. According to the previous sections, the codes are scanned and with regards to each code the data are copied into a temporary location DDD. the Xx datum of the code will be the beginning dot of the conrtour, this datum is also the Yx of the contour, the dot will be recorded and will be (no. The datum f of Y will be missing and will be completed later. Afterwards, each point in the f(q) is (apparently) passed through and the dot is registered accordingly [in the first line it will be (no. one after the other up to the minimum point Xn(cn), the data are copied form the DDD and it will be (no. the Y will be marked and there will remain space for its t then the f(q) of the X axis is added to the S [see further In a straight line movement (horizontal, vertical), one of the axes remains constant (static), while the other advances step in constant The detection of an adjacent dot will be carried out on the right of the movement axis (at 900 angle), as follows: the movement axis remains static and the axis that was static advances by step (see further a, c, e, until the dot further AMENDED SHEET !C~fL ,96/00 14 1% 0EB 1998 -12down the stxaiglt line does not belong anyrnore to the sarne cortour. From this point (outside the contour) accordingly (see cortirnuae at b, d, t h).
In movement on a diagonal line [at adjacent points only the following possibilities exist: 1) 135°, 2) 225°, 3) 315°, 4) 45°], the two axes (Y:X) advance each time by a step. For detection of an adjacent dot on the right of the movement direction there are two possibilities, the one (at a 45° angle) and the other (at a 90° angle) [from here on see: k, m], until the point further down the diagonal line no longer belongs to the same contour. From this point (outside the contour) [see further on: l, n].

[There follows a motion/localization table of rules a) through o), giving for each movement direction the neighbouring dot (Y±1, X±1) to be examined and the rule to continue with.] At each question mark, if the answer is positive and if needed, the required datum must be registered (in relation to that dot and/or the previous one).
3. Durn scanning, wil~e detecting flu contour for a depth line or area, or in order not to get out from tlu depth linu, thu code should be asked (in pnnviple, in riovenort on flu Xr~cn) dot on the scanning code arid in ImovelTelt on the Xx dots ontflu consecutive code) arnd sornus both one after flu other.
4. The contour of an area is found within arid adjacent to flu contour of a depth line, or between flu cortours of tVM depth linus arnd adjacent to thorn, and after detection of flu depth linu it is possible to detect flu contour dots table (internal and external).
During contour detection, when on tlu consecutive code, one should ask wbothur flu next dot is flu initial pout, in depth lines and areas found v&thfl depth lins, even if flu dot is not an end dot in flu picture data (it coruld be an end dot of a conto-ur).
6. The rules indicated lure are an examnple for contour detection and they do not include full details, somne of tlum ar incomplete arid therefore must be completed, arid if the color fiam contains in it additional frames of other colors, additional rules mst be added.
7. During this anid/or other processes, data and basic terms of artificial intelligence are detected for the deciphering and/or robot systerms that will help in decoding arid connection. For example, when one of flu axes X or Y in any flame is riot mrrked with flu rinumum arnd xinurr, it m~eans tlat we haive a straigl line (including the mninirnnrn arid rnaumm. end points). Morover, the basic terms will be defined based on the data as follows: 1) X is riot defined for those dots for wlich Yx is, so there is an "upper horizontal straight liru"; 2) X is riot defined for those dots for which Yn is, therefore there is a "lower horizortal straight line". 3) Y is not defined for those dots for whch Xx is, therefore there is a rigit vertical sinigt line". 4) Y is not defined for those dots for which XT~cn) is, therefore there is a "left vertical straight line".
8. At flu end of each treatmTent arid/or in the course thereof; the codes table, the data, the contours, the definitions argdmh basic terins for artificial intelligence .))sewjAMENDED SHEET PCT/iL 9 b /0O 14 IPE/LS 0OGFEB 1998 14 will be tirrnitted to the M-OM-1M-2 nmmoy (Fig-i) a4pporiate for each pulse, for furfiur treatrnent by P. processors.- Detect NIvemetnMbdton Geometrical Unes. Geometrical Shapes. Basic Terms in Artificial Intellfigence. etc.
The P. C.U-3 processor handles the novertlrrtion in a comparative way between two (consecutive paces) out of the three following memrrxies in regards of which fle M-OM-1lM-2 data have been completed, andl with matching of coordixtes.
1 One of the possibilities is: coordinates entered into thu fa-st. picture will constitute the basis for coordinates further on.
2. Tle, second possibility is: that tlu robot will carry out calculatioxm for fle coordirmtes according to its rmoverrmit and flu deviation of the cameras and will temporarily update the caruras' shooting data.
3. The computer vision device can also be taught, for examrple, flhat in flu first picture, fle horizon line's first poixt on a straigh east plaru and/or any point is the 0 point, up and ri&i being positive and down and left being negative, and each and every pixel at nmxxin enlargemnt is a coordinate line, and so it will update the coordixmtes from flu data of a previous picture to the data of a comparative picture.
4. Matching of the coordinates and their registration in the header, may be card out from flu coordinate data of a previous picture to the data of a picture on which one intenids to perform imvenuintlrrtion detection, by mautching of a singe (small) moxbile identified cortour which according to flu data o h robot and thu carneras imovemnt is found in flu data of both paces.
Based on fle data gathered up until this phase, a conmaative trealtent is perfoumd. between two memories regarding each cortour, depth line and color separately, between the change in the length of Y and X and flu corresponding values in the second previous memory, for imovemnt/imtion detection.
6. The changes in fle data may be detected nornally, according to their order of appearance from left to righ arnd from high to low and/or according to the order of their vicinity to the camera, namely, from the largest Wr up urtil the end of flu contour in the memory (the smallest Wr), and they may be detected by two or mo~re processors, should it be required- AMENDED SHEET 96/00 14 S 0 FEB 1998 7. The P.C.U-4 processor gathers cortours according to various criteria, such as rmodification of their distance from the environnmert, or a unifon movement direction, belonging to a defined detected geomretrical shape, etc, and according to the order of their vicinity to the camera and which have rot yet been idertified [they have ro mre in the spatial iremory MM-O (Fig)-11 nor prior identification (location of the mnme in the MM-O will be made according to the coordinates of the two cortours), and gives them definitions, if possible, such as: a) Adjacent cortours (areas, colors arnd depth lines) in which there has been no movermert whatsoever, the overall contour will be idertified as an additiomal contour (including headers), and it will be given a temporary name BAn. The significance with regards to artificial irtelligence is "imnimte".
b) Adjacert contours that had uniform movemert in size and direction their contour will be detected and registered as an additioral contour (including the header) and will be given a temporary mrme BBn The significance with regards to artificial intelligence is "rmobile".
c) The contour ofBBn inside which there was iovernert (or inside which any changes occurred) different from the usual rrovenmert in size and/or in direction will be given a temnpormy mrme BCn The significance with regards to artificial intelligence is "alive".
d) Adjacent cartours with (significart) Wr difference between them and their entire environmnert will be given the temporary mrme BDn The sigificance with regards to artificial itelligence is "floatin'.
If there is movemet within the cortour (and its dirernsion within the given range) it will be given the temporary mrrme BEn and the significance is "winged bird'.
e) In the evert that this processor has sore tirne left and/or any other additiomal processor(s) will handle without disturbance the sane rmerrory M-OM-1M-3, and it will detect definitiors in the contours' dots and between the cortours that form a complete ftame (shape), straiglt lines, concave lirnes, arched (convex) lines, diagoml lines (including angle), and any olther identifiable geometrical line, it will also detect geonmetrical shapes such as: rectangle, square, triangle, circle, ellipse, etc. and it will establish for them basic terms of artificial intelligence.
AMENDED SHEET PCT/IL 96/00 145 -18-~00 IE USC FEBi1991 f) Liius, contours, etc. for wich no geonetrical definitions: arn krxwi or the detection of vwiich is lergthy, will be nitched agirit a register of basic arnd known shapes/cortours MM-4 (Fig-i) adapted to any computer vision in. accordanc~e with its ftumtion and aim (in prncxiple, notl exceeding 256), by ma~tching tlu size of the contours to ftu basic coxtours saved in order to :fix a definition for tle above simpes/cortoius.
8. Duig and/or upon termination of flu matching arnd detection of flu movemnt, te various contours etc., flu deciphering system and/or thu robot will know marny acdditional basic temEs in artificial intelligence, such as fle following ores: a The length Y has reraiud comstart and has imoved (changed location) uads, flu sigriificanme ""ascerdeed".
b. The length Y has remained constant and has moved (changed location) downwards; the significance: "descended".
c. The length Y became smaller; the significance: "got shorter".
d. The length X became smaller; the significance: "got slimmer".
e. The length Y became larger; the significance: "got higher".
f. X and Y became larger; the significance: "expanded".
g. X and Y became smaller; the significance: "shrunk".
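Using the patent's convention that up and right are positive (section 3 of this phase), the terms above can be sketched as a simple Python classification; the function name, argument names and the ordering of the checks are assumptions:

    def basic_term(d_y_len, d_x_len, moved_y=0):
        """Translate detected contour changes into basic terms of
        artificial intelligence (illustrative mapping only)."""
        if d_y_len == 0 and d_x_len == 0 and moved_y > 0:
            return "ascended"
        if d_y_len == 0 and d_x_len == 0 and moved_y < 0:
            return "descended"
        if d_y_len > 0 and d_x_len > 0:
            return "expanded"
        if d_y_len < 0 and d_x_len < 0:
            return "shrunk"
        if d_y_len < 0:
            return "got shorter"
        if d_x_len < 0:
            return "got slimmer"
        if d_y_len > 0:
            return "got higher"
        return "unchanged"

    print(basic_term(0, 0, moved_y=3))  # "ascended"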
9. During treatment and/or at any other time, many basic terms in artificial intelligence may be detected/derived as may be needed, which may help in decoding the shapes and/or the phenomena and/or the terms and/or communication, etc., in many ways, such as human language.
10. At the end of each treatment, and/or during it, the table of codes, data, contours, definitions and the basic terms of artificial intelligence will be sent to the memory MM-0 (spatial memory), according to their belonging and their location according to coordinates, for further treatment and identification by additional processors P.C.U-(5;6) (Fig.-1).
11. The spatial memory MM-0 will cover a certain space, for example: on the horizontal line 180° and on the vertical line 120°, and/or a full spatial range, all as desired.
Shape Identification Key; Identification

The data, contours, characteristics, phenomena, definitions, conclusions, etc.
gathered during matching and detection are arranged in the various tables according to their belonging and location in the spatial memory MM-0, in a special order adapted to the same order in which the data of the stored shapes are arranged, and they form key factors for matching and identification (data against data) and/or according to the "truth" table, etc.
The unidentified areas (contours) are very few at each shooting rate and are usually at the margin of the picture. The data of unidentified shapes, arranged in a special order, constitute key factors which allow - by all or part of the means, as well as by frequency, the "truth" table, etc., like a word in a dictionary, where the classification is the language, the order of the key components is the order of the letters, and the word as a whole is the key - to detect, match and identify a picture/record by comparing it to other pictures/records, and/or data against the data of known and stored shapes arranged in the same special order, adjusted according to the key components, according to 'certain', 'between and between', 'reasonable' and 'possible'.

1. The identification is performed between the data of the viewed shapes register MM-0 and the data register of the constant, known, stored shapes MM-1 (Fig.-1). On the dimensions data of MM-0, the matching of size, angle and shooting distance will be performed according to the distance and/or enlargement/reduction data and according to the standards of size and shooting/recording angle of the dimensions data of the stored shape.
2. The data for the shapes stored in the MM-1 may be obtained from deciphering the data of each and every shape by presenting the shapes to the computer vision according to size and distance adjusted to the matching key, and/or they may be defined by means of logical analysis, under the same conditions.
The shapes will be defined and arranged according to a certain order (dimension), such as color, geometrical shape, etc., that conforms to the deciphering key, and they will be classified as 'certain', 'between and between', 'reasonable', and 'possible'.
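As a hedged Python sketch of the dictionary-like key matching described above - the scoring and the thresholds for 'certain', 'between and between', 'reasonable' and 'possible' are invented for illustration, not taken from the patent:

    def identify(shape_key, stored):
        """Match the ordered key components of an unidentified shape
        against stored shapes arranged in the same order, and grade
        the best match by how many components agree."""
        best_name, best_score = None, 0.0
        for name, key in stored.items():
            hits = sum(1 for a, b in zip(shape_key, key) if a == b)
            score = hits / max(len(shape_key), len(key))
            if score > best_score:
                best_name, best_score = name, score
        if best_score == 1.0:
            return best_name, "certain"
        if best_score >= 0.85:
            return best_name, "between and between"
        if best_score >= 0.66:
            return best_name, "reasonable"
        if best_score >= 0.4:
            return best_name, "possible"
        return None, "unidentified"

    stored = {"screwdriver": ("red", "elongated", "thin"),
              "finger": ("pink", "elongated", "round")}
    print(identify(("red", "elongated", "round"), stored))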
3. There will be a movemenrt memory MM-2 that will store, for example, the data of all the changes of "alive" shapes only and/or up to a certain distance AMENDED SHEET PCrAiL 96/00 14 1PEA/US 36GFEB81998 only as desired, every pulse, every tenth pulse, every second, etc., for a certain time defined in seconds, hurs, etc., for purpose of nxwerent/imotion speed calculation.
4. it is possible that in this mnemory or in. auxoflur, all the changes of movemnt/moKtion at each pulse andl pulse vall be stored, and thuis it will be possible to obtain storage of 3-D data andtor miultimredia whi~ich it will be possible to display and/or project, etc., in aroflur time and place, after translation and decodirg arnd even enlargemnt.
5. It is also possible to have a simulated memory MM-3 (Fig.-1) which will comprise the contours of the space in which the robot 7 (Fig.-1) will function. Inside the space contours, the important shapes and the space contours will be indicated in the locations where they are, in the form of a contour with a few identifying details, such as a name (with the addition of one or two data and/or one or two words). Every time that the robot is in some place it will know the location, and in any case of change of location it will know that there has been a change of location (which change, and where to?) in the contours, in the location of the shapes, etc., and will update the simulated space in accordance with the new reality and with the same method of registration. There will also be a PM memory (Fig.-1) for receiving external data.
Summary

1. The computer vision system shall be appropriately protected against blinding light and flashing lights of any kind whatsoever, including laser, and, anywhere it may be situated, against any other possible physical injury, as far as possible.
2. The computer vision system will be compatible with, and will operate in accordance with, the user's input needs.
3. The computer vision system will comprise software programs 6 (Fig.-1), electronic and general circuits by which the system will function, and a software program which will adjust the size and photographing/filming angle of the viewed shape in accordance with the distance and/or enlargement/reduction data and in accordance with the standards of size and filming/recording of the stored shape's dimensions data, as well as any other software program for transferring the required data outwards, such as for broadcasting, 3-D presentation and multimedia, and any other software program that might be necessary.
Commercial Implementation 1. Computer vision may be used: a. As a viewer (watches, analyzes, decodes, reports, transmits, etc.).
b. As a viewer that collects data and preserves them in any data base or register, in any way (fully, partially, by anry form of classification or sorting, etc.).
2. Due to thel multiplicity of possibilities of use of computer vision, several standard computer vision systems may be used, where each standard system is adapted so as to provide certain services and perform certain tasks, and thus it will be available "on the shelter", while for special requirements, a computer vision system adapted to specific needs will be designed.
3. Each computer vision system will be assigned standard and specific abilities.
4. It is possible to adapt the computer vision system to th specific needs and requirements of any user who is a designer or a constructor during the stage of design, and to integrate it as part of the user, or as a separate unit that serves the user.
AMENDED SHEET

Claims (9)

1. A system for stereo computer vision including:

a. A pair of identical, aligned, coordinated cameras at a fixed distance M, with coordinated shooting angle and enlargement/reduction, creating for the photographing cameras optical parallel fields of sight, coordinated, aligned, with an identical field-of-sight line in both cameras from (0:0) and onwards, at a fixed distance M at any photographing distance, the pictures of said cameras being received by means of input-memory devices in the computer, translated to computer language;

b. Based on step a, the pixels of the pictures that were received may be treated with various methods, such as: 1. the pixels of the pictures are matched by means of the devices, the one against the other, and data are collected/detected for a header, for encoding, including pulse/pace, enlargement/reduction, color, the difference of dots Wr between the two cameras where the same points are equal, location, the number of adjacent horizontal dots of the same color and/or distance, or depth lines for 3-D, etc.; 2. the pixels of one of the cameras are matched by the devices, through matching of coordinates, in order, according to the speed of shooting and/or other, against the pixels of a previous picture stored in the spatial memory register; when any new movement and/or shape creates lack of matching, and then updating, the pixels of both pictures are matched the one against the other, the data of identical pixels are recorded in the spatial memory for each pixel - color, Wr difference, etc. - the non-matching pixels are recorded as depth-line dots in the space picture, and the pixels of the other picture are recorded separately, in connection, etc.;

c. Based on step b, the devices and the datum Wr allow calculation of sizes, dimensions, distances, etc., and they also help in distributing the work between processors and in reproducing 3-D pictures, and in time, etc.;

d. Based on the previous steps, contours for colors, depth lines and areas are detected, as well as contours for groups of areas and depth lines that are adjacent and/or have a uniform movement, which are close, etc., and they are added as data tables to the existing data; the size of the vertical lines in each frame is detected, geometrical lines and/or geometrical shapes are detected, and through the matching of the size and the angle of the contour with the contours of stored basic shapes, etc., throughout the detection/comparison, definitions are derived, such as: straight line, circle, triangle, concave, inanimate, floating, alive, etc., for artificial intelligence, while external auxiliary terms may be attained, such as measurements, audio, taste, smell, palpation, etc., that will help in identification and/or communication with the environment;
e. Based on the previous steps, the data of consecutive pictures are compared by means of the devices, by matching coordinates, and/or coordinates are matched to consecutive data by matching a contour to an identified static shape found in both paces; then the shooting deviation angle, which allows the robot balancing, is detected; movement/motion of the horizontal and vertical lines in the various contours is detected; the extent of change, direction, angle, etc., is calculated; the data of the changes, stored in paces at determined intervals, allow, through matching of coordinates, calculation of the speed of movement/motion; part of the detected data may be stored in various ways; pace changes may be stored one after the other, and may be stored, broadcast, transmitted, etc., in various ways, as 3-D data for projection/presentation in any way, at any time and any place, after translation/deciphering and even enlargement;

f. Based on the previous steps, part or all of the data collected by means of the devices may be stored in the spatial memory, each datum according to its belonging and by matching of coordinates; the data of unidentified shapes, arranged in a particular order, form key components and allow, by means of all or part of the devices and by incidence, the "truth" table, etc., and according to a recorded picture/record/datum size, angle and distance, performance of matching/detection/identification of a picture/record/datum as against pictures/records/data of the known and stored shapes arranged in the same special order, adjusted to the key components, according to 'certain', 'between and between', 'reasonable', and 'possible';

g. During detection, the unidentified contours and their data will be recorded under a temporary name and the identified shapes will be recorded under their own name; artificial memory may be added for the environment in which the robot operates, in the form of contour lines with indication of the names of the contours, with the essential shapes and a few identifying data, which will be occasionally updated by the robot and/or the vision system.
2. The computer vision system claimed in claim 1, wherein said pair of cameras and/or more cameras can be packaged in a single casing or separately, are similar to video cameras, including CCD cameras and including cameras in which there are combined means for converting data received from pictures taken by the cameras into digital data, and include one or more of the following:

a) Adaptation for color photographing, at various speeds and at any light, such as IR or visible, and at any lighting conditions, such as meager, by means of light amplification;

b) Enlargement or reduction devices, including telescopic or microscopic means;

c) Optic image, including the desired resolution, whether straight, convex, or concave.
3. A system of computer vision as in claims 1 and 2, in which the main devices for collecting and storing detected, compared and/or computed, defined, derived, viewed, etc., information on data that can be defined as a datum (such as: codes table, codes of size, color, contour(s), geometrical lines, geometrical shapes, definitions and/or internal term(s) and/or terms in artificial intelligence, etc.), which allow access to the data in order to identify stored shapes, as the matter may be, with regard to dimensions, according to matching of size and angle to the size and angle of the stored data, and by matching of coordinates and classification according to the key components in one or more hierarchic orders that have been established in advance, as 'certain', 'between and between', 'reasonable' and 'possible', whether internal or external, include one or more of the following:

a. Input memory for receiving the pictures of the cameras/the viewed pictures;

b. Spatial memory for one or two cameras and/or data for 3-D that will be integrated by matching of coordinates - with the method of picture preservation, all the pictures obtained from fixed enlargement/reduction recording and other, in another memory, and in addition to other data, separately and in connection, for a second picture; with the method of encoding, which includes the data of both cameras, the data and the tables will be joined together in relation to a certain section of a defined space, updated and adjusted at each and every pulse; the pictures and data will be stored expressed in horizontal and vertical circumferential coordinates, and will be updated all the time;

c. A register in the form of table(s) for stored shapes that are known, which will include pictures, maps, signs (including data and tables related to them), data, codes, etc. - adjacent, above, in connection, etc. - for any shape that can be defined and/or given a name and/or identified separately (such as a screwdriver, a finger, a wall, a background, a sign, a phenomenon, and anything else), registered in a particular, known order adjusted to the key elements; data which are dimensions will be adjusted to the photographing/recording standard depending upon the size of the shape, its dimensions and the photographing/shooting angle;

d. A register for the detection of movement at various paces; the changes in pace are preserved and adjusted at defined and known time interval(s), and based upon the extent of change, the number of paces and the shooting frequency, the size and speed of movement can be calculated;

e. A register for storing all the changes in the codes, data and horizontal and vertical dimensions, pace after pace, which will allow, at another time and in another place, presentation/projection of the pictures again, after translation/deciphering and even enlargement;

f. A register for contours of basic shapes which cannot be defined and detected within a reasonable period of time, which will allow detection of geometrical shapes by comparing the detected contour against the basic contours, and thus definitions will be derived as a supplement to the key data;

g. Memories and registers for collection and handling of table(s), encoding of their data, tables for contours and their data, table(s) for terms, trigonometric and other definitions, basic terms of artificial intelligence, all as may be required, etc.;
Registers for auxiliary data received from devices such as the robot, audio devices, sensing devices, etc.; i. Register for software programs with computing, detection, cormpaing, handling and identifying capacities, for perfomung any operation required for the functioning of the computer vision device, in order for the device to fulfill its function. AMENDED SHEET POT/I 96 00 14 IPAUS 0 6 FEB 1998
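Claim 3's register architecture — input memory, spatial memory, keyed tables of known shapes, and the four-tier classification of matches — can be pictured with a small data-structure sketch. The following Python is a hypothetical illustration only: the class names, fields and score thresholds are our assumptions, since the claim names the tiers ('certain', 'between and between', 'reasonable', 'possible') but specifies no thresholds. Ordering the table by key components lets a lookup be narrowed before any expensive contour comparison.

```python
from dataclasses import dataclass, field

TIERS = ("certain", "between and between", "reasonable", "possible")

@dataclass
class StoredShape:
    name: str       # e.g. "screwdriver", "wall"
    contour: list   # stored contour points (x, y)
    size: float     # recorded size at the photographing standard
    angle: float    # photographing/shooting angle in degrees

@dataclass
class ShapeRegister:
    """Table of known shapes, ordered by key components (claim 3c)."""
    shapes: dict = field(default_factory=dict)

    def add(self, shape: StoredShape) -> None:
        self.shapes[shape.name] = shape

    def classify(self, score: float) -> str:
        """Bucket a match score into the claim's four tiers.

        Threshold values are assumptions; the claim only names the tiers.
        """
        if score > 0.95:
            return "certain"
        if score > 0.80:
            return "between and between"
        if score > 0.60:
            return "reasonable"
        return "possible"
```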
4. A method for stereo computer vision including:
a. A pair of identical, aligned, coordinated cameras at a fixed distance M, with coordinated shooting angle and enlargement/reduction, creating for the photographing cameras optical parallel fields of sight, coordinated, aligned, with an identical field-of-sight line in both cameras from and up to a fixed distance M at any photographing distance; the pictures of said cameras are received by means of input-memory devices in the computer and translated to computer language;
b. Based on step a, the pixels of the pictures that were received may be treated with various methods, such as: 1. the pixels of the pictures are matched by means of the devices one against the other, and data are collected/detected for a header, for encoding, including pulse/pace, enlargement/reduction, color, the difference of dots between the two cameras where the same points are equal, location, the number of adjacent horizontal dots of the same color, and/or distance or depth lines for 3-D, etc.; 2. the pixels of one of the cameras are matched by the devices through matching of coordinates, in order according to the speed of shooting and/or other, against the pixels of a previous picture stored in the spatial memory register; when any new movement and/or shape creates a lack of matching, and upon updating, the pixels of both pictures are matched one against the other, the data of identical pixels are recorded in the spatial memory for each color pixel, IR difference, etc., the non-matching pixels are recorded as depth-line dots in the space picture, and the pixels of the other picture are recorded separately, in connection, etc.;
c. Based on step b, the devices and the data allow calculation of sizes, dimensions, distances, etc., and also help in distributing the work between processors and in reproducing 3-D pictures in time, etc.;
d. Based on the previous steps, contours for colors, depth lines and areas are detected, as well as contours for groups of areas and depth lines that are adjacent and/or have a uniform movement, that are close, etc., and these are added as data tables to the existing data; the size of vertical lines in each frame is detected, geometrical lines and/or geometrical shapes are detected, and through matching of the size and the angle of the contour with the contours of stored basic shapes, etc., throughout the detection/comparison, definitions are derived, such as: straight line, circle, triangle, concave, inanimate, floating, alive, etc., for artificial intelligence, while external auxiliary terms may be attained, such as: measurements, audio, taste, smell, palpation, etc., that will help in identification and/or communication with the environment;
e. Based on the previous steps, the data of consecutive pictures are compared by means of the devices, by matching coordinates, and/or coordinates are matched to consecutive data by matching a contour to an identified static shape found in both paces; the shooting deviation angle, which allows the robot's balancing, is then detected; movement/motion of the horizontal and vertical lines in the various contours is detected; the extent of change, direction, angle, etc., is calculated; the data of the changes, stored in paces at determined intervals, allow, through matching of coordinates, calculation of the speed of movement/motion; part of the detected data may be stored in various ways; pace changes may be stored one after the other, and may be stored, broadcast, transmitted, etc., in various ways, as 3-D data for projection/presentation in any way, at any time and any place, after translation/deciphering and even enlargement;
f. Based on the previous steps, part or all of the data collected by means of the devices may be stored in the spatial memory, each datum according to its belonging and by matching of coordinates; the data of unidentified shapes, arranged in a particular order, form key components and allow, by means of all or part of the devices and by incidence, "truth" table, etc., and according to a recorded picture/record/datum size, angle and distance, performance of matching/detection/identification of a picture/record/datum against pictures/records/data of the known and stored shapes arranged in the same special order, adjusted to the key components, according to 'certain', 'between and between', 'reasonable' and 'possible';
g. During detection, the unidentified contours and their data will be recorded under a temporary name and the identified shapes will be recorded under their own name; artificial memory may be added to the environment in which the robot operates, in the form of contour lines with an indication of the names of the contours, with the essential shapes and a few identifying data, which will be occasionally updated by the robot and/or the vision system.

5. A method for computer vision as in claim 4, wherein said pair and/or more cameras can be packaged in a single casing or separately, are similar to video cameras, including CCD cameras and including cameras in which there are combined means for converting data received from pictures taken by the cameras into digital data, and include one or more of the following: a) Adaptation for color photographing, at various speeds, at any light such as IR or visible, and in any lighting conditions, including meager light, by means of light amplification; b) Enlargement or reduction devices, including telescopic or microscopic means; c) Optical means providing the desired resolution, whether straight, convex or concave.
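Claim 4(a)-(b) sets up two parallel, aligned cameras at a fixed baseline M and matches pixels between the two frames; with that geometry, the distance to a point follows from its horizontal pixel offset (disparity) as depth = focal_length × M / disparity. The sketch below shows the idea with a naive sum-of-absolute-differences block matcher over rectified grayscale frames. The function names and the SAD matcher are our illustration of the underlying principle, not the encoding scheme the claim describes.

```python
import numpy as np

def disparity_map(left, right, block=5, max_disp=32):
    """Naive SAD block matching along epipolar rows of a rectified pair."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1].astype(np.int32)
            best, best_d = None, 0
            for d in range(max_disp):
                cand = right[y-half:y+half+1, x-d-half:x-d+half+1].astype(np.int32)
                sad = np.abs(patch - cand).sum()  # sum of absolute differences
                if best is None or sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp

def depth_from_disparity(disp, focal_px, baseline_m):
    """Depth = f * M / d for the parallel optical axes of claim 4(a)."""
    with np.errstate(divide="ignore"):
        return np.where(disp > 0, focal_px * baseline_m / disp, np.inf)
```

Non-matching pixels (occlusions, new movement) are exactly where this matcher returns unreliable scores, which corresponds to the claim's recording of non-matching pixels as separate depth-line dots.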
6. A method for computer vision as in claims 4 and 5, in which the main devices for collecting and storing detected, compared and/or computed, defined, derived, viewed, etc., information on data that can be defined as a datum (such as: codes table, codes of size, color, contour(s), geometrical lines, geometrical shapes, definitions and/or normal term(s) and/or terms in artificial intelligence, etc.), which allow access to the data in order to identify stored shapes, as the matter may be, with regard to dimensions according to matching of size and angle to the size and angle of the stored data, and by matching of coordinates and classification according to the key components in one or more hierarchic orders established in advance, as 'certain', 'between and between', 'reasonable' and 'possible', whether internal or external, include one of the following:
a. Input memory for receiving the pictures of the cameras/the viewed pictures;
b. Spatial memory for one or two cameras and/or data for 3-D that will be integrated by matching of coordinates: with the method of picture preservation, all the pictures obtained from fixed enlargement/reduction recording and other are stored in another memory, in addition to other data, separately and in connection, for a second picture; with the method of encoding, which includes the data of both cameras, the data and the tables will be joined together in relation to a certain section of a defined space, updated and adjusted at each and every pulse; the pictures and data will be stored expressed in horizontal and vertical circumferential coordinates, and will be updated all the time;
c. A register in the form of table(s) for stored shapes that are known, which will include pictures, maps, signs (including data and tables related to them), data, codes, etc., adjacent, above, in connection, etc., to any shape that can be defined and/or given a name and/or identified separately (such as a screwdriver, a finger, a wall, a background, a sign, a phenomenon, and anything else), registered in a particular, known order adjusted to key elements; data which are dimensions will be adjusted to the photographing/recording standard depending upon the size of the shape, its dimensions and the photographing/shooting angle;
d. A register for the detection of movement at various paces; the changes in pace are preserved and adjusted at defined and known time interval(s), and based upon the extent of change, the number of paces and the shooting frequency, the size and speed of movement can be calculated;
e. A register for storing all the changes in the codes, data and horizontal and vertical dimensions, one pace after the other, which will allow, at another time and in another place, to present/project the pictures again, after translation/deciphering and even enlargement;
f. A register for contours of basic shapes which cannot be defined and detected within a reasonable period of time, which will allow detection of geometrical shapes by comparing the contour that has been detected against the basic contours, so that definitions are derived as a supplement to the key data;
g. Memories and registers for collection and handling of table(s), encoding of their data, tables for contours and their data, table(s) for terms, trigonometric and other definitions, basic terms of artificial intelligence, all as may be required, etc.;
h. Registers for auxiliary data received from devices such as the robot, audio devices, sensing devices, etc.;
i. A register for software programs with computing, detection, comparing, handling and identifying capacities, for performing any operation required for the functioning of the computer vision device, in order for the device to fulfill its function.
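Register (d) of claim 6 stores pace-to-pace changes at known time intervals so that the size and speed of a movement can be calculated from the extent of change, the number of paces and the shooting frequency. Below is a minimal sketch of that calculation, assuming a contour centroid tracked across two frames and a pixel-to-metre scale recovered from the depth data; the function and parameter names are hypothetical.

```python
import numpy as np

def estimate_speed(prev_xy, curr_xy, dt_s, metres_per_px):
    """Speed of a tracked contour between two paces (cf. register d).

    prev_xy / curr_xy: (x, y) centroid of the same contour in two
    consecutive frames; dt_s: pulse interval in seconds; metres_per_px:
    scale assumed recovered from the depth map at that contour.
    """
    dx = (curr_xy[0] - prev_xy[0]) * metres_per_px
    dy = (curr_xy[1] - prev_xy[1]) * metres_per_px
    displacement = float(np.hypot(dx, dy))  # straight-line distance moved
    return displacement / dt_s              # metres per second

# Example: a contour moved 12 px in 40 ms at 5 mm/px -> 1.5 m/s
speed = estimate_speed((100, 80), (112, 80), dt_s=0.04, metres_per_px=0.005)
```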
7. A computer vision system and method as in any of claims 1-6, wherein: a computer, computer components, components, electronic circuits, a processor, a computerized data-processing system and the like, one or more of them, form one or more of the said means or any combination thereof; the said means, known software programs, new and compatible, with an ability to perform whatever is necessary; said means for converting data obtained from photographed pictures into digital data; the said means for matching, comparison, detection and definition of color, code, characterization, identity, of regular type and/or by artificial intelligence, etc., with regard to a viewed point and/or any adjacent points and/or contours generated in the cameras' pictures and/or stored in any register whatsoever; said means for preparation and detection of tables for codes, data, for contours, terms, features, definitions, calculations, basic terms in artificial intelligence, etc.; said means for calculation, comparison and adjustment of distance, dimensions, coordinates, angles, speed, etc.; said means for movement detection at time intervals and of the type of movement/motion; said means for handling of frequent and/or "truth" table(s); said means for data collection for key composition; said means for comparison and matching of registers according to a constant, defined, known order and/or according to a key; said means for storing various types of information; said means for receiving, transmitting and drawing of data; the said means include input and data protection; said means, the system, the method and the like.
8. A computer vision system and method as in any of claims 1-7, in which the input and stored information that has been collected before, during and after identification, including calculations, data, features, regular definitions and conclusions and/or basic terms for artificial intelligence which are well known and familiar, that have been collected such as with regard to areas and shapes, is immediately or after a short while transmitted forward in the form of data, information, speech, picture, etc., in a regular manner and/or by stereo in the form of 3-D and/or multimedia, and which the system preserves and/or provides and/or which are accessible upon request and/or automatically to the user, such as a robot, a device, or a blind person, by means of adequate interfacing means.
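Claim 8 leaves the delivery channel open: data, speech or pictures, pushed to a robot, a device or a blind user through "adequate interfacing means". One way to picture such an open-ended interface is a sink protocol with interchangeable implementations; the classes below (VisionOutput, SpeechOutput, RobotOutput) are hypothetical names for illustration, not part of the claimed system.

```python
from typing import Protocol

class VisionOutput(Protocol):
    """Hypothetical delivery channel for claim 8's identified information."""
    def deliver(self, payload: dict) -> None: ...

class SpeechOutput:
    """Announces identified shapes aloud, e.g. for a blind user."""
    def deliver(self, payload: dict) -> None:
        print(f"Ahead: {payload['name']} at {payload['distance_m']:.1f} m")

class RobotOutput:
    """Hands 3-D coordinates of identified shapes to a robot controller."""
    def deliver(self, payload: dict) -> None:
        _ = (payload["name"], payload["xyz"])  # a real controller would act on these

def broadcast(sinks, payload):
    """Push the same identification result to every attached interface."""
    for sink in sinks:
        sink.deliver(payload)

broadcast([SpeechOutput(), RobotOutput()],
          {"name": "wall", "distance_m": 2.0, "xyz": (0.0, 0.0, 2.0)})
```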
9. A computer vision system and method essentially including any of the innovations herein or any combinations thereof as described, referred to, explained, illustrated, shown or implied, in detail and in the above claims, or in the figures attached hereto.
AU73316/96A 1995-11-14 1996-11-12 Computer stereo vision system and method Ceased AU738534B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
IL115971 1995-11-14
IL11597195A IL115971A (en) 1995-11-14 1995-11-14 Computer stereo vision system and method
PCT/IL1996/000145 WO1997018523A2 (en) 1995-11-14 1996-11-12 Computer stereo vision system and method

Publications (2)

Publication Number Publication Date
AU7331696A AU7331696A (en) 1997-06-05
AU738534B2 true AU738534B2 (en) 2001-09-20

Family

ID=11068178

Family Applications (1)

Application Number Title Priority Date Filing Date
AU73316/96A Ceased AU738534B2 (en) 1995-11-14 1996-11-12 Computer stereo vision system and method

Country Status (8)

Country Link
EP (1) EP0861415A4 (en)
JP (1) JP2000500236A (en)
KR (1) KR19990067273A (en)
CN (1) CN1202239A (en)
AU (1) AU738534B2 (en)
BR (1) BR9611710A (en)
IL (1) IL115971A (en)
WO (1) WO1997018523A2 (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE29918341U1 (en) * 1999-10-18 2001-03-01 Tassakos Charalambos Device for determining the position of measuring points of a measuring object relative to a reference system
KR100374408B1 (en) * 2000-04-24 2003-03-04 (주) 케이앤아이테크놀로지 3D Scanner and 3D Image Apparatus using thereof
ATE412955T1 (en) * 2000-05-23 2008-11-15 Munroe Chirnomas METHOD AND DEVICE FOR POSITIONING AN ARTICLE HANDLING DEVICE
CN1292941C (en) * 2004-05-24 2007-01-03 刘新颜 Rear-view device of automobile
CN100447820C (en) * 2005-08-04 2008-12-31 浙江大学 Bus passenger traffic statistical method based on stereoscopic vision and system therefor
TWI327536B (en) 2007-05-16 2010-07-21 Univ Nat Defense Device and method for detecting obstacle by stereo computer vision
WO2013067513A1 (en) 2011-11-04 2013-05-10 Massachusetts Eye & Ear Infirmary Contextual image stabilization
CN102592121B (en) * 2011-12-28 2013-12-04 方正国际软件有限公司 Method and system for judging leakage recognition based on OCR (Optical Character Recognition)
CN102799183B (en) * 2012-08-21 2015-03-25 上海港吉电气有限公司 Mobile machinery vision anti-collision protection system for bulk yard and anti-collision method
CN103679742B (en) * 2012-09-06 2016-08-03 株式会社理光 Method for tracing object and device
CN102937811A (en) * 2012-10-22 2013-02-20 西北工业大学 Monocular vision and binocular vision switching device for small robot
US20180099846A1 (en) 2015-03-06 2018-04-12 Wal-Mart Stores, Inc. Method and apparatus for transporting a plurality of stacked motorized transport units
US9757002B2 (en) 2015-03-06 2017-09-12 Wal-Mart Stores, Inc. Shopping facility assistance systems, devices and methods that employ voice input
US12084824B2 (en) 2015-03-06 2024-09-10 Walmart Apollo, Llc Shopping facility assistance systems, devices and methods
WO2016142794A1 (en) 2015-03-06 2016-09-15 Wal-Mart Stores, Inc Item monitoring system and method
US11158039B2 (en) 2015-06-26 2021-10-26 Cognex Corporation Using 3D vision for automated industrial inspection
KR101910484B1 (en) 2015-06-26 2018-10-22 코그넥스코오포레이션 A method for three dimensional (3d) vision inspection
CN106610522A (en) * 2015-10-26 2017-05-03 南京理工大学 Three-dimensional microscopic imaging device and method
CA2961938A1 (en) 2016-04-01 2017-10-01 Wal-Mart Stores, Inc. Systems and methods for moving pallets via unmanned motorized unit-guided forklifts
WO2018041408A1 (en) * 2016-08-31 2018-03-08 Sew-Eurodrive Gmbh & Co. Kg System for sensing position and method for sensing position
JP2018041247A (en) * 2016-09-07 2018-03-15 ファナック株式会社 Server, method, program, and system for recognizing individual identification information of machine
CN107145823A (en) * 2017-03-29 2017-09-08 深圳市元征科技股份有限公司 A kind of image-recognizing method, pattern recognition device and server
CN106940807A (en) * 2017-04-19 2017-07-11 深圳市元征科技股份有限公司 A kind of processing method and processing device based on mirror device of looking in the distance
CN114543684B (en) * 2022-04-26 2022-07-12 中国地质大学(北京) Structural displacement measuring method


Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4601053A (en) * 1983-11-21 1986-07-15 Grumman Aerospace Corporation Automatic TV ranging system
JPS60200103A (en) * 1984-03-26 1985-10-09 Hitachi Ltd Light cutting-plate line extraction circuit
JPH07109625B2 (en) * 1985-04-17 1995-11-22 株式会社日立製作所 3D stereoscopic method
US4924506A (en) * 1986-07-22 1990-05-08 Schlumberger Systems & Services, Inc. Method for directly measuring area and volume using binocular stereo vision
JPS63288683A (en) * 1987-05-21 1988-11-25 株式会社東芝 Assembling robot
US4982438A (en) * 1987-06-02 1991-01-01 Hitachi, Ltd. Apparatus and method for recognizing three-dimensional shape of object
US5179441A (en) * 1991-12-18 1993-01-12 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Near real-time stereo vision system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4900128A (en) * 1988-11-01 1990-02-13 Grumman Aerospace Corporation Three dimensional binocular correlator
US5392211A (en) * 1990-11-30 1995-02-21 Kabushiki Kaisha Toshiba Image processing apparatus
US5309522A (en) * 1992-06-30 1994-05-03 Environmental Research Institute Of Michigan Stereoscopic determination of terrain elevation

Also Published As

Publication number Publication date
CN1202239A (en) 1998-12-16
AU7331696A (en) 1997-06-05
WO1997018523A2 (en) 1997-05-22
IL115971A (en) 1997-01-10
EP0861415A4 (en) 2000-10-25
JP2000500236A (en) 2000-01-11
KR19990067273A (en) 1999-08-16
IL115971A0 (en) 1996-01-31
EP0861415A2 (en) 1998-09-02
BR9611710A (en) 1999-12-28
WO1997018523A3 (en) 1997-07-24

Similar Documents

Publication Publication Date Title
AU738534B2 (en) Computer stereo vision system and method
CN107229908B (en) A kind of method for detecting lane lines
CN101408931B (en) System and method for 3d object recognition
Mohr et al. Projective geometry for image analysis
EP0363339A2 (en) Mobile robot navigation employing ceiling light fixtures
US6768813B1 (en) Photogrammetric image processing apparatus and method
CN110969663A (en) Static calibration method for external parameters of camera
CN103268621B (en) A kind of house realistic picture generates method and apparatus
CN102395994A (en) Omnidirectional image processing device and omnidirectional image processing method
ITMI942020A1 (en) NAVIGATION SYSTEM FOR AUTONOMOUS MOBILE ROBOT
CN106338287A (en) Ceiling-based indoor moving robot vision positioning method
CN104966318A (en) A reality augmenting method having image superposition and image special effect functions
Ho Close-range mapping with a solid state camera
CN100370226C (en) Method for visual guiding by manual road sign
CN101980292B (en) Regular octagonal template-based board camera intrinsic parameter calibration method
CN106846243A (en) The method and device of three dimensional top panorama sketch is obtained in equipment moving process
Coulombeau et al. Vehicle yaw, pitch, roll and 3D lane shape recovery by vision
CN107094232B (en) Positioner based on image recognition
CN207115438U (en) Image processing apparatus for vehicle-mounted fisheye camera
Traffelet et al. Target-based calibration of underwater camera housing parameters
Ye et al. Sensor planning for object search.
Alvertos et al. Omnidirectional viewing for robot vision
DE102020214251A1 (en) Method for providing monitoring data for detecting a moving object, method for detecting a moving object, method for producing at least one predefined point-symmetrical area and device
CN105956996A (en) Fisheye image correction method, device, and system based on secondary refraction projection model
Rieder Trinocular divergent stereo vision

Legal Events

Date Code Title Description
FGA Letters patent sealed or granted (standard patent)
MK14 Patent ceased section 143(a) (annual fees not paid) or expired