WO2004042673A2 - Automatic, real time and complete identification of vehicles - Google Patents

Automatic, real time and complete identification of vehicles Download PDF

Info

Publication number
WO2004042673A2
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
image
vehicles
color
acquisition device
Prior art date
Application number
PCT/IL2003/000910
Other languages
French (fr)
Other versions
WO2004042673A3 (en)
Inventor
Shaie Shvartzberg
Oz Kfir
Original Assignee
Imperial Vision Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Imperial Vision Ltd. filed Critical Imperial Vision Ltd.
Priority to AU2003278585A priority Critical patent/AU2003278585A1/en
Publication of WO2004042673A2 publication Critical patent/WO2004042673A2/en
Publication of WO2004042673A3 publication Critical patent/WO2004042673A3/en

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/04Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/017Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules

Definitions

  • the present invention relates to identification of vehicles. More particularly, the invention relates to a method for real-time and automatic absolute identification of a vehicle's design model, vehicle color and the vehicle license plate number. The present invention also relates to a system using the identified design model, license number and color of the vehicle to check the status of the identified vehicle (stolen, faked etc.), and to commence a series of predetermined actions depending on the resulting status of the vehicle.
  • LPR License Plate Recognition
  • a picture of the vehicle is taken, from which its license plate number is extracted using known algorithm(s) that analyze characters in the picture.
  • LPR technology is widely implemented, for example, in toll roads, parking facilities, and by the police and military.
  • a major drawback of this technology is that it is incapable of discerning situations where a vehicle's license plate is swapped.
  • offenders can replace the sought-after license plate of their vehicle with the license plate of another, non-suspected vehicle, and avoid being caught.
  • A notion regarding obtaining the silhouette of a captured vehicle is shown in Figs. 1a to 1c (prior art).
  • Fig. 1a shows one original image of a vehicle that was 'caught' in the Field of View (FOV) of the corresponding camera (not shown).
  • Figs. 1b and 1c show the immediate silhouettes that correspond to the original image of the captured vehicle.
  • the latter technologies provide only a general and inaccurate shape of vehicles and, therefore, the exact design model of the vehicle cannot be obtained using these technologies.
  • a popular solution for coping with vehicle theft involves installation of a transmitter inside the vehicle, whose radiation is constantly monitored by a monitoring station. This way, the monitoring station is supposed to always have the exact, immediate location of the vehicle, and when it is stolen, the vehicle can be easily tracked down.
  • This approach suffers from significant drawbacks, which are associated with handling communication systems. For example, the signal may be intentionally, or unintentionally, blocked, and the transmitter, which is installed in a vehicle, may be easily removed, or neutralized, by thieves.
  • US 5,809,161 discloses an object-monitoring system, capable of detecting predetermined moving objects, from other moving or static objects. Image data acquired by a plurality of cameras, can be sent over a digital communication network to a central image processing system. However, only the license plate number, and only of those vehicles which are relatively large, can be automatically extracted (i.e., from the acquired image). The system disclosed in US 5,809,161 is particularly suited to monitoring and discriminating large vehicles in a multi-lane roadway, from other, smaller vehicles, which are ignored.
  • the recognition analysis of this system lacks important aspects, such as the vehicle's design model, its color, or a 3-D analysis of its shape, and consequently, oftentimes cannot spot vehicles that have undergone changes.
  • Another drawback of this system is that it is stationary, and not designed to be attached to moving objects, such as police vehicles.
  • the present invention relates to a method and system for allowing an automatic and real-time identification of a vehicle, by identifying the design model, color and license plate number of the vehicle.
  • the present invention also relates to a system using the identification results for checking the status of the vehicle.
  • the present invention is directed to a method, according to which a vehicle capturing site is capable of identifying the design model, color and license plate number of vehicles, by capturing corresponding images of the vehicles, by aiming a video acquisition device towards a preferred (monitored/covered) area (e.g., a road, a gate of a parking facility), where vehicles are expected to pass, in order to generate video data in the form of still pictures, or a continuous video stream, which represents essentially all of the incidents occurring in the Field Of View (FOV) of the video acquisition device.
  • the video data is acquired, digitized, processed and analyzed by a computer unit, thereby obtaining the design model, color and license plate number of captured vehicles.
  • By 'capturing a vehicle' is meant herein obtaining, or being able to obtain, an image of the vehicle in a video acquisition device, such as a camera, which occurs when the video acquisition device is active and the vehicle comes within the field of view of the device.
  • the method for automatic and real-time identification of a vehicle comprises the steps: a) Providing a video acquisition device that is located in known orientation, height and distance, with respect to a space within which vehicles are intended to be captured, and is capable of generating video data, in the form of still images, or a stream of consecutive video frames, that corresponds to an object(s) passing in front of the Field Of View (FOV) of the video acquisition device.
  • the video acquisition device may be a digital or analog video camera; b) Providing a database (i.e., Car Model Database - CMDB), which contains data, and/or features, and/or images, and/or structures, and/or descriptions, that correspond essentially to every existing vehicle's design model; and c) Providing a computer unit and corresponding processing software, for: c.1) allowing indicating to the computer unit that an object is in the FOV, for selecting, from the stream of consecutive video frames, an image picture that includes the image of the object; c.2) extracting an Area Of Interest (AOI) from the image picture, which includes the image of the object; c.3) extracting the image of the object from the AOI, by employing an edge-detection algorithm on the pixels of the AOI; c.4) determining whether the image of the object is an image of a vehicle; and c.5) whenever the image of the object is determined to be an image of a vehicle, extracting, from the image of the vehicle, the design model, color and license plate number of the vehicle.
  • a television monitor may be connected to the video acquisition device, for allowing locally, or remotely, monitoring the preferred covered area, and, thereby, allowing indicating to the computer unit the presence of a vehicle in the FOV.
  • the television monitor may also be utilized for carrying out calibration and maintenance procedures.
  • the computer unit may be initialized with parameters for optimizing the analysis of the continuous consecutive video frames, selection of the image picture, extraction of the object from the image picture, determination whether the object is a vehicle, and extraction of the design model, color and license plate number of the vehicle.
  • the parameters may be related to at least the relative height, tilt angle and distance of the video acquisition device, with respect to the preferred area, for normalizing the captured vehicle, for allowing comparison between key features that are part of the normalized image of the captured vehicle and key features that are related to existing vehicles' design models and are contained within the CMDB, thereby allowing obtaining the captured vehicle's design model.
  • the computer unit may be initialized with additional parameters, such as parameters related to a calibration color plate, for optimizing the color determination process, and the direction of moving vehicles, for optimizing the vehicle's image extraction process.
  • the indication to the computer unit, regarding the presence of a vehicle, or another object, in the FOV, and the selection of the image picture to process is carried out automatically, by: Analyzing, automatically and in real-time, a sequence of corresponding successive video frames, contained in the stream of video frames, that correspond to an object being captured by the video acquisition device.
  • the analysis includes utilization of the generated stream of video frames, for detection of 'motion center' of corresponding object(s), with respect to the FOV of the video acquisition device.
  • the motion center is detected by employing a motion-detection technique on the successive video frames.
  • the motion-detection technique utilizes a differentiation algorithm, according to which the characteristics of corresponding pixels of successive video frames are compared, and corresponding differences of characteristics are calculated, thereby detecting a moving object; and selecting, from the successive video frames, an image to be processed; the image is selected according to the image's occurrence instant, which is synchronized with the instant of detected motion of the vehicle, so as to ensure that the content of the (selected) image includes all, or at least most, of the object(s) that was captured by the video acquisition device.
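The patent describes the differentiation step only in general terms. A minimal sketch of such frame differencing, assuming the OpenCV and NumPy libraries and illustrative threshold values, might be:

```python
import cv2
import numpy as np

def detect_motion_center(prev_frame, curr_frame, diff_threshold=25,
                         min_changed_pixels=500):
    """Compare corresponding pixels of two successive frames and return
    the centroid (the 'motion center') of the changed pixels, or None
    if too few pixels changed for a moving object to be present."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)

    # Differentiation step: per-pixel absolute difference of characteristics.
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, changed = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)

    ys, xs = np.nonzero(changed)
    if len(xs) < min_changed_pixels:
        return None                        # no significant motion detected
    return int(xs.mean()), int(ys.mean())  # centroid of the changed pixels
```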
  • the indication of a vehicle being in the FOV is 'manual', i.e., the operator of the video acquisition device indicates to the computer unit that a vehicle is in the FOV, in order to obtain the image, which includes a still-picture of the vehicle.
  • selecting the image picture is carried out by employing a motion-detection algorithm that utilizes the identified instant at which the vehicle entered the FOV, the expected vehicle's average speed, and the number of video frames per second.
  • a middle frame is chosen from selected consecutive frames, which middle frame is most likely to contain the whole image of the object, said middle frame being said selected image picture.
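As an illustration of the middle-frame selection just described, the arithmetic reduces to estimating how many frames the vehicle spends in the FOV (all parameter names and values below are illustrative assumptions):

```python
def select_middle_frame(entry_frame_idx, fov_width_m, avg_speed_mps, fps):
    """Return the index of the frame most likely to contain the whole
    vehicle: the middle of its estimated transit through the FOV."""
    transit_time_s = fov_width_m / avg_speed_mps        # time to cross the FOV
    frames_in_fov = max(1, round(transit_time_s * fps)) # frames spent in FOV
    return entry_frame_idx + frames_in_fov // 2

# e.g., a 10 m wide FOV, 14 m/s expected speed, 25 fps:
# transit ~0.71 s -> ~18 frames -> pick the 9th frame after entry.
select_middle_frame(entry_frame_idx=0, fov_width_m=10.0,
                    avg_speed_mps=14.0, fps=25)
```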
  • selecting the image is carried out by employing a motion-detection algorithm that includes identification of the instants at which the vehicle enters and leaves the FOV, after which the image is selected from a frame contained between the corresponding (first and last) frames.
  • the motion-detection algorithm includes identifying a pixel, whose characteristics have been changed (i.e., thereby indicating movement), and tracking changes along pixels that form a trajectory that originates from this pixel.
  • extraction of the Area Of Interest (AOI) from the image is carried out by identifying the corresponding 'motion-center' of the image, by evaluating pixels whose characteristics are varied due to motion.
  • AOI Area Of Interest
  • the motion-detection stage may be utilized for obtaining the speed of the captured vehicle, by utilizing the a priori knowledge related to the relative location of the video acquisition device, with respect to the captured vehicle, and the number of video frames per second.
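Since the camera geometry and the frame rate are known, the speed estimate reduces to distance over elapsed time; a trivial sketch (the recovery of road distance from image coordinates is assumed to have been done already):

```python
def estimate_speed_mps(displacement_m, frame_count, fps):
    """Speed of the captured vehicle from the distance it traversed
    (recovered via the known camera position) over `frame_count` frames."""
    elapsed_s = frame_count / fps
    return displacement_m / elapsed_s
```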
  • More than one object may be captured by the same capturing device essentially at the same time, and each one of the objects may be handled essentially in the same way as described above, i.e., by identifying a corresponding AOI for each captured vehicle, in the same, or predecessor, or successor, individual image, and by analyzing separately each corresponding individual AOI.
  • extraction of the image of the object from the AOI is carried out by employing a segmentation process on the AOI.
  • the segmentation process includes: a) Filtering the image, so as to discard effects of ambient light conditions, dust, shadow, motion blur (i.e., caused by the movement of the captured vehicle), clouds, etc., in order to obtain an essentially noiseless picture of the captured object; b) Obtaining the silhouette of the object and contours contained therein, by employing an edge-detection algorithm; c) Whenever and wherever required, completing the silhouette and contours by adding missing sections, or 'dots', by employing corresponding ('Line Tracking') algorithms; d) Obtaining a binary image of the silhouette and contours, by converting the reconstructed silhouette and contours to white lines on a black background; and e) Extracting the area that is confined within the closed silhouette. The latter extracted area is the image of the object.
  • completing the silhouette and contours is performed by employing ("Line Tracking") interpolation algorithm(s).
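A sketch of segmentation steps (a) to (e), assuming OpenCV, with morphological closing standing in for the patent's 'Line Tracking' completion of broken silhouette sections:

```python
import cv2
import numpy as np

def extract_object_image(aoi_bgr):
    """Return the captured object: the area confined within the largest
    closed silhouette found in the AOI, or None if no silhouette closes."""
    # (a) low-pass filtering to suppress noise, dust and motion blur
    smoothed = cv2.GaussianBlur(aoi_bgr, (5, 5), 0)

    # (b) edge detection yields the silhouette and inner contours
    gray = cv2.cvtColor(smoothed, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)

    # (c) close small gaps in the silhouette (stand-in for 'Line Tracking')
    kernel = np.ones((7, 7), np.uint8)
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

    # (d) 'closed' is already binary: white lines on a black background

    # (e) keep only the area confined within the largest closed silhouette
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    silhouette = max(contours, key=cv2.contourArea)
    mask = np.zeros(gray.shape, np.uint8)
    cv2.drawContours(mask, [silhouette], -1, 255, thickness=cv2.FILLED)
    return cv2.bitwise_and(aoi_bgr, aoi_bgr, mask=mask)
```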
  • the determination whether an object is a vehicle is carried out by utilization of a 'size-threshold' that depends on the relative location of the video acquisition device, with respect to the preferred monitored area, and the relative area of the image of the object, with respect to the area of the corresponding FOV.
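A minimal sketch of such a size test; the 5% threshold is an illustrative assumption that would in practice be calibrated from the camera's known position:

```python
def is_vehicle(object_mask, fov_area_px, size_threshold=0.05):
    """Decide whether the segmented object is a vehicle by comparing its
    area, relative to the FOV area, against a site-dependent threshold."""
    object_area_px = int((object_mask > 0).sum())
    return object_area_px / fov_area_px >= size_threshold
```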
  • the order of the extraction of the vehicle's parameters is as mentioned above (i.e., extracting first the vehicle's design model, then the color and license plate number).
  • the indicated order is only an option.
  • the order may be reversed, or extraction of the parameters may be carried out in parallel, rather than in series.
  • the order of the extraction of the vehicle's parameters is affected by computation resources and speed, and hardware considerations.
  • extracting the vehicle design model of the (related) vehicle is carried out by: a) identifying, in the image of the object, the silhouette and contours of the vehicle; b) mathematically characterizing preferred key features that are part of, and/or resulting from, the silhouette and contours; c) obtaining a mathematical model of the related vehicle, by mathematically characterizing the overall surface area of the related vehicle, by utilizing a group of characterized key features that allow calculation of the overall surface area.
  • the grouped key features are selected from the preferred key features; d) correlating the obtained mathematical model with data that is contained within the CMDB and corresponds to existing vehicle design models, for assigning a primary probability value to each one of the existing vehicle design models; and e) choosing the vehicle design model that is assigned the highest primary probability value as the preferred vehicle design model of the related (i.e., captured) vehicle.
  • the mathematical model is a 3-dimensional (3D) model, and is obtained by applying trigonometric calculations on a corresponding 2- dimensional model.
  • the mathematical model is a 2- dimensional (2D) model.
  • obtaining the mathematical model further includes utilization of known "shape from X" techniques, wherein "X" denotes, e.g., "shading", "motion", "contour", "texture", "silhouette", "sequence of pictures", "gradient".
  • the latter known techniques are used in computerized vision systems, for obtaining fast and accurate reconstruction of the three-dimensional (3D) description, or representation, of an object that is represented by a two-dimensional (2D) image that was produced by, e.g., a video camera. These techniques allow computing the relative depth of points in the 2D image of the object, and 'building' a surface representation of the acquired, or captured, object.
  • Probability threshold value and probability margin may be utilized, for enhancing the vehicle design model determination process. Accordingly, the process for extraction of the design model of the vehicle may further comprise: a) determining a probability threshold value (hereinafter the "threshold value"), and a probability margin; b) if every primary probability value is smaller than the threshold value, performing, per vehicle design model: b.1) choosing a key feature (hereinafter the "additional key feature") from the preferred key features.
  • the additional key feature is preferably a key feature that is not of the group (i.e., of selected key features); b.2) correlating the additional key feature with data that is related to an equivalent key feature of the vehicle design model, and is contained within the CMDB, thereby assigning an additional probability value to the vehicle design model; b.3) assigning a probability weight to the vehicle design model.
  • the probability weight calculation is based on the primary probability value and on the additional probability value assigned to the corresponding vehicle design model; b.4) repeating steps b.1) to b.3) until at least one probability weight is obtained, which has a value higher than the threshold value; b.5) if the probability weight, which has the highest value, has a probability margin, with respect to the next highest probability weight, that is smaller than a predefined probability margin, performing another iteration by repeating steps b.1) to b.3), until a probability weight is obtained (hereinafter the "preferred probability weight"), which has the highest value, that is higher than the threshold value, and whose margin, with respect to any one of the other probability weights, is larger than the predetermined margin.
  • the vehicle design model that is related to the preferred probability weight is determined as the design model of the captured vehicle; otherwise b.6) determining the design model related to the highest probability weight as the design model of the captured vehicle; otherwise if the margin, between the highest primary probability value and the next highest primary probability value, is smaller than the predefined margin value, performing step b.5).
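The patent leaves the exact weighting rule open; the sketch below uses a running average as a stand-in for that rule and otherwise follows steps b.1) to b.6), returning the model that first clears both the threshold and the margin:

```python
def refine_design_model(primary, extra_features, cmdb_match,
                        threshold=0.85, margin=0.05):
    """primary          -- {model: primary probability from grouped features}
    extra_features      -- additional key features, most discriminative first
    cmdb_match(m, f)    -- probability in [0, 1] that feature f matches model m
    """
    weights = dict(primary)
    for feature in extra_features:
        for model in weights:
            # b.2)-b.3): fold the additional probability into the weight
            # (a running average stands in for the unspecified rule)
            weights[model] = (weights[model] + cmdb_match(model, feature)) / 2
        ranked = sorted(weights.values(), reverse=True)
        best = max(weights, key=weights.get)
        runner_up = ranked[1] if len(ranked) > 1 else 0.0
        # b.4)-b.5): stop once one weight clears both threshold and margin
        if weights[best] >= threshold and weights[best] - runner_up >= margin:
            return best
    return max(weights, key=weights.get)  # b.6): fall back to highest weight
```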
  • the key features, being characteristic geometric features, are selected from, or related to, at least lengths, surfaces, curves, angles between preferred lines, line crossings and distances between prominent elements of the vehicle (e.g., the distance between the vehicle's wheels, or between headlights, or external mirrors), which are selected from the silhouette of the vehicle and from the contours of prominent areas of the vehicle, for example, the roof, and/or engine cover, and/or doors, and/or baggage cover, and/or lights, and/or mirrors of the vehicle.
  • the selected key features are mathematically characterized (e.g., in the form of mathematical equations, the coefficients of which are correlated with corresponding coefficients that are stored in the CMDB, in order to determine the vehicle's design model), in order to avoid performing a 'pixel-to-pixel' based comparison process, which would require much more computational power and would take a lot of time to accomplish.
  • the identification of the silhouette of the vehicle's image and the contours of the vehicle's prominent elements is implemented by employing a segmentation process.
  • the segmentation process includes employing a 'differentiation operator' on the pixels in the picture contained within the AOI, after which pixels having relatively high value are registered. The latter pixels represent the border points between adjacent distinguishable areas, and a collection of corresponding pixels forms corresponding border lines, which represent the required silhouette and contour lines.
  • the silhouette and contours are normalized (i.e., scaled and, whenever required, rotated), in order to allow correlation between (the normalized) key features of the captured vehicle and 'full/real sized' key features belonging to actual vehicle's models.
  • the normalization includes scaling and, whenever required, rotation of the picture in the AOI.
  • the scaling factor is determined according to the known distance of the video acquisition device from captured vehicles, and the rotation angle is determined according to the known relative height and angle (i.e., orientation) of the video acquisition device with respect to the captured vehicles.
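A simplified normalization sketch, assuming OpenCV; a production system would likely apply a full perspective correction, whereas this collapses the known geometry into a single scale and rotation:

```python
import cv2

def normalize_aoi(aoi, distance_m, ref_distance_m, tilt_deg):
    """Scale by the known camera-to-vehicle distance and rotate by the
    known tilt angle, so the extracted key features can be correlated
    with the full-size key features stored in the CMDB."""
    scale = distance_m / ref_distance_m      # farther vehicle -> enlarge
    h, w = aoi.shape[:2]
    transform = cv2.getRotationMatrix2D((w / 2, h / 2), tilt_deg, scale)
    return cv2.warpAffine(aoi, transform, (w, h))
```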
  • the Car Model Database is external to the computer unit, and communicates with the computer unit by utilizing a bidirectional communication channel.
  • the CMDB resides within the computer unit.
  • the color of the vehicle is extracted by: a) Analyzing the characteristics of the pixels of the AOI, for allowing calculating averaged color values. Each one of the averaged color values corresponds to a different area of the vehicle (e.g., doors); b) Choosing the averaged color having the maximal value as the representative color of the vehicle; c) Utilizing known color reference, for measuring the effect of ambient factors on the known color reference; d) Modifying the representative color of the vehicle according to the measured effect of the ambient factors. The modified representative color is the true color of the vehicle.
  • modifying the representative color is implemented by utilizing color control techniques, such as AGC (Automatic Gain Control) technique or white balance.
  • analyzing the pixels and calculating averaged color values are limited to selected pre-defined areas, where the probability to detect painted parts of the vehicle is relatively high, for example, the lower part of the doors, or the engine cover.
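A sketch of the color pipeline described above, assuming NumPy; the per-channel gain derived from the calibration plate stands in for the AGC/white-balance modification the patent mentions:

```python
import numpy as np

def extract_true_color(aoi_bgr, region_masks, plate_measured, plate_known):
    """region_masks  -- boolean masks of pre-defined painted areas (e.g.,
                        the lower part of the doors, the engine cover)
    plate_measured   -- calibration plate color as currently imaged
    plate_known      -- the plate's true, reference color
    """
    # a)-b): average the color over each region, pick the dominant one
    averages = [aoi_bgr[mask].mean(axis=0) for mask in region_masks]
    representative = max(averages, key=lambda c: c.sum())

    # c)-d): measure how ambient factors distorted the known reference,
    # then undo that distortion on the representative color
    gain = np.asarray(plate_known) / np.maximum(np.asarray(plate_measured), 1)
    return np.clip(representative * gain, 0, 255)
```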
  • the License Plate Number of the vehicle is obtained by employing the known License Plate Recognition (LPR) technique, which utilizes an Optical Character Recognition (OCR) algorithm.
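The patent names LPR and OCR only generically; a sketch using the open-source Tesseract engine (an assumed implementation choice, not named in the patent) could be:

```python
import cv2
import pytesseract  # Python wrapper for the Tesseract OCR engine

def read_plate(plate_bgr):
    """Binarize a cropped license plate region and run single-line OCR,
    restricted to characters that can legally appear on a plate."""
    gray = cv2.cvtColor(plate_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(
        binary,
        config='--psm 7 -c tessedit_char_whitelist='
               '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ')
    return text.strip()
```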
  • the video acquisition device is stationary, with respect to the preferred covered area, and the color reference is a standard calibration colored plate, which is located in a fixed location in a way that the calibration colored plate continuously appears in a pre-determined size and location in the FOV of the video acquisition device.
  • the effect of the ambient factors (e.g., light, shadows, dust) on the color reference is measured, and the representative color of the vehicle is modified according to the measurement results (i.e., relating to the color reference) at the time of the actual capturing of the vehicle's image.
  • the modified representative color of the vehicle is the true color of the vehicle.
  • the video acquisition device is non-stationary, i.e., it is attached to, e.g., a police car that travels along a road, seeking vehicles; in this case, the image of the captured vehicle is a still picture.
  • the color reference is obtained by periodically facing a standard calibration colored plate in front of the video acquisition device, and measuring the effect of the ambient factors thereon. The more frequently the effect of the ambient factors (i.e., on the standard calibration colored plate) is measured, the more accurate the modification of the representative color of the vehicle is, and, consequently, the more accurate is the vehicle's color determination process.
  • a color reference, in the form of a relatively small standard calibration colored plate, may be continuously positioned in a fixed location in front of the video acquisition device, in order to allow continuous measurement of the effect of ambient factors, while allowing the video acquisition device to capture images of vehicles.
  • a distance measuring sub-system is provided, the task of which is to continuously measure the distance between the (moving) video acquisition device and the (moving) vehicle whose image is to be captured.
  • the measured distance and the (known) height of the video acquisition device (i.e., being installed on, e.g., a police car) allow the computer unit to optimize the normalization of the image captured by the video acquisition device.
  • the method further comprises providing a Work Model Database (WMDB), which includes data (hereinafter referred to as the "vehicles profiles") related to vehicles of interest (e.g., vehicles registered as stolen, a vehicle that is allowed to enter restricted area).
  • Each one of the vehicles profiles includes a unique combination of license plate number, color and model related to specific vehicle of interest, and, optionally, additional data, such as the vehicle's owner, the owner's details (i.e., residence, occupation, driving license, accidents the owner was involved in, etc.).
  • the WMDB may reside within the computer unit, or be external to the computer unit. Additionally, the content of the WMDB may be temporary, i.e., data, which is related to a specific vehicle of interest, may be deleted or updated after the latter vehicle is identified and a corresponding response is commenced.
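As an illustration, a "vehicles profile" record and the comparison against it might look as follows (field names are assumptions made for the sketch):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class VehicleProfile:
    """One WMDB entry: the unique plate/color/model combination of a
    vehicle of interest, plus optional owner data."""
    license_plate: str
    color: str
    design_model: str
    status: str                        # e.g., 'stolen', 'allowed entry'
    owner_name: Optional[str] = None
    owner_details: Optional[str] = None

def match_profiles(wmdb: List[VehicleProfile],
                   plate: str, color: str, model: str) -> List[VehicleProfile]:
    """Compare a captured vehicle's extracted details against the WMDB."""
    return [p for p in wmdb
            if p.license_plate == plate
            and p.color == color
            and p.design_model == model]
```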
  • the method further comprises providing a transceiver unit.
  • the transceiver unit may be located at the capturing site, for allowing exchanging data between the computer unit residing within the capturing site, and a central control station.
  • the method may further comprise utilization of a plurality of capturing sites, in the same manner as described hereinabove, which are deployed in predetermined locations, and communication between the plurality of capturing sites and a central control station.
  • the method further comprises providing a central control station, for communicating with the plurality of capturing sites.
  • the central control station includes a Global Model Database (GMDB), which includes data that is related to essentially every registered vehicle and existing vehicle design model.
  • the data related to design model of vehicles is stored in the GMDB in the form of corresponding vector characteristics, which represent key features and mathematical models that characterize existing vehicle design models.
  • the latter key features and mathematical models may be forwarded to chosen CMDBs (i.e., in the respective capturing site), for allowing extraction of design models of captured vehicles.
  • the central control station may further include a central computer and a transceiver, for allowing the central computer to exchange messages with each one of the computer units residing in the respective capturing sites.
  • the method further comprises the steps: a) Continuously operating the video acquisition device in each one of the vehicle capturing sites; b) Capturing every image of every vehicle that enters the FOV of a respective video acquisition device, and performing the steps: b.1) extracting the details (i.e., license plate number, color and model) from the respective captured image(s); b.2) identifying and unconditionally transmitting, essentially in real-time, the extracted details of each vehicle to the central control station (e.g., police station); b.3) comparing, in the central control station, the vehicle's extracted details to the "vehicles profiles" that are stored in the WMDB; and b.4) responding according to the result of the comparison.
  • the method comprises the steps: a) Transmitting a request, from the central control station to the plurality of vehicle capturing sites, to commence a response only upon identification of specific vehicles of interest; and b) Whenever a specific vehicle of interest is identified, by at least one of the capturing sites, commencing a corresponding response by the respective capturing site.
  • the computer unit may carry out, or commence, one, or more, corresponding actions, depending on the type of match, selected from the group of: a. alerting an operator on the scene/capturing site; b. alerting a remote, central surveillance station; c. displaying on a local, and/or remote, screen every available detail related to the vehicle; d. issuing a printout of the vehicle's complete details; e. allowing the vehicle to enter a restricted area by opening a gate/barrier; f. ignoring the vehicle (when the vehicle is not classified as wanted); g. automatically closing, or opening, a gate; h. activating a siren; or i. dialing a predetermined telephone number.
  • commenced actions may be other than those specified hereinabove (i.e., actions a. to i.).
  • the commenced response may be, e.g., transmitting a corresponding message back to the central control station, and/or opening, or closing, a gate, and/or activating a siren, and/or dialing a predetermined telephone number, etc.
  • the central control station may transmit a request only to preselected capturing sites (i.e., according to predetermined criteria).
  • the Car Model Database (CMDB), in each capturing site, may be automatically updated with data that corresponds to vehicles' design models, by communicating with the Global Model Database (GMDB).
  • the WMDB, in each capturing site, may be automatically updated with data that is related to the status of vehicles, by communicating with the GMDB.
  • the method may comprise providing a central Graphic User Interface (GUI), for allowing a person to interact with the central computer, and, through the central computer, with the vehicles capturing sites.
  • the interaction may include at least updating the content of the three databases (i.e., GMDB, CMDB and WMDB), exchanging messages between the central computer and the computer units residing in the respective capturing sites, transmitting requests, inquiries and directives from the central computer to the capturing sites, presenting a picture, and related data, of specific wanted vehicles that are identified, etc.
  • each one of the vehicle capturing sites may include an independent GUI, for allowing operation, calibration and maintenance of the respective vehicle capturing site.
  • the capturing site is independent; i.e., the capturing site is a 'stand-alone' facility, operating without communicating with a central control station.
  • the present invention also provides a method for allowing automatic and real-time identification of a design model of a vehicle, which comprises the steps: a) Defining a number of characteristic geometric features in the appearance of vehicles, the combination or concurrent presence of them being specific to given models of vehicles; b) Representing each characteristic geometric feature by a digital word (i.e., identifying word), and memorizing the correspondence between the characteristic geometric features and the corresponding, or related, identifying words.
  • Each one of the digital (identifying) words may be obtained by employing any known digital compression technique on data that represents the corresponding characteristic geometric feature; c) Creating a program for identifying the characteristic geometric features from an image, or images, of vehicles; d) Determining and memorizing the identifying words of all, or a sufficiently large number, of known vehicle models; e) Continuously updating the memorized identifying words; f) Selecting an area for observation; g) Acquiring, by any kind of image generating apparatus, an image of the selected area in the absence of traffic, said image being memorized as background; h) Continuously, or at predetermined intervals, procuring an image of the selected area, by the image generating apparatus; i) Comparing each one of the procured images with the background, to extract image portions that are not in the background and are assumed to be of vehicles; j) Selecting from the extracted portion, by the program, the geometric features of the assumed vehicle that are among the characteristic geometric features defined in step a); k) Representing the features by corresponding digital words;
  • l) Comparing the resulting digital words with the digital words memorized in step d); and m) Determining the result of the comparison among the following: 1) the assumed vehicle has been determined to be in fact, or there is a high probability that it is in fact, a vehicle of a specific model; 2) the assumed vehicle has been determined to be in fact, or there is a high probability that it is in fact, one of the vehicle models of a group thereof; 3) same as 2), but with a given, low probability; 4) the digital words resulting from step l) show that there is a random chance that the assumed vehicle is of a specific model or of a limited group of specific models; 5) the digital words resulting from step l) show that the assumed vehicle is not of a known model, or is not a vehicle at all.
  • the result of the comparison is expressed as the number of identifying words, representing geometric features of the vehicle, that are the same as memorized identifying words, or have a number of digits that are the same as those of memorized identifying words.
  • a corresponding program, or table, may be utilized, for determining the aforesaid probabilities as a function of the aforesaid comparison result.
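A sketch of how characteristic geometric features could be quantized into identifying words and scored against the memorized words of known models; the patent requires only that features be compressed into digital words, so the quantization scheme here is an assumption:

```python
def feature_to_word(measurements, bits_per_value=8):
    """Pack a characteristic geometric feature (a tuple of normalized
    measurements in [0, 1]) into one compact digital identifying word."""
    levels = (1 << bits_per_value) - 1
    word = 0
    for value in measurements:
        word = (word << bits_per_value) | round(value * levels)
    return word

def comparison_score(vehicle_words, model_words):
    """Step m)'s raw result: how many identifying words of the assumed
    vehicle also appear among the memorized words of a known model."""
    return len(set(vehicle_words) & set(model_words))
```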
  • System for automatic and real-time identification of a vehicle comprising a plurality of vehicle capturing sites, for allowing an automatic and real-time identification of vehicles by identifying, or extracting, the design model, color and license plate number of the vehicles, and a central control station, capable of communicating with the capturing sites, for allowing checking the status of the vehicle by utilization of its identified, or extracted, model, color and license plate number.
  • Each one of the vehicle capturing sites includes at least a video acquisition device that is directed towards a preferred (monitored/covered) area (e.g., a road, a gate of a parking facility), where vehicles are expected to pass, in order to generate video data in the form of still pictures, or a continuous video stream, which represents essentially all of the incidents occurring in the Field Of View (FOV) of the video acquisition device.
  • the vehicle capturing site further includes a computer unit and corresponding software, for acquiring, digitizing, processing and analyzing the video data, for obtaining the design model, color and license plate number of the vehicle.
  • the system for automatic and real-time identification of a vehicle comprises a central control station and a plurality of capturing sites, each of which comprises: a) A video acquisition device, which is located in known orientation, height and distance, with respect to captured vehicles, and is capable of generating video data, in the form of still images, or a stream of consecutive video frames, that corresponds to objects passing in front of the Field Of View (FOV) of the video acquisition device.
  • the video acquisition device may be a digital or analog video camera;
  • b) A database (i.e., Car Model Database - CMDB), which contains data that corresponds essentially to every existing vehicle's design model; and c) A computer unit and corresponding processing software.
  • the computer unit further comprises means for: c.1) allowing indicating to the computer unit that an object is in the FOV, for selecting, from the stream of consecutive video frames, an image picture that includes the image of the captured object; c.2) extracting an Area Of Interest (AOI), from the image picture, which includes the image of the captured object; c.3) extracting the object from the AOI, by employing an edge-detection algorithm on the pixels of the AOI; c.4) determining whether the image of the object is an image of a vehicle; and, whenever the image of the object is determined to be an image of a vehicle, c.5) extracting the design model, color and license plate number of the related vehicle.
  • the system may further comprise, for each capturing site, a television monitor, which may be connected to the video acquisition device, for allowing locally, or remotely, monitoring the preferred covered area, and, thereby, allowing indicating to the computer unit the presence of a vehicle in the FOV.
  • the television monitor may also be utilized for carrying out calibration and maintenance procedures.
  • the computer unit, in each capturing site, may be initialized with parameters for optimizing the analysis of the continuous consecutive video frames, selection of the image picture, extraction of the object from the image picture, determination whether the object is a vehicle, and extraction of the design model, color and license plate number of the vehicle.
  • the parameters may be related to at least the relative height, tilt angle and distance of the video acquisition device, with respect to the preferred area, for normalizing the captured vehicle, for allowing comparison between key features that are part of the normalized image of the captured vehicle and key features that are related to existing vehicles' design models and are contained within the CMDB, thereby allowing obtaining the captured vehicle's design model.
  • the computer unit may be initialized with additional parameters, such as parameters related to a calibration color plate, for optimizing the color determination process, and the direction of moving vehicles, for optimizing the vehicle's image extraction process.
  • the indication to the computer unit, regarding the presence of a vehicle, or another object, in the FOV, and the selection of the image picture that should be processed, is carried out automatically.
  • the system may further comprise, for each computer unit: a) means for processing and analyzing automatically and in real-time a sequence of corresponding consecutive video frames contained in the stream of video frames, which correspond to an object being captured by the video acquisition device.
  • the analysis includes utilization of the generated stream of video frames for detection of 'motion center' of corresponding object(s), with respect to the FOV of the video acquisition device.
  • the motion center is detected by employing a motion-detection technique on the successive video frames.
  • the motion-detection technique utilizes a differentiation algorithm, according to which the characteristics of corresponding pixels of successive video frames are compared, and corresponding differences of characteristics are calculated, thereby detecting a moving object; and b) means for selecting, from the consecutive video frames, an image to be processed; the image is selected according to the image's occurrence instant, which is synchronized with the instant of detected motion of the vehicle, so as to ensure that the content of the (selected) image includes all, or at least most, of the object(s) that was captured by the video acquisition device.
  • a motion-detection algorithm for each computer unit, for allowing selecting the image picture (i.e., that includes the object), which utilizes the identified instant at which the vehicle enters into the FOV, the expected vehicle's average speed and the number of video frames per second.
  • the total number of video frames is calculated, and essentially the middle frame is selected, which is most likely to contain the whole image of the captured vehicle.
  • a motion-detection algorithm for each computer unit, for selecting the image picture, which identifies the instants at which the vehicle enters and leaves the FOV, after which the image picture is selected from frames contained between the corresponding (first and last) frames.
  • a motion-detection algorithm for each computer unit, for selecting the image picture, which includes identifying a pixel whose characteristics were changed (i.e., thereby indicating movement), and tracking changes along pixels that form a trajectory that originates from this pixel.
  • the system further comprises, for each computer unit, means for identifying the corresponding 'motion-center' of the image, by evaluating pixels whose characteristics are varied due to motion, for extracting the Area Of Interest (AOI) from the image picture.
  • the motion-detection stage may be utilized for obtaining the speed of the captured vehicle, by utilizing the a priori knowledge related to the relative location of the video acquisition device, with respect to the captured vehicle, and the number of video frames per second.
  • the indication of a vehicle being in the FOV is 'manual', i.e., the operator of the video acquisition device indicates to the computer unit that a vehicle is in the FOV, in order to obtain the image picture, which includes a still-picture of the vehicle.
  • More than one object may be captured by the same capturing site essentially at the same time, and each one of the objects may be handled essentially in the same way as described above, i.e., by identifying a corresponding AOI for each captured vehicle, in the same, or predecessor, or successor, individual image, and by analyzing separately each corresponding individual AOI.
  • the system comprises, for each computer unit, extraction means, for extracting the (captured) object from the AOI.
  • the extraction means further comprises segmentation means, which includes: a) Means for filtering the image, so as to discard effects of ambient light conditions, dust, shadow, motion blur and clouds, in order to obtain an essentially noiseless picture of the captured object; b) Edge-detection algorithm, for obtaining the silhouette of the object and contours contained therein; c) Means for completing the silhouette and contours by adding missing sections, and/or 'dots', by employing corresponding ('Line tracking') algorithms; d) Means for generating binary image of the silhouette and contours; and e) Means for extracting the area confined within the closed silhouette.
  • the extracted area is the image of the (captured) object.
  • completion of the silhouette and contours is performed, by each computer unit, by employing ('Line Tracking') interpolation algorithm(s).
  • the determination means (i.e., for determining whether the image of the captured object is an image of a captured vehicle) utilizes a 'size-threshold' that depends on the relative location of the video acquisition device, with respect to the preferred monitored area.
  • the means for extracting the vehicle design model further comprises, for each computer unit: a) Means for identifying, in the image of object, the silhouette and contours of the vehicle; b) Means for mathematically characterizing preferred key features that are part of, and/or resulting from, the silhouette and contours; c) Means for generating a mathematical model of the related vehicle, by mathematically characterizing the overall surface area of the related vehicle, by utilizing a corresponding group of characterized key features.
  • the key features of the group are selected from the preferred key features; d) Means for correlating the generated mathematical model with data that is contained within the CMDB and corresponds to existing vehicle design models, for assigning a primary probability value for each one of the existing vehicle design models; and e) Means for choosing the vehicle's design model that is assigned the highest primary probability value as the preferred vehicle design model of the related (i.e., captured) vehicle.
  • the mathematical model is a 3-dimensional (3D) model, and is obtained from the 2D image of the captured vehicle by employing known techniques, such as the "shape from X" techniques.
  • Probability threshold value and probability margin may be utilized, by each computer unit, for enhancing the vehicle design model determination process.
  • the means for extraction of the design model of the vehicle may further comprise: a) Means for determining a probability threshold value (hereinafter the "threshold value"), and a probability margin; b) Means for allowing selecting individual key features ("additional key features") from the preferred key features. The individual key features are correlated with data that is related to equivalent key features of vehicle design models that are contained within the CMDB.
  • the correlation process results in assigning corresponding additional probability values to the respective vehicle design models.
  • the selected key features are preferably key features that are not in the group (i.e., the group of selected key features); c) means for assigning a probability weight to each one of the vehicle design models. The probability weight calculation is based on the primary probability value and on the additional probability values assigned to each one of the vehicle design models; and d) means for assessing the probability weights, for allowing determining the (preferred) vehicle design model as the design model of the captured vehicle.
  • the key features are mathematically characterized; i.e., the key features are represented by mathematical equations, the coefficients of which are intended to be correlated with corresponding coefficients that are related to equivalent key features of corresponding design models and stored in the CMDB.
  • the system comprises, for each computer unit, segmenting means, for allowing identification of the silhouette of the vehicle's image and of the contours of the vehicle's prominent elements.
  • the segmenting means may include a 'differentiation operator', which is employed on the pixels in the picture contained within the AOI, for registering pixels having relatively high value. The latter pixels represent the border points between adjacent distinguishable areas, and a collection of corresponding pixels forms corresponding border lines, which represent the silhouette and contour lines.
  • the system comprises, for each computer unit, normalization means, for normalizing (i.e., scaling and, whenever required, rotating) the silhouette and contours, in order to allow correlation between (the normalized) key features of the captured vehicle and 'full/real sized' key features belonging to actual vehicle models.
  • the normalization includes scaling and, whenever required, rotation of the picture in the AOI.
  • the scaling factor is determined according to the known distance of the video acquisition device from captured vehicles, and the rotation angle is determined according to the known relative height and angle (i.e., orientation) of the video acquisition device with respect to the captured vehicles.
  • the CMDB is external to the respective computer unit and communicates with the latter computer unit by utilizing a bidirectional communication channel. According to another aspect, the CMDB resides within the respective computer unit.
  • the system comprises, for each computer unit, color (i.e., of the vehicle) extraction means, which comprises: a) means for analyzing the characteristics of the pixels of the AOI, for allowing calculating averaged color values. Each one of the averaged color values corresponds to a different area of the vehicle (e.g., doors); b) means for choosing the averaged color having the maximal value as the representative color of the vehicle; c) Known color reference, for measuring the effect of ambient factors on the known color reference; d) Means for modifying the representative color of the vehicle according to the measured effect of the ambient factors on the known color reference. The modified representative color is the true color of the vehicle.
  • the representative color is modified by utilizing color control techniques, such as AGC (Automatic Gain Control) or white balance.
  • the license plate number of the vehicle is obtained, by each computer unit, by employing the known License Plate Recognition (LPR) technique, which utilizes an Optical Character Recognition (OCR) algorithm.
  • when the video acquisition device (i.e., the 'capturing site') is stationary, the color reference is a standard calibration colored plate, which is located in a fixed location in a way that the calibration colored plate continuously appears in a pre-determined size and location in the FOV of the video acquisition device.
  • when the video acquisition device (i.e., the 'capturing site') is non-stationary, the image of the captured vehicle is a still picture.
  • the color reference is obtained by periodically facing a standard calibration colored plate in front of the video acquisition device, and measuring the effect of the ambient factors thereon. The more frequently the effect of the ambient factors (i.e., on the standard calibration colored plate) is measured, the more accurate the modification of the representative color of the vehicle is, and, consequently, the more accurate is the vehicle's color determination process.
  • a color reference in the form of a relatively small standard calibration colored plate, may be continuously positioned in a fixed location in front of the video acquisition device, in order to allow continuous measurement of the effect of ambient factors, while allowing the video acquisition device to capture images of vehicles.
  • a distance measuring sub-system is provided, the task of which is to continuously measure the distance between the (moving) video acquisition device and the (moving) vehicle whose image is to be captured.
  • the measured distance and the (known) relative height of the video acquisition device allow the computer unit to optimize the normalization of the image captured by the video acquisition device.
  • a Work Model Database (WMDB) is provided, which includes data (hereinafter the "vehicles profiles") related to vehicles of interest (e.g., vehicles registered as stolen, a vehicle that is allowed to enter a restricted area).
  • Each one of the vehicle profiles includes a unique combination of license plate number, color and model related to a specific vehicle of interest, and, optionally, additional data, such as the vehicle's owner, the owner's details (i.e., residence, occupation, driving license, accidents the owner was involved in, etc.).
  • the WMDB may reside within the respective computer unit, or be external to the latter computer unit. Additionally, the content of the WMDB may be temporary, i.e., data, which is related to a specific vehicle of interest, may be deleted or updated after the latter vehicle is identified and a corresponding response is commenced.
  • each one of the capturing sites may utilize a transceiver unit, for allowing the respective computer unit to exchange data with the central control station.
  • the plurality of capturing sites are deployed in predetermined locations.
  • the central control station comprises a Global Model Database (GMDB), which includes data that is related to the status of selected vehicles, and data that is related to essentially every existing vehicle design model, a central computer and a transceiver, for allowing the central computer to exchange messages with each one of the computer units residing in the respective capturing sites.
  • each one of the video acquisition devices (i.e., in each one of the respective vehicle capturing sites) is continuously operating, and data related to the vehicles' extracted details (i.e., extracted license plate number, color and model) of every captured vehicle, in each one of the capturing sites, is broadcast, essentially in real-time, to the central computer unit, where the received data is compared to the "vehicles profiles" that are stored in the WMDB, after which a corresponding response, which depends on the result of the comparison process, is commenced.
  • the central control station transmits a request to the computer units of the vehicle capturing sites, to commence a response only upon identification of specific vehicles of interest, meaning that whenever a specific vehicle of interest, as indicated by the central computer unit, is identified by at least one of the capturing sites, the corresponding capturing sites commence a corresponding response.
  • the computer unit may carry out one, or more, corresponding actions, depending on the type of match.
  • the corresponding actions may be selected from the group of: a. alerting an operator on the scene/capturing site; b. alerting a remote, central surveillance station; c. displaying on a local, and/or remote, screen every available detail related to the vehicle; d. issuing a printout of the vehicle's complete details; e. allowing the vehicle to enter a restricted area by opening a gate/barrier; f. ignoring the vehicle (when the vehicle is not classified as wanted); g. automatically closing, or opening, a gate; h. activating a siren; or i. dialing a predetermined telephone number.
  • the central control station may transmit a request only to preselected capturing sites (i.e., according to predetermined criteria).
  • the Car Model Database (CMDB) in each capturing site may be automatically updated with vehicle's models, by communicating with the Global Model Database (GMDB).
  • the central control station may further include a central Graphic User Interface (GUI), for allowing a person to interact with the central computer, and, through the central computer, with the computer units in the respective vehicles capturing sites.
  • the interaction may include at least updating the content of the three databases (i.e., GMDB, CMDB and WMDB), exchanging messages (i.e., between the central computer and the computer units), transmitting requests, inquiries and directives from the central control station to the computer units residing in the respective capturing sites, presenting a picture, and related data, of specific wanted vehicles that are identified, etc.
  • each one of the vehicle capturing sites may include an independent GUI, for allowing operation, calibration and maintenance of the respective vehicle capturing site.
  • a capturing site may be independent; i.e., a capturing site is a 'stand-alone' facility, operating without communicating with a central control station.
• Figs. 1a to 1c show an image of an exemplary captured vehicle, and the vehicle's 'straightforward' silhouette and manipulated side silhouette, respectively;
  • Figs. 2a and 2b show image pictures of the left-hand side and back side of a captured vehicle, respectively, according to a preferred embodiment of the present invention
• Figs. 3a and 3b show the silhouette and contours that correspond to the image pictures shown in Figs. 2a and 2b after employing an image segmentation process, according to the preferred embodiment of the invention;
  • Fig. 4 shows a 'negative' picture of the silhouette and contours shown in Fig. 3b, which is used for extraction of key features, for determining the model of the vehicle;
  • Fig. 5 shows exemplary steps for extracting color, license plate number and model of a vehicle, according to a preferred embodiment of the present invention.
  • Fig. 6 schematically illustrates a basic system that comprises a capturing site and a control station.
  • Figs. 2a and 2b show exemplary digitized image pictures of the left-hand side and back side of a vehicle that was captured in the FOV of the camera (not shown), respectively, according to a preferred embodiment of the present invention.
• Only one (digitized) image picture (i.e., the image picture shown, for example, in Fig. 2a or in Fig. 2b) is required for obtaining the silhouette and contours of the vehicle, and, therefrom, the model of the vehicle.
  • the image picture of a vehicle may be obtained in a 'straightforward' manner if the video acquisition device (not shown) is located in a fixed position with respect to the captured vehicle.
• a motion-detection algorithm is employed on the video stream, which is provided by the video acquisition device, to identify the occurrence of a motion. The identified occurrence of the motion allows extraction of the image picture that includes the image of the (wanted) vehicle.
  • the digitized image picture is utilized for identification of the license plate number, color and model of the vehicle whose image was captured, as is described in connection with the respective figure(s).
• Figs. 3a and 3b show the silhouette and contours, after an image segmentation process, that correspond to the digitized image pictures shown in Figs. 2a and 2b, respectively, according to the preferred embodiment of the invention.
  • the silhouette and contours of the vehicle are obtained by employing image analysis and segmentation techniques on the corresponding pixels of the digitized image that is included in the Area of Interest (AOI) (not shown), which is extracted from the FOV.
  • AOI Area of Interest
• Employing the image analysis and segmentation techniques allows identifying pixels ('points') that represent transitions (i.e., borders) between two adjacent elements, or objects, of the vehicle (e.g., between a window and the roof).
• the identified border pixels form essentially broken lines, which represent with great accuracy the captured vehicle.
  • image analysis and segmentation techniques return information associated with the intensity characteristics of a digitized image, which may include one or more objects.
  • the latter techniques utilize edge detection algorithms to detect edges, which are places, or points, in the digitized image, that correspond to object(s) boundaries.
• the more advanced edge detection techniques involve the use of color data to locate edges in a scene, as utilization of color differences between regions allows obtaining more precise edge detection. Further description of color edge detection may be found in, e.g., "A Computational Approach to Edge Detection" (IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-8, No. 6, November 1986, pp. 679-698), and "CS 223B Project: Color Edge Detection Results", Kate Starbird / John Owens (web site: http://graphics.stanford.edu/~jowens/223b/results.html).
  • the corresponding edges are identified by identifying abrupt changes in the intensity characteristics of the digitized image.
• the abrupt changes may be identified using one of the following criteria, i.e., the abrupt changes are associated with places wherein:
  • the first derivative of the intensity is larger in magnitude than a predetermined threshold
  • the second derivative of the intensity has a zero crossing.
• among the best-known segmentation and edge-detection methods are the Sobel, Prewitt, Roberts, Laplacian of Gaussian, zero-crossing and Canny methods. These known techniques utilize several derivative estimators, each of which depends on whether the derivative operation should be sensitive to horizontal edges, to vertical edges, or to both.
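By way of illustration only (the patent does not prescribe a particular implementation), the following minimal Python sketch applies the first-derivative criterion described above, using the Sobel estimator; the synthetic image and the threshold value are assumptions chosen for the example:

```python
import numpy as np
from scipy import ndimage

def detect_edges(gray, threshold=100.0):
    """Binary edge map: pixels where the first-derivative (gradient)
    magnitude exceeds a predetermined threshold."""
    dx = ndimage.sobel(gray.astype(float), axis=1)  # horizontal derivative
    dy = ndimage.sobel(gray.astype(float), axis=0)  # vertical derivative
    return np.hypot(dx, dy) > threshold             # gradient magnitude test

# Synthetic test: a bright rectangle (a stand-in for a vehicle) on a dark
# background; the detector returns the rectangle's border pixels.
image = np.zeros((120, 200))
image[40:90, 50:160] = 255.0
print(detect_edges(image).sum(), "border pixels found")
```

A zero-crossing test on the second derivative (e.g., a Laplacian of Gaussian) would be a drop-in alternative, per the second criterion above.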
  • Fig. 4 shows a 'negative' picture of the silhouette and contours shown in Fig. 3b.
• the silhouette and contours shown in Fig. 4 are the result of the vehicle being captured at a right angle with respect to the left-hand side of the vehicle. However, the vehicle may be captured from different angles and distances, in which case a 'normalization' process will take place, after which the resulting normalized image will appear essentially like the exemplary image shown in Fig. 3a, or in Fig. 3b.
  • the normalization process is associated with scaling (i.e., distance compensation) and rotation (i.e., angle compensation) of the obtained silhouette and contours, in order to allow correlation between (the normalized) key features that are part of, and/or resulting from, the silhouette and contours of the captured vehicle, and 'full real sized' key features belonging to actual models of ('real') vehicles, thereby allowing extracting the model of the vehicle.
  • the full/real sized key features are stored in a Car Model Database (CMDB) (not shown).
  • the scaling factor is determined according to the known distance of the video acquisition device from captured vehicles
  • the rotation angle is determined according to the known relative height and angle (i.e., orientation) of the video acquisition device with respect to the captured vehicles.
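For illustration, a minimal sketch of this normalization, assuming the camera distance and tilt angle are known from the installation; the 'reference distance' at which the stored features are considered full-sized, and all numeric values, are hypothetical:

```python
import numpy as np
from scipy import ndimage

def normalize_silhouette(binary_img, camera_distance_m,
                         reference_distance_m, tilt_angle_deg):
    """Scale the silhouette toward 'full/real size' using the known camera
    distance, then rotate it to compensate for the known camera tilt."""
    scale = camera_distance_m / reference_distance_m      # distance compensation
    scaled = ndimage.zoom(binary_img.astype(float), scale, order=0)
    rotated = ndimage.rotate(scaled, -tilt_angle_deg,     # angle compensation
                             reshape=True, order=0)
    return rotated > 0.5

silhouette = np.zeros((60, 100))
silhouette[20:45, 10:90] = 1
print(normalize_silhouette(silhouette, camera_distance_m=30.0,
                           reference_distance_m=15.0,
                           tilt_angle_deg=10.0).shape)
```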
  • the silhouette and contours are indicated as broken lines, or dots.
• missing sections are added to the broken lines of the silhouette and contours, by employing corresponding 'Line tracking' algorithms.
• reconstruction of the silhouette and/or contours may be optional in some cases, since the original broken lines may be adequate for identifying the required key features that are included within the silhouette and contours.
  • Key features may comprise selected portions, sections or segments of the completed or of the original, silhouette and/or contours.
  • lines 41 and 44 may be selected, which represent the unique shape of the back right-hand side lamp and roof of the exemplary vehicle, respectively.
  • lines 41 and 44 are mathematically characterized; i.e., corresponding mathematical equations are derived, which represent lines 41 and 44.
• the coefficients of the derived mathematical equations may be stored in a temporary storage array, so that, whenever required, they can be compared to corresponding data that is stored in the CMDB (not shown).
  • key features may be selected, which represent prominent areas, or surfaces, of parts, or elements, for example, the surface of bumper 46 and lamp 43.
  • key features may be selected, which represent lengths, for example, the maximal width (47) of the vehicle, and/or the width of the back window (48) and/or the distance between the (centers of the) wheels (49), and/or the height of the vehicle (50).
• key features may be selected, which represent typical angles; for example, angles α and β. Additionally, or alternatively, key features may be selected, which represent ratios between two elements, or other key features, of the vehicle, for example, between lines 41 and 44, between surfaces 43 and 46, between angles α and β, etc. Of course, other, or additional, key features may be selected, for enhancing the process of model extraction.
• a mathematical model is formed, which comprises a group of selected key features (for example, surface 46, the length 47 of the vehicle and angle β).
• the mathematical model is then correlated with the vehicle models that are stored in the CMDB, after which (normally) one, or more, vehicle model(s) are found which match the mathematical model.
  • the number and type of the key features are determined so as to allow only one correlation iteration; i.e., in most cases only one iteration would be required, between the mathematical model and the data stored in the CMDB, for finding only one vehicle model that perfectly matches the mathematical model.
  • the latter vehicle model would be determined to be the model of the captured vehicle.
• the mathematical model may be a two-dimensional (2D) model or a three-dimensional (3D) model.
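The patent leaves the exact representation of the mathematical model open; the sketch below treats a group of key features as a simple numeric vector, mirroring the examples given above (a surface, a length and an angle), and compares it with one stored CMDB entry. The similarity metric and all values are assumptions:

```python
import numpy as np

def make_model(surface_m2, length_m, angle_deg):
    """A hypothetical 2D 'mathematical model': the grouped key features
    expressed as a numeric vector (cf. surface 46, length 47, angle beta)."""
    return np.array([surface_m2, length_m, angle_deg])

captured = make_model(0.42, 1.69, 63.0)   # measured on the captured vehicle
stored = make_model(0.41, 1.70, 62.5)     # one illustrative CMDB entry

# A simple closeness score standing in for the 'correlation' the text names.
similarity = 1.0 / (1.0 + np.linalg.norm(captured - stored))
print(f"similarity: {similarity:.3f}")
```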
  • the present invention also provides a system, which utilizes the model of a vehicle, which is obtained in the way described above, for obtaining a complete (i.e., license plate number, color and model) identification of the vehicle, as described in Fig. 5.
  • Fig. 5 shows a general block diagram of the present invention.
  • Video camera 501 generates a continuous video stream that corresponds to vehicle 516 that passes by in the FOV of Video camera 501.
• the continuous video stream is forwarded to a computer unit (not shown), which utilizes software (not shown) for detecting a motion that is related to vehicle 516 or, whenever applicable, to other moving objects (not shown).
• Detecting a motion (i.e., in the video stream) is marked as step 502, which may be implemented using either software or hardware tools.
• After step 502, a check is made, in step 503, whether the (detected) moving object is a vehicle.
  • the check is based on prior knowledge that is related to the location of video camera 501 with respect to the monitored, or inspected, area, and to the expected relative area of vehicles with respect to the area of the FOV of video camera 501. If no motion is detected, or if there is motion detected, but of an object other than a vehicle, the system continues to check for motion that is associated with a vehicle (503a).
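As an illustration of this motion-and-size check (steps 502 and 503), the sketch below uses plain frame differencing; the thresholds and ratio bounds are assumptions that would be calibrated per installation:

```python
import numpy as np

def moving_object_area(prev_frame, curr_frame, diff_threshold=25):
    """Frame differencing: count the pixels whose intensity changed markedly
    between two successive frames (step 502)."""
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    return int((diff > diff_threshold).sum())

def is_vehicle(moving_area, fov_area, min_ratio=0.05, max_ratio=0.8):
    """Size-threshold check (step 503): the moving object is taken to be a
    vehicle if it occupies a plausible fraction of the FOV."""
    return min_ratio <= moving_area / fov_area <= max_ratio

prev = np.zeros((240, 320))
curr = prev.copy()
curr[100:180, 80:260] = 200                       # a large object entered the FOV
print(is_vehicle(moving_object_area(prev, curr),
                 fov_area=240 * 320))             # True: plausibly a vehicle
```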
  • the segmentation process is employed on a 'narrowed' picture that includes the image of the captured vehicle, rather than on the whole image picture.
  • an Area Of Interest (AOI) is extracted from the image picture (in step 504), which includes the vehicle's image.
  • the AOI is extracted by utilizing motion considerations; i.e., by identifying pixels whose characteristics indicate that a motion has occurred, or by identifying the 'center of motion' of the moving vehicle.
  • a corresponding AOI is extracted for each one of the detected vehicles.
• the digitized video content of (each one of) the AOI undergoes filtration, noise reduction, light condition correction and brightness control, in order to obtain an essentially noiseless picture (step 505), after which the model, color and license plate number of the vehicle are extracted (steps 511, 507 and 508, respectively).
  • the vehicle's silhouette and contours are first extracted from the corresponding AOI, by employing segmentation process (step 506), and preferred key features are identified in the silhouette and contours and mathematically characterized.
• a group of key features is formed, which consists of several key features that are selected from the preferred key features (step 506a).
  • the grouped key features are translated into a mathematical model (step 506b), which may represent either a 2D or 3D surface of the vehicle.
  • the mathematical model is correlated (step 506c) with the corresponding data stored in the CMDB (not shown), which is related to equivalent mathematical models of corresponding existing vehicle design models.
• the mathematical model may be obtained by using, e.g., the known "structure from X" algorithm, or "shape from X" algorithm.
  • the 2D (or 3D) surface represents the detected vehicle.
• correlation of the mathematical model with the equivalent mathematical models that correspond to respective existing vehicle design models yields different probability values for different vehicle design models, and the vehicle model that yields the highest probability value is determined to be the model of the (captured) vehicle.
  • a predetermined probability threshold and probability margin may be utilized, in order to enhance and improve the model-decision process.
• in some cases, correlation of the mathematical model may yield two, or more, high probability values with a very small margin between them, which would result in 'finding' two, or more, 'candidate' (i.e., matching) models.
• in such cases, in step 510a, a key feature (i.e., other than those forming the group) is selected from the key features identified in step 506, and correlated (step 510b) with data, corresponding to the equivalent key feature, that is stored in the CMDB.
• the key features selected in step 510a are selected according to apriori knowledge that is related to the matching models (e.g., wheel or light distances).
• Steps 510a and 510b are repeated until one vehicle model is determined to be the vehicle's model, based on the accumulating results of the correlation iterations.
  • the model determination conforms to the rule, according to which the more key features are involved in the correlation process (i.e., more iterations are performed), the more decisive the decision is regarding the true design model of the vehicle.
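A hedged sketch of this iterative decision rule (steps 510a/510b); the score-accumulation formula and the stand-in for the CMDB correlation are assumptions:

```python
def decide_model(primary_scores, extra_features, cmdb_score_fn,
                 threshold=0.9, margin=0.05):
    """Refine per-model probabilities with additional key features until one
    model clears the threshold and leads the runner-up by the margin."""
    scores = dict(primary_scores)
    for feature in extra_features:
        for model in scores:
            # accumulate: average the previous weight with the new score
            scores[model] = (scores[model] + cmdb_score_fn(model, feature)) / 2
        ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
        (best, best_w), (_, second_w) = ranked[0], ranked[1]
        if best_w >= threshold and best_w - second_w >= margin:
            return best
    return max(scores, key=scores.get)   # fall back to the highest weight

# Illustrative run with invented correlation scores for two close candidates.
primaries = {"Mazda 323 1.6L": 0.86, "Mazda 323 1.5L": 0.84}
fake_cmdb = lambda model, feat: 0.97 if model == "Mazda 323 1.6L" else 0.70
print(decide_model(primaries, ["wheel_distance", "lamp_shape"], fake_cmdb))
```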
  • the color of the vehicle is extracted, by first analyzing the characteristics of the pixels of the corresponding AOI, for allowing calculating averaged color values.
• Each one of the averaged color values corresponds to a different area, or element, of the vehicle, for example, doors, bumper, lamp covers, etc.
• the averaged color having the maximal value is chosen as the representative color of the vehicle.
  • a known color reference is utilized for measuring the effect of ambient factors on the color reference, and the representative color of the vehicle is modified according to the measured effect of the ambient factors on the known color reference.
  • the modified representative color is the true color of the vehicle.
  • the average color of the vehicle is extracted, taking into account all colored portions of the vehicle (i.e., ignoring windows and the like).
  • the color of a vehicle is deduced by comparing the sampled color, with a known reference color as appearing at the time of the sampling, and extrapolating the difference in the shading on the color sampled from the vehicle.
  • analyzing the pixels and calculating averaged color values are limited to selected pre-defined areas, where the probability to detect painted parts of the vehicle is relatively high, for example, the lower part of the doors, or the engine cover.
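The reference-plate correction described above can be sketched as a per-channel gain adjustment; the gain model and every RGB value below are assumptions made for the example:

```python
import numpy as np

def true_color(sampled_rgb, reference_measured_rgb, reference_known_rgb):
    """Correct the representative color for ambient conditions: the deviation
    observed on the known calibration plate is assumed to apply, channel by
    channel, to the color sampled from the vehicle."""
    gain = (np.asarray(reference_known_rgb, float)
            / np.maximum(np.asarray(reference_measured_rgb, float), 1.0))
    return np.clip(np.asarray(sampled_rgb, float) * gain, 0, 255).astype(int)

# At dusk a white reference plate (known value 240,240,240) reads dim and warm;
# the same correction brightens the sampled vehicle color accordingly.
print(true_color(sampled_rgb=(150, 40, 45),
                 reference_measured_rgb=(190, 180, 170),
                 reference_known_rgb=(240, 240, 240)))
```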
• In step 507, the vehicle's license plate number is identified.
  • the license plate number may be extracted, or identified, by utilizing other known techniques.
• the extracted vehicle details (i.e., the design model, color and license plate number) are compared (step 512) to vehicle profiles that are stored in a WMDB (513), which may be updated, with respect to specific vehicles of interest, from a Global Model Database (GMDB, 514). The vehicle profiles may include additional data, such as details of the vehicle's owner, the serial number of the vehicle's engine, accidents in which the vehicle was involved, whether the vehicle is wanted for any reason (and the specified reason), etc.
• If a perfect match is found, a first set of predetermined actions may be commenced (step 516). For example, the system may open a gate for allowing the vehicle to enter a restricted area. Another example would be identifying the instants at which a vehicle enters and exits a toll road. If there is only a partial match (step 517), for example, a match is found only with regard to the color and license plate number, or there is no match at all (519), the vehicle may be determined to be suspected as, e.g., stolen. Accordingly, a second predetermined set of actions may be commenced (step 518).
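A minimal sketch of this comparison-and-response logic; the field names and action labels are illustrative, not taken from the patent:

```python
def classify_match(extracted, profile):
    """Compare the extracted (plate, color, model) details with a vehicle
    profile and choose which predetermined action set to commence."""
    fields = ("plate", "color", "model")
    hits = sum(extracted[f] == profile.get(f) for f in fields)
    if hits == len(fields):
        return "first_action_set"    # perfect match: e.g., open the gate
    return "second_action_set"       # partial or no match: vehicle suspected

extracted = {"plate": "AA-BBB-CC", "color": "red", "model": "Mazda 323 1.6L"}
profile = {"plate": "AA-BBB-CC", "color": "blue", "model": "Mazda 323 1.6L"}
print(classify_match(extracted, profile))   # second_action_set: color differs
```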
  • Fig. 6 schematically illustrates the general layout and functionality of the system, according to the preferred embodiment of the present invention.
• Reference numeral 600 denotes an exemplary vehicle capturing site, which comprises at least video camera 601, computer unit 603, WMDB 605 (optional), Car Model Database (CMDB) 611 and transceiver 606.
• Video camera 601 generates video data, in the form of still images or a stream of video frames, that corresponds to object(s) passing in front of the FOV of video camera 601, for example, vehicle 610.
  • the video data that is captured by video camera 601 is processed and analyzed 'on-site', by computer unit 603 that is located relatively close to video camera 601.
  • Video camera 601 is directed to a preferred direction, so as to monitor a preferred area, where vehicles are likely to pass.
  • computer unit 603 employs a corresponding software package 604.
• the vehicle's color, license plate number and model are extracted from the image picture (not shown) of the captured vehicle by utilizing software package 604, essentially as described before.
  • the vehicle's model is extracted, or determined, by correlating key features (not shown), which are extracted from the corresponding silhouette and contours (see Fig. 4), and a corresponding mathematical model, that are (i.e., key features and mathematical model) obtained from the processed and analyzed image picture of the vehicle, to corresponding key features and mathematical models that are stored in CMDB 611 and characterize essentially every existing vehicle model.
  • WMDB 605 contains vehicle profiles of interest (e.g., of stolen vehicles), such as vehicle profile 609.
• the vehicle profiles of interest may be updated, for example, whenever new relevant information should be added (e.g., indicating that a wanted vehicle was last seen with two people in it). Alternatively, or additionally, old vehicle profiles may be discarded from, and new vehicle profiles added to, WMDB 605.
  • Transceiver 606 allows computer unit 603 to communicate the extracted details 607 to a remote server, via a communication network 623.
  • Reference numeral 600a denotes a central control station that includes at least a server 621 and transceiver 606a.
• Server 621 includes at least GMDB 620 and a software package (not shown), the task of which is at least to control data flow via transceiver 606a and to compare incoming data (e.g., data forwarded from computer unit 603 residing in capturing site 600) to data stored in GMDB 620.
• GMDB 620 contains a plurality of vehicle profiles, such as exemplary vehicle profile 609, which are related to essentially every registered vehicle, some of which may be forwarded to WMDB 605. Alternatively, the vehicle profiles contained in GMDB 620 may be associated only with a selected group of registered vehicles.
  • GMDB 620 may contain also key features and mathematical models that represent essentially every existing design model (i.e., of vehicles), some of which may be forwarded, whenever required, to CMDB 611.
  • the key features and mathematical models required for operating capturing site 600 may be uploaded locally, at capturing site 600.
  • the content of the corresponding vehicle profile must include at least the corresponding design model, color and license plate number of the specific registered vehicle.
  • vehicle profile 609 includes an exemplary "license plate number: AA-BBB-CC", "(Design) Model: Mazda 323 1.6L”, “Color: red”.
• vehicle profile 609 may include other identifying details, such as the vehicle's owner, the validity of its license, its accidents history, whether or not it is stolen or used by criminals/terrorists, etc., and is sufficient to completely identify any registered vehicle.
• Computer unit 603 communicates with remote server 621 by utilizing transceivers 606 and 606a, which may communicate with each other via communication network 623, which may be, e.g., a cellular network, the Internet, a Local Area Network (LAN) or a Wide Area Network (WAN).
• Several capturing stations such as capturing station 600 (e.g., capturing sites 600/1, 600/2 and 600/3) may be deployed to cover a large area of interest, according to a wanted strategy, cooperating with central control station 600a while operating independently of each other.
• the system shown in Fig. 6 may operate in any one of at least three modes.
  • the first mode involves unconditionally forwarding the extracted details of every vehicle, which appears in every FOV of every video camera, to remote server 621, which may, then, decide upon actions to be carried out.
  • the extracted details are compared, in server 621 of central control station 600a, to vehicle profiles that are stored in GMDB 620.
  • Each vehicle profile has essentially the structure of exemplary vehicle profile 609.
• Server 621 may commence a series of actions according to the comparison result(s).
• Assume, for example, that a perfect match is found between the (license plate number, color and model of the) captured vehicle 610 and a corresponding vehicle profile in GMDB 620 (i.e., vehicle profile 609), and that vehicle 610 is registered as a vehicle that has permission to enter a restricted area.
• In such a case, capturing site 600 may be utilized (i.e., by central control station 600a) to relay a command from server 621 to open automatic gate 610a, by forwarding a corresponding transmission from transceiver 606a to transceiver 606 via communication network 623, and from transceiver 606 to gate 610a, via wireless communication channel 610b.
• If, on the other hand, the captured vehicle is found to be of interest (e.g., stolen), server 621 may forward that fact to the corresponding capturing site, in order to allow the latter site to act accordingly (e.g., stop the vehicle for interrogation).
• a system operating under the first mode may be incorporated into a 'toll road' system, in order to allow calculating the travel fee of vehicles, by continuously forwarding extracted details (e.g., extracted details 607) of vehicles whenever vehicles appear in the FOV of each one of the corresponding video cameras (e.g., camera 601).
  • the system may be incorporated into a parking system, for granting automatic entrance to pre-paid members, or, alternatively, for calculating parking fee for non-members.
  • the system may be utilized for evaluating average speed of a vehicle, by measuring the travel time of the vehicle between two selected capturing sites, the relative location of which is known.
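For example, the average-speed evaluation reduces to distance over the elapsed time between the two sites' identifications of the same vehicle; the timestamps and distance below are invented for the example:

```python
from datetime import datetime

def average_speed_kmh(t_site_a, t_site_b, distance_km):
    """Average speed between two capturing sites whose separation is known;
    the timestamps come from the sites' identifications of the same vehicle
    (matched by its license plate number, color and model)."""
    hours = (t_site_b - t_site_a).total_seconds() / 3600.0
    return distance_km / hours

t1 = datetime(2003, 11, 2, 10, 0, 0)   # vehicle identified at site 600/1
t2 = datetime(2003, 11, 2, 10, 6, 0)   # same vehicle at site 600/2, 9 km away
print(f"{average_speed_kmh(t1, t2, 9.0):.0f} km/h")   # 90 km/h
```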
  • the second mode involves forwarding from server 621 to corresponding capturing sites a data that relates to specific vehicles of interest.
• In the example shown in Fig. 6, vehicle profile 609 was forwarded from central control station 600a to WMDB 605 because the specific vehicle (i.e., vehicle 610) is wanted for being stolen, as can be seen in the (magnified) vehicle profile 609 (i.e., "Reason: stolen").
• the image of exemplary vehicle 610 (having a plate number: 'AA-BBB-CC', color: 'red' and model: 'Mazda 323 1.6L') is captured by video camera 601, forwarded to, and processed and analyzed by, software package 604, in order to obtain the extracted details 607 of vehicle 610.
• Software package 604 is also responsible for comparing the extracted details 607 to corresponding vehicle profiles 608 that are stored in WMDB 605, in order to find a matching vehicle profile; i.e., extracted details 607 match vehicle profile 609 in WMDB 605, since the combination of the license plate number (i.e., 'AA-BBB-CC'), vehicle's color (i.e., 'red') and model (i.e., 'Mazda 323 1.6L') is identical to the combination of the license plate number, color and model that are stored in vehicle profile 609.
• Upon identification of the wanted vehicle (i.e., vehicle 610) by capturing site 600 (i.e., by finding matching details in record 609 in WMDB 605), the fact that the latter vehicle has been captured is forwarded from computer unit 603 to central control station 600a, via communication network 623, in order for control station 600a to decide upon the actions to be carried out.
• Alternatively, computer unit 603 itself decides upon the actions to be carried out.
• Another example may involve two persons being witnesses to a 'hit and run' accident, and a policeman holding them for questioning.
• One person may remember only the color of the hitting vehicle (e.g., "RED"), and another person may identify only the 'general' model of the vehicle (e.g., FORD).
• the policeman may, then, communicate the color (Red) and model (Ford) of the hitting vehicle to central control station 600a, which may forward to selected capturing sites a message such as "Stop every vehicle whose color and model are Red and FORD, respectively, for being involved in a 'hit and run' scenario".
• In such a case, server 621 communicates a vehicle profile, such as vehicle profile 609, which contains essentially only five items; i.e., "Color: Red", "Model: Ford", "Wanted: yes", "Reason: involved in a 'hit and run' accident", and the latter message.
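A sketch of matching against such a partial profile, where only the identification fields the profile actually specifies must agree; all names and values are illustrative:

```python
def matches_partial_profile(extracted, wanted_profile):
    """A wanted-vehicle profile may specify only some identification fields
    (here only color and model, as in the hit-and-run example); a captured
    vehicle matches if it agrees on every field the profile specifies."""
    return all(extracted.get(field) == value
               for field, value in wanted_profile.items()
               if field in ("plate", "color", "model"))

wanted = {"color": "red", "model": "Ford", "reason": "hit and run"}
captured = {"plate": "DD-123-EE", "color": "red", "model": "Ford"}
print(matches_partial_profile(captured, wanted))   # True: stop for inspection
```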
• According to the third mode, each capturing site is a 'stand-alone' facility; i.e., a capturing site, such as capturing site 600, is operable regardless of the existence of a central control station, such as control station 600a.
• a system operating according to the third mode may be utilized, for example, by residents of a private apartment house, in which case the residents own the system, operate it and are responsible for the content of WMDB 605, which may include information about the vehicles of the residents and, optionally, of the residents' friends or relatives.
  • the disclosed system has several advantages, one of which is that only one GMDB 620 is utilized, whose data resources are shared by all of the capturing stations, and each capturing station (e.g., capturing station 600) contains a local database (i.e., WMDB 605) having significantly smaller content.
• A second advantage is that the communication requirements between, e.g., capturing station 600 and remote server 621 are minimal; communication is required only for updating WMDB 605, i.e., whenever a vehicle profile of a new vehicle is to be added to WMDB 605.
  • Another advantage is time efficiency; i.e., there is no need to search a huge database for every vehicle that is analyzed, but rather, only limited numbers of vehicle profiles are utilized.
  • Capturing site 600 may be stationary with respect to the monitored area, or be mobile, for example if installed in a police car, and it is capable of capturing stationary vehicles and moving vehicles.
• In the first case (i.e., stationary vehicles), video camera 601 generates a still image picture of the corresponding vehicle.
• In the second case (i.e., moving vehicles), video camera 601 generates a sequence of corresponding images, and a motion-detection algorithm is employed thereon, for allowing extraction of an image picture of the captured vehicle.
• Although the method disclosed in the present invention is utilized for identifying models of vehicles, the method may be utilized for different purposes.
• For example, the method could be adapted to identify persons, by capturing the image of the face, and/or other parts, of a person; extracting the person's corresponding silhouette and contours; and utilizing identified key features in the silhouette and contours that may assist in discriminating between different individuals/persons.
  • key features may be, for example, the distance between the ears of the person, the shape of the person's nose, eyebrows and mouth, etc.
• Such a person-oriented identification system would be useful for, e.g., automatically opening a door of a secured room only whenever an authorized person wishes to enter the secured room.
• According to another example, the photos of known, and/or potential, criminals are in the possession of, e.g., a bank, and whenever such a person enters the bank, the person-oriented identification system sends a corresponding alarm, which may include the photo of the suspected person, to the security personnel of the bank and/or to the nearest police station.

Abstract

A method is provided, according to which a vehicle capturing site is capable of identifying the design model, color and license plate number of vehicles, by capturing corresponding images of the vehicles, by aiming a video acquisition device towards a preferred place where vehicles are expected to pass, in order to generate video data, in the form of still pictures or a continuous video stream, which represents essentially all of the incidents occurring in the Field Of View (FOV) of the video acquisition device. After acquiring the video data, it is digitized, processed and analyzed by a computer unit, for obtaining the design model, color and license plate number of the captured vehicles. The processing and analysis include, among other things, use of edge-detection algorithms, which are utilized for extracting from the video data silhouette and contour lines that characterize the captured vehicles. Further, key features, which are extracted from the obtained silhouette and contour lines, are compared to corresponding data stored in a database, and the design model, color and license plate number of the vehicles are determined according to the results of the comparison.

Description

AUTOMATIC, REAL TIME AND COMPLETE IDENTIFICATION OF VEHICLES
Field of the Invention
The present invention relates to identification of vehicles. More particularly, the invention relates to a method for real-time and automatic absolute identification of a vehicle's design model, vehicle color and the vehicle license plate number. The present invention also relates to a system using the identified design model, license number and color of the vehicle to check the status of the identified vehicle (stolen, faked etc.), and to commence a series of predetermined actions depending on the resulting status of the vehicle.
Background of the Invention
The constantly-increasing number of vehicles traveling on the roads, as well as the increasing complexity of the tasks required from the traffic and security authorities, have made it very difficult for them to enforce traffic laws and pin down suspected vehicles. Serious cases, such as catching terrorists or dangerous criminals on the run, consume many personnel and labor resources, while for preventing the most common violations, such as vehicle theft, or driving without lights or a driving license, there is often very little enforcement or none at all. Even when the police set up roadblocks, factors such as the density of traffic and the policemen's tiredness make it impossible to make a thorough inspection of every vehicle passing, as such an inspection requires considerable time, often requiring the policeman to make a lengthy communication session with a base station or a database to verify various details about the vehicle.
In general, there are two major sectors which would benefit from utilizing an improved real-time system for identifying vehicles; i.e., the civilian sector and the security sector (the latter could be sub-divided into military and police sectors). Regarding the civilian sector, it would also be beneficial to provide a system which would allow identifying vehicles for granting permission to enter, e.g., toll roads, parking facilities, public institutions, etc. Regarding the security sector, such an identification system would allow automatic control of cars entering border access points and assist in identifying cars that are 'booby-trapped'.
Currently, there are several technologies being utilized for identifying vehicles. One major trend of technologies focuses on identifying the license plate number of vehicles. One such technology is, for example, the known License Plate Recognition (LPR) technology. According to this technology, a picture of the vehicle is taken, from which its license plate number is extracted using known algorithm(s) that analyze characters in the picture. LPR technology is largely implemented, for example, in toll roads, parking facilities, and by the police and military. A major drawback of this technology is that it is incapable of discerning situations where a vehicle's license plate is swapped. Thus, offenders can replace the sought-after license plate of their suspected vehicle with a license plate of another, non-suspected vehicle, and not get caught.
Another trend of technologies focuses on identifying the silhouette, or shape/profile, of vehicles. Currently, there are technologies for obtaining data related to vehicles' design models. In general, these techniques utilize a three-dimensional sensing technique, or a silhouette description of the vehicle. The first technique is intended for counting occupied parking spaces in a parking area, and the second for measuring traffic congestion. A notion, regarding obtaining the silhouette of a captured vehicle, is shown in Figs. 1a to 1c (prior art). Fig. 1a shows one original image of a vehicle that was 'caught' in the Field of View (FOV) of the corresponding camera (not shown). Figs. 1b and 1c show the immediate silhouettes that correspond to the original image of the captured vehicle. However, the latter technologies provide only a general and inaccurate shape of vehicles, and, therefore, the exact design model of the vehicle cannot be obtained using these technologies.
A popular solution for coping with vehicle theft involves installation of a transmitter inside the vehicle, whose radiation is constantly monitored by a monitoring station. This way, the monitoring station is supposed to always have the exact, immediate location of the vehicle, and when it is stolen, the vehicle can be easily tracked down. This approach suffers from significant drawbacks, which are associated with handling communication systems. For example, the signal may be intentionally, or unintentionally, blocked, and the transmitter, which is installed in a vehicle, may be easily removed, or neutralized, by thieves.
Other methods utilize sophisticated algorithms to analyze photos of vehicles for other visual characteristics as well. US 5,809,161 discloses an object-monitoring system, capable of discriminating predetermined moving objects from other moving or static objects. Image data, acquired by a plurality of cameras, can be sent over a digital communication network to a central image processing system. However, only the license plate number, and only of those vehicles which are relatively large, can be automatically extracted (i.e., from the acquired image). The system disclosed in US 5,809,161 is particularly suited to monitoring and discriminating large vehicles in a multi-lane roadway from other, smaller vehicles, which are ignored. However, the recognition analysis of this system lacks important aspects, such as the vehicle's design model, its color, or a 3-D analysis of its shape, and consequently, oftentimes cannot spot vehicles that have undergone changes. Another drawback of this system is that it is stationary, and not designed to be attached to moving objects, such as police vehicles.
All of the above-mentioned solutions have not provided a satisfactory solution to the problem of providing an effective, automatic, real-time and complete identification of vehicles, and, in particular, none of the prior art technologies offers a solution for providing the exact design model of a wanted vehicle.
It is therefore an object of the present invention, to provide a system and method for accurately identifying the design model of inspected/wanted vehicles.
It is another object of the present invention to provide an automatic system for identifying vehicles, without installing any surveillance means in the vehicles.
It is still another object of the present invention to provide a system and method for identifying vehicles, which can notify in real time the authorities when a vehicle of interest is identified or detected.
It is still a further object of the present invention, to provide a system for identifying vehicles, which is autonomous and independent of the vehicles that are inspected.
It is still a further object of the present invention, to provide a system for identifying vehicles, which could be stationary, or non-stationary, while inspecting passing-by vehicles. Other objects and advantages of the invention will become apparent as the description proceeds.
Summary of the invention
The present invention relates to a method and system for allowing an automatic and real-time identification of a vehicle, by identifying the design model, color and license plate number of the vehicle. The present invention also relates to a system using the identification results for checking the status of the vehicle.
The present invention is directed to a method, according to which a vehicle capturing site is capable of identifying the design model, color and license plate number of vehicles, by capturing corresponding images of the vehicles, by aiming a video acquisition device towards a preferred (monitored/covered) area (e.g., a road, a gate of a parking facility), where vehicles are expected to pass, in order to generate video data, in the form of still pictures or a continuous video stream, which represents essentially all of the incidents occurring in the Field Of View (FOV) of the video acquisition device. The video data is acquired, digitized, processed and analyzed by a computer unit, thereby obtaining the design model, color and license plate number of captured vehicles. By "capturing" a vehicle is meant herein obtaining, or being able to obtain, an image of the vehicle in a video acquisition device, such as a camera, which occurs when the video acquisition device is active and the vehicle comes within the field of view of the device.
Preferably, the method for automatic and real-time identification of a vehicle comprises the steps: a) Providing a video acquisition device that is located in known orientation, height and distance, with respect to a space within which vehicles are intended to be captured, and is capable of generating video data, in the form of still images or a stream of consecutive video frames, that corresponds to object(s) passing in front of the Field Of View (FOV) of the video acquisition device. The video acquisition device may be a digital or analog video camera; b) Providing a database (i.e., Car Model Database, CMDB), which contains data, and/or features, and/or images, and/or structures, and/or descriptions, that correspond essentially to every existing vehicle's design model; and c) Providing a computer unit and corresponding processing software, for: c.1) allowing indicating to the computer unit that an object is in the FOV, for selecting, from the stream of consecutive video frames, an image picture that includes the image of the object; c.2) extracting an Area Of Interest (AOI) from the image picture, which includes the image of the object; c.3) extracting the image of the object from the AOI, by employing an edge-detection algorithm on the pixels of the AOI; c.4) determining whether the image of the object is an image of a vehicle; and c.5) whenever the image of the object is determined to be an image of a vehicle, extracting, from the image of the vehicle, the design model, color and license plate number of the related vehicle.
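As an illustration of how steps c.1) to c.5) chain together, the following sketch uses trivial stand-ins for each stage; none of the helper logic is taken from the patent, and the model/color/plate extractors are left as placeholders:

```python
import numpy as np

def select_frame(frames):                 # c.1: pick the frame with the object
    return max(frames, key=lambda f: f.sum())

def extract_aoi(frame):                   # c.2: crop around the active region
    ys, xs = np.nonzero(frame)
    return frame[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def passes_size_threshold(aoi, fov_area): # c.4: is the object vehicle-sized?
    return aoi.size / fov_area > 0.05

def identify(frames):
    frame = select_frame(frames)
    aoi = extract_aoi(frame)              # c.3 (edge detection) omitted here
    if not passes_size_threshold(aoi, frame.size):
        return None
    return {"model": "<via CMDB correlation>",     # c.5: the three details
            "color": "<via reference-plate correction>",
            "plate": "<via LPR/OCR>"}

empty = np.zeros((100, 100))
with_car = empty.copy()
with_car[30:70, 20:80] = 1.0
print(identify([empty, with_car]))
```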
Optionally, a television monitor may be connected to the video acquisition device, for allowing locally, or remotely, monitoring the preferred covered area, and, thereby, allowing indicating to the computer unit the presence of a vehicle in the FOV. The television monitor may also be utilized for carrying out calibration and maintenance procedures. Preferably, the computer unit may be initialized with parameters for optimizing the analysis of the continuous consecutive video frames, selection of the image picture, extraction of the object from the image picture, determination whether the object is a vehicle, and extraction of the design model, color and license plate number of the vehicle. The parameters may be related to at least the relative height, tilt angle and distance of the video acquisition device, with respect to the preferred area, for normalizing the captured vehicle, for allowing comparing between key features that are part of the normalized image of the captured vehicle and key features that are related to existing vehicles' design models and are contained within the CMDB, thereby allowing obtaining the captured vehicle's design model. The computer unit may be initialized with additional parameters, such as parameters related to a calibration color plate, for optimizing the color determination process, and the direction of moving vehicles, for optimizing the vehicle's image extraction process.
Preferably, the indication to the computer unit, regarding the presence of a vehicle, or another object, in the FOV, and the selection of the image picture to process, is carried out automatically, by: Analyzing, automatically and in real-time, a sequence of corresponding successive video frames, contained in the stream of video frames, that correspond to an object being captured by the video acquisition device. The analysis includes utilization of the generated stream of video frames, for detection of 'motion center' of corresponding object(s), with respect to the FOV of the video acquisition device. The motion center is detected by employing motion- detection technique on the successive video frames. The motion-detection technique utilizes a differentiation algorithm, according to which the characteristics of corresponding pixels of successive video frames are compared, and corresponding differences of characteristics are calculated, thereby detecting a moving object; and selecting, from the successive video frames, an image to be processed, the image is selected according to the image's occurrence instant, which is synchronized with the instant of detected motion of the vehicle, so as to ensure that the content of the (selected) image includes all, or at least most, of the object(s) that was captured by the video acquisition device.
Alternatively, the indication of a vehicle being in the FOV is 'manual', i.e., the operator of the video acquisition device indicates to the computer unit that a vehicle is in the FOV, in order to obtain the image, which includes a still-picture of the vehicle.
According to one aspect, selecting the image picture (i.e., that includes the object) is carried out by employing motion- detection algorithm that includes utilization of the identified instant at which the vehicle entered the FOV, expected vehicle's average speed and number of video frames per second. According to this aspect, a middle frame is chosen from selected consecutive frames, which middle frame is most likely to contain the whole image of the object, said middle frame being said selected image picture.
According to another aspect, selecting the image is carried out by employing motion- detection algorithm that includes identification of the instants at which the vehicle enters and leaves the FOV, after which the image is selected from a frame contained between the corresponding (first and last) frames.
According to another aspect, the motion-detection algorithm includes identifying a pixel, whose characteristics have been changed (i.e., thereby indicating movement), and tracking changes along pixels that form a trajectory that originates from this pixel.
Preferably, extraction of the Area Of Interest (AOI) from the image is carried out by identifying the corresponding 'motion-center' of the image, by evaluating pixels whose characteristics are varied due to motion.
The motion-detection stage may be utilized for obtaining the speed of the captured vehicle, by utilizing the apriori knowledge related to the relative location of the video acquisition device, with respect to the captured vehicle, and the number of video frames per second.
More than one object may be captured by the same capturing device, essentially at the same time, and each one of the objects may be handled essentially in the same way as described above, i.e., by identifying a corresponding AOI for each captured vehicle, in the same, or predecessor, or successor, individual image, and by analyzing separately each corresponding individual AOI.
Preferably, extraction of the image of the object from the AOI is carried out by employing segmentation process on the AOI. The segmentation process includes: a) Filtering the image, so as to discard effects of ambient light conditions, dust, shadow, motion blur (i.e., caused by the movement of the captured vehicle), clouds etc., in order to obtain an essentially noiseless picture of the captured object; b) Obtaining the silhouette of the object and contours contained therein, by employing edge-detection algorithm; c) Whenever and wherein required, completing the silhouette and contours by adding missing sections, or 'dots', by employing corresponding ('Line tracking') algorithms; d) Obtaining a binary image of the silhouette and contours, by converting the reconstructed silhouette and contours to white lines on black background; and e) Extracting the area that is confined within the closed silhouette. The latter extracted area is the image of the object.
Preferably, completing the silhouette and contours is performed by employing ("Line Tracking") interpolation algorithm(s).
Preferably, the determination, whether an object is a vehicle, is carried out by utilization of a 'size-threshold' that depends on the relative location of the video acquisition device, with respect to the preferred monitored area, and the relative area of the image of the object, with respect to the area of the corresponding FOV.
Preferably, the order of the extraction of the vehicle's parameters is as mentioned above (i.e., extracting first the vehicle's design model, then the color and license plate number). However, the indicated order is only an option. For example, the order may be reversed, or extraction of the parameters may be carried out in parallel, rather than in series. The order of the extraction of the vehicle's parameters is affected by computation resources and speed, and by hardware considerations.
Preferably, extracting the vehicle design model of the (related) vehicle is carried out by: a) identifying, in the image of the object, the silhouette and contours of the vehicle; b) mathematically characterizing preferred key features that are part of, and/or resulting from, the silhouette and contours; c) obtaining a mathematical model of the related vehicle, by mathematically characterizing the overall surface area of the related vehicle, by utilizing grouped characterized key features that allow calculation of the overall surface area. The grouped key features are selected from the preferred key features; d) correlating the obtained mathematical model with data that is contained within the CMDB and corresponds to existing vehicle design models, for assigning a primary probability value to each one of the existing vehicle design models; and e) choosing the vehicle design model that is assigned the highest primary probability value as the preferred vehicle design model of the related (i.e., captured) vehicle.
Preferably, the mathematical model is a 3-dimensional (3D) model, and is obtained by applying trigonometric calculations on a corresponding 2-dimensional model. Optionally, the mathematical model is a 2-dimensional (2D) model.
Preferably, obtaining the mathematical model further includes utilization of known "shape from X" techniques, wherein "X" denotes, e.g., "shading", "motion", "contour", "texture", "silhouette", "sequence of pictures", "gradient". One exemplary reference, regarding these techniques, is "Statistical methods in image processing and computer vision" (Prof. Dr.-Ing. Rudolf Mester, Institute for Applied Physics, March 8, 2000, Ollendorf Symposium, Technion, Haifa). The latter known techniques are used in computerized vision systems, for obtaining fast and accurate reconstruction of the three-dimensional (3D) description, or representation, of an object that is represented by a two-dimensional (2D) image that was produced by, e.g., a video camera. These techniques allow computing the relative depth of points in the 2D image of the object, and 'building' a surface representation of the acquired, or captured, object.
A probability threshold value and a probability margin may be utilized, for enhancing the vehicle design model determination process. Accordingly, the process for extraction of the design model of the vehicle may further comprise: a) determining a probability threshold value (hereinafter the "threshold value"), and a probability margin; b) if every primary probability value is smaller than the threshold value, performing, per vehicle design model: b.1) choosing a key feature (hereinafter the "additional key feature") from the preferred key features. The additional key feature is preferably a key feature that is not of the group (i.e., of selected key features); b.2) correlating the additional key feature with data that is related to the equivalent key feature of the vehicle design model, and is contained within the CMDB, thereby assigning an additional probability value to the vehicle design model; b.3) assigning a probability weight to the vehicle design model. The probability weight calculation is based on the primary probability value and on the additional probability value assigned to the corresponding vehicle design model; b.4) repeating steps b.1) to b.3) until at least one probability weight is obtained, which has a value higher than the threshold value; b.5) if the probability weight, which has the highest value, has a margin, with respect to the next highest probability weight, that is smaller than the predefined probability margin, performing another iteration by repeating steps b.1) to b.3), until a probability weight is obtained (hereinafter the "preferred probability weight"), which has the highest value, that is higher than the threshold value, and whose margin, with respect to any one of the other probability weights, is larger than the predetermined margin. The vehicle design model that is related to the preferred probability weight is determined as the design model of the captured vehicle; otherwise b.6) determining the design model related to the highest probability weight as the design model of the captured vehicle; otherwise, if the margin, between the highest primary probability value and the next highest primary probability value, is smaller than the predefined margin value, performing step b.5).
Preferably, the key features, being characteristic geometric features, are selected from, or related to, at least lengths, surfaces, curves, angles between preferred lines, line crossings and distances between prominent elements of the vehicle (e.g., the distance between the vehicle's wheels, or between head lights, or external mirrors), which are selected from the silhouette of the vehicle and from the contours of prominent areas of the vehicle, for example, the roof, and/or engine cover and/or doors and/or baggage cover and/or lights and/or mirrors of the vehicle. The selected key features are mathematically characterized (e.g., in the form of mathematical equations, the coefficients of which are correlated with corresponding coefficients that are stored in the CMDB, in order to determine the vehicle's design model), in order to avoid performing a 'pixel-to-pixel' based comparison process, which would require much more computational power and would take a lot of time to accomplish.
Preferably, the identification of the silhouette of the vehicle's image and the contours of the vehicle's prominent elements is implemented by employing a segmentation process. The segmentation process includes employing a 'differentiation operator' on the pixels in the picture contained within the AOI, after which pixels having a relatively high value are registered. The latter pixels represent the border points between adjacent distinguishable areas, and a collection of corresponding pixels forms corresponding border lines, which represent the required silhouette and contour lines.
Preferably, after being identified, the silhouette and contours are normalized (i.e., scaled and, whenever required, rotated), in order to allow correlation between (the normalized) key features of the captured vehicle and 'full/real sized' key features belonging to actual vehicle's models. The normalization includes scaling and, whenever required, rotation of the picture in the AOI. The scaling factor is determined according to the known distance of the video acquisition device from captured vehicles, and the rotation angle is determined according to the known relative height and angle (i.e., orientation) of the video acquisition device with respect to the captured vehicles.
According to one aspect of the present invention, the Car Model Database (CMDB) is external to the computer unit, and communicates with the computer unit by utilizing bidirectional communication channel. According to another aspect, the CMDB resides within the computer unit.
Preferably, the color of the vehicle is extracted by: a) Analyzing the characteristics of the pixels of the AOI, for allowing calculating averaged color values. Each one of the averaged color values corresponds to a different area of the vehicle (e.g., doors); b) Choosing the averaged color having the maximal value as the representative color of the vehicle; c) Utilizing known color reference, for measuring the effect of ambient factors on the known color reference; d) Modifying the representative color of the vehicle according to the measured effect of the ambient factors. The modified representative color is the true color of the vehicle.
Preferably, modifying the representative color is implemented by utilizing color control techniques, such as AGC (Automatic Gain Control) technique or white balance.
Knowing the location (i.e., orientation, height and distance) of the video acquisition device, with respect to place where vehicles are intended to be captured, and expecting the captured objects to be vehicles, including knowing the general shape of vehicles, provide a 'prior knowledge' as to the relative location of the larger areas where to search for the dominant color (i.e., vehicle's color) of the captured vehicle (e.g., the area of the lower part of the doors of the vehicle, if the video acquisition device is located to capture the left, or right-hand side of the vehicle).
Accordingly, in order to obtain fast responding system (i.e., regarding determination of the vehicle's color), analyzing the pixels and calculating averaged color values are limited to selected pre-defined areas, where the probability to detect painted parts of the vehicle is relatively high, for example, the lower part of the doors, or the engine cover.
Preferably, the License Plate Number of the vehicle is obtained by employing the known License Plate Recognition (LPR) technique, which utilizes the Optical Character Recognition (OCR) algorithm. According to the preferred embodiment of the present invention, the video acquisition device is stationary, with respect to the preferred covered area, and the color reference is a standard calibration colored plate, which is located in a fixed location in a way that the calibration colored plate continuously appears in a pre-determined size and location in the FOV of the video acquisition device. By knowing the true color of the standard calibration colored plate (i.e., which is fed to the computer unit), the effect of the ambient factors (e.g., light, shadows, dust) thereon is continuously measured, and the representative color of the vehicle is modified according to the measurement results (i.e., relating to the color reference) at the time of the actual capturing of the vehicle's image. The modified representative color of the vehicle is the true color of the vehicle.
According to another aspect of the present invention, the video acquisition device is non-stationary, i.e., it is attached to, e.g., a police car that travels along a road, seeking vehicles. According to this aspect, the image of the captured vehicle is a still picture, and the color reference is obtained by periodically placing a standard calibration colored plate in front of the video acquisition device, and measuring the effect of the ambient factors thereon. The more frequently the effect of the ambient factors (i.e., on the standard calibration colored plate) is measured, the more accurate the modification of the representative color of the vehicle is, and, consequently, the more accurate is the vehicle's color determination process. Alternatively, a color reference, in the form of a relatively small standard calibration colored plate, may be continuously positioned in a fixed location in front of the video acquisition device, in order to allow continuous measurement of the effect of ambient factors, while allowing the video acquisition device to capture images of vehicles. According to the latter aspect, a distance measuring sub-system is provided, the task of which is to continuously measure the distance between the (moving) video acquisition device and the (moving) vehicle whose image is to be captured. The measured distance and the (known) height of the video acquisition device (i.e., being installed on, e.g., a police car) allow the computer unit to optimize the normalization of the image captured by the video acquisition device.
The method further comprises providing a Work Model Database (WMDB), which includes data (hereinafter referred to as the "vehicles profiles") related to vehicles of interest (e.g., vehicles registered as stolen, a vehicle that is allowed to enter a restricted area). Each one of the vehicles profiles includes a unique combination of license plate number, color and model related to a specific vehicle of interest, and, optionally, additional data, such as the vehicle's owner, the owner's details (i.e., residence, occupation, driving license, accidents the owner was involved in, etc.). The WMDB may reside within the computer unit, or be external to the computer unit. Additionally, the content of the WMDB may be temporary, i.e., data, which is related to a specific vehicle of interest, may be deleted or updated after the latter vehicle is identified and a corresponding response is commenced.
Preferably, the method further comprises providing a transceiver unit. The transceiver unit may be located at the capturing site, for allowing exchanging data between the computer unit residing within the capturing site, and a central control station.
The method may further comprise utilization of a plurality of capturing sites, in the same manner as described hereinabove, which are deployed in predetermined locations, and communication between the plurality of capturing sites and a central control station.
Accordingly, the method further comprises providing a central control station, for communicating with the plurality of capturing sites. The central control station includes a Global Model Database (GMDB), which includes data that is related to essentially every registered vehicle and existing vehicle design model. The data related to design model of vehicles is stored in the GMDB in the form of corresponding vector characteristics, which represent key features and mathematical models that characterize existing vehicle design models. The latter key features and mathematical models may be forwarded to chosen CMDBs (i.e., in the respective capturing site), for allowing extraction of design models of captured vehicles. The central control station may further include a central computer and a transceiver, for allowing the central computer to exchange messages with each one of the computer units residing in the respective capturing sites.
According to one aspect of the present invention, the method further comprises the steps: a) Continuously operating the video acquisition device in each one of the vehicle capturing sites; b) Capturing every image of every vehicle that enters each one of the FOVs of a respective video acquisition device, and performing the steps: b.1) extracting the details (i.e., license plate number, color and model) from the respective captured image(s); b.2) identifying and unconditionally transmitting, essentially in real-time, the extracted details of each vehicle to the central control station (e.g., police station); b.3) comparing, in the central control station, the vehicle's extracted details to "vehicles profiles" that are stored in the WMDB; and b.4) responding according to the result of the comparison.
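The steps a) to b.4) can be read as a simple per-frame loop; the sketch below is schematic only, with the four callables standing in for the extraction, transmission, comparison and response machinery described elsewhere in the text.

```python
# Schematic rendering of steps a) to b.4); the four callables are
# hypothetical placeholders, not names taken from the embodiment.
def run_capture_site(frames, extract_details, send_to_station,
                     compare_profiles, respond):
    for frame in frames:                      # a) continuous operation
        details = extract_details(frame)      # b.1) license plate, color, model
        if details is None:
            continue                          # no vehicle captured in this frame
        send_to_station(details)              # b.2) unconditional real-time transmission
        result = compare_profiles(details)    # b.3) match against stored "vehicles profiles"
        respond(result)                       # b.4) act on the comparison result
```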
According to another aspect of the present invention, the method comprises the steps: a) Transmitting a request, from the central control station to the plurality of vehicle capturing sites, to commence a response only upon identification of specific vehicles of interest; and b) Whenever a specific vehicle of interest is identified, by at least one of the capturing sites, commencing a corresponding response by the respective capturing site.
Whenever there is a match, partial match, or lack of match, between the vehicle's extracted details (i.e., model, license No. and color), and the pre-stored vehicles profiles, which are stored in the WMDB, the computer unit may carry out, or commence, one, or more, corresponding actions, depending on the type of match, selected from the group of:
a. alerting an operator on the scene/capturing site; b. alerting a remote, central surveillance station; c. displaying on a local, and/or remote, screen every available detail related to the vehicle; d. issuing a printout of the vehicle's complete details; e. allowing the vehicle to enter a restricted area by opening a gate/barrier; f. ignoring the vehicle (when the vehicle is not classified as wanted); g. automatically closing, or opening, a gate; h. activating a siren; or i. dialing a predetermined telephone No. Of course, the commenced actions may be other than those specified hereinabove (i.e., actions a. to i.). The commenced response may be, e.g., transmitting a corresponding message back to the central control station, and/or opening, or closing, a gate, and/or activating a siren, and/or dialing a predetermined telephone No., etc.
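One possible dispatch of such actions by match type is sketched below; the mapping from match type to actions is illustrative and is not prescribed by the text.

```python
# Illustrative dispatch of the actions listed above by match type; the
# mapping itself is a sketch, not prescribed by the text.
def actions_for(match_type):
    if match_type == "full":
        return ["open_gate"]                  # e.g., action e., admit an authorized vehicle
    if match_type == "partial":
        # e.g., plate matches but model does not: treat as suspected
        return ["alert_operator",             # action a.
                "alert_central_station",      # action b.
                "display_vehicle_details"]    # action c.
    return ["ignore_vehicle"]                 # action f., vehicle not classified as wanted
```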
Optionally, the central control station may transmit a request only to preselected capturing sites (i.e., according to predetermined criteria).
The Car Model Database (CMDB), in each capturing site, may be automatically updated with data that corresponds to vehicles' design models, by communicating with the Global Model Database (GMDB). Likewise, the WMDB, in each capturing site, may be automatically updated with data that is related to the status of vehicles, by communicating with the GMDB.
Preferably, the method may comprise providing a central Graphic User Interface (GUI), for allowing a person to interact with the central computer, and, through the central computer, with the vehicle capturing sites. The interaction may include at least updating the content of the three databases (i.e., GMDB, CMDB and WMDB), exchanging messages between the central computer and the computer units residing in the respective capturing sites, transmitting requests, inquiries and directives from the central computer to the capturing sites, presenting a picture, and related data, of specific wanted vehicles that are identified, etc.
Optionally, each one of the vehicle capturing sites may include an independent GUI, for allowing operation, calibration and maintenance of the respective vehicle capturing site. According to one aspect of the present invention, the capturing site is independent; i.e., the capturing site is a 'stand-alone' facility, operating without communicating with a central control station.
The present invention also provides a method for allowing automatic and real-time identification of a design model of a vehicle, which comprises the steps: a) Defining a number of characteristic geometric features in the appearance of vehicles, the combination, or concurrent presence, of which is specific to given models of vehicles; b) Representing each characteristic geometric feature by a digital word (i.e., identifying word), and memorizing the correspondence between the characteristic geometric features and the corresponding, or related, identifying words. Each one of the digital (identifying) words may be obtained by employing any known digital compression technique on data that represents the corresponding characteristic geometric feature; c) Creating a program for identifying the characteristic geometric features from an image, or images, of vehicles; d) Determining and memorizing the identifying words of all, or a sufficiently large number of, known vehicle models; e) Continuously updating the memories of identifying words; f) Selecting an area for observation; g) Acquiring, by any kind of image generating apparatus, an image of the selected area in the absence of traffic, said image being memorized as background; h) Continuously, or at predetermined intervals, procuring an image of the selected area, by the image generating apparatus; i) Comparing each one of the procured images with the background, to extract image portions that are not in the background and are assumed to be of vehicles; j) Selecting from each such portion, by the program, the geometric features of the assumed vehicle that are among the characteristic geometric features defined in step a); k) Representing the features by corresponding digital words;
l) Comparing the resulting digital words with digital words memorized in step d); and m) Determining the result of the comparison among the following: 1) the assumed vehicle has been determined to be in fact, or there is a high probability that it is in fact, a vehicle of a specific model; 2) the assumed vehicle has been determined to be in fact, or there is a high probability that it is in fact, one of the vehicle models of a group thereof; 3) same as 2), but with a given, low probability; 4) the digital words resulting from step l) show that there is a random chance that the assumed vehicle is of a specific model or of a limited group of specific models; 5) the digital words resulting from step l) show that the assumed vehicle is not of a known model, or is not a vehicle at all. The result of the comparison is expressed as the number of identifying words that represent geometric features of the vehicle, that are the same as memorized identifying words, or have a number of digits that are the same as those of memorized identifying words. A corresponding program, or table, may be utilized, for determining the aforesaid probabilities as a function of the aforesaid comparison result.
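A compact sketch of steps l) and m): identifying words are compared digit by digit, match counts are accumulated per memorized model, and the best score is mapped onto the five outcomes. The thresholds are invented for illustration; the text leaves them to a corresponding program or table.

```python
# Sketch of steps l) and m); thresholds are illustrative only.
def matching_digits(word_a, word_b):
    return sum(1 for a, b in zip(word_a, word_b) if a == b)

def classify(vehicle_words, model_library):
    """vehicle_words: identifying words of the assumed vehicle;
    model_library: {model name: list of memorized identifying words}."""
    scores = {
        model: sum(max(matching_digits(w, m) for m in memorized)
                   for w in vehicle_words)
        for model, memorized in model_library.items()
    }
    best_model = max(scores, key=scores.get)
    total_digits = sum(len(w) for w in vehicle_words)
    ratio = scores[best_model] / total_digits if total_digits else 0.0
    if ratio > 0.9:
        return ("specific model, high probability", best_model)  # outcome 1)
    if ratio > 0.7:
        return ("model group, high probability", best_model)     # outcome 2)
    if ratio > 0.5:
        return ("model group, low probability", best_model)      # outcome 3)
    if ratio > 0.3:
        return ("random chance only", best_model)                # outcome 4)
    return ("unknown model or not a vehicle", None)               # outcome 5)
```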
The present invention further provides a system for automatic and real-time identification of a vehicle, comprising a plurality of vehicle capturing sites, for allowing automatic and real-time identification of vehicles by identifying, or extracting, the design model, color and license plate number of the vehicles, and a central control station, capable of communicating with the capturing sites, for allowing checking the status of the vehicle by utilization of its identified, or extracted, model, color and license plate number.
Each one of the vehicle capturing sites includes at least a video acquisition device that is directed towards a preferred (monitored/covered) area (e.g., a road, a gate of a parking facility), where vehicles are expected to pass, in order to generate video data in the form of still pictures, or a continuous video stream, which represents essentially all of the incidents occurring in the Field Of View (FOV) of the video acquisition device. The vehicle capturing site further includes a computer unit and corresponding software, for acquiring, digitizing, processing and analyzing the video data, for obtaining the design model, color and license plate number of the vehicle.
Preferably, the system for automatic and real-time identification of a vehicle comprises a central control station and a plurality of capturing sites, each of which comprising: a) Video acquisition device, which is located in known orientation, height and distance, with respect to captured vehicles, and is capable of generating video data, in the form of still images, or a stream of consecutive video frames, that correspond to objects passing in front of the Field Of View (FOV) of the video acquisition device. The video acquisition device may be a digital or analog video camera; b) Database (i.e., Car Model Database - CMDB) that contains data, and/or features, and/or images, and/or structures, and/or descriptions, that correspond essentially to every existing vehicle's design model; and c) Computer unit and corresponding processing software. The computer unit further comprises means for: c.1) allowing indicating to the computer unit that an object is in the FOV, for selecting, from the stream of consecutive video frames, an image picture that includes the image of the captured object; c.2) extracting an Area Of Interest (AOI), from the image picture, which includes the image of the captured object; c.3) extracting the object from the AOI, by employing an edge-detection algorithm on the pixels of the AOI; c.4) determining whether the image of the object is an image of a vehicle; and, whenever the image of the object is determined to be an image of a vehicle, c.5) extracting the design model, color and license plate number of the related vehicle.
The system may further comprise, for each capturing site, a television monitor, which may be connected to the video acquisition device, for allowing locally, or remotely, monitoring the preferred covered area, and, thereby, allowing indicating to the computer unit the presence of a vehicle in the FOV. The television monitor may also be utilized for carrying out calibration and maintenance procedures.
Preferably, the computer unit, in each capturing site, may be initialized with parameters for optimizing the analysis of the continuous consecutive video frames, selection of the image picture, extraction of the object from the image picture, determination whether the object is a vehicle, and extraction of the design model, color and license plate number of the vehicle. The parameters may be related to at least the relative height, tilt angle and distance of the video acquisition device, with respect to the preferred area, for normalizing the captured vehicle, for allowing comparing key features that are part of the normalized image of the captured vehicle with key features that are related to existing vehicle's design models and are contained within the CMDB, thereby allowing obtaining the captured vehicle's design model. The computer unit may be initialized with additional parameters, such as parameters related to a calibration color plate, for optimizing the color determination process, and the direction of moving vehicles, for optimizing the vehicle's image extraction process.
Preferably, the indication to the computer unit, regarding the presence of a vehicle, or another object, in the FOV, and the selection of the image picture that should be processed, is carried out automatically.
Accordingly, the system may further comprise, for each computer unit: a) means for processing and analyzing automatically and in real-time a sequence of corresponding consecutive video frames contained in the stream of video frames, which correspond to an object being captured by the video acquisition device. The analysis includes utilization of the generated stream of video frames for detection of the 'motion center' of corresponding object(s), with respect to the FOV of the video acquisition device. The motion center is detected by employing a motion-detection technique on the successive video frames. The motion-detection technique utilizes a differentiation algorithm, according to which the characteristics of corresponding pixels of successive video frames are compared, and corresponding differences of characteristics are calculated, thereby detecting a moving object; and b) means for selecting, from the consecutive video frames, an image to be processed, the image being selected according to the image's occurrence instant, which is synchronized with the instant of detected motion of the vehicle, so as to ensure that the content of the (selected) image includes all, or at least most, of the object(s) that was captured by the video acquisition device. According to one aspect, a motion-detection algorithm is provided, for each computer unit, for allowing selecting the image picture (i.e., that includes the object), which utilizes the identified instant at which the vehicle enters into the FOV, the expected vehicle's average speed and the number of video frames per second. According to this aspect, the total number of video frames is calculated, and essentially the middle frame is selected, which is most likely to contain the whole image of the captured vehicle.
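A minimal sketch of the differentiation algorithm and middle-frame selection described above, assuming OpenCV; the motion thresholds are illustrative.

```python
# Frame differencing to detect motion, then middle-frame selection:
# the middle frame of the motion interval is most likely to contain
# the whole vehicle.
import cv2
import numpy as np

def select_image(frames, motion_threshold=25, min_changed_pixels=500):
    moving = []
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for i, frame in enumerate(frames[1:], start=1):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)        # pixel-wise differentiation
        changed = int(np.count_nonzero(diff > motion_threshold))
        if changed > min_changed_pixels:
            moving.append(i)                  # this frame shows motion
        prev = gray
    if not moving:
        return None                           # no moving object detected
    return frames[moving[len(moving) // 2]]   # essentially the middle frame
```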
According to another aspect, a motion-detection algorithm is provided, for each computer unit, for selecting the image picture, which identifies the instants at which the vehicle enters and leaves the FOV, after which the image picture is selected from frames contained between the corresponding (first and last) frames.
According to another aspect, a motion-detection algorithm is provided, for each computer unit, for selecting the image picture, which includes identifying a pixel whose characteristics were changed (i.e., thereby indicating movement), and tracking changes along pixels that form a trajectory that originates from this pixel.
Preferably, the system further comprises, for each computer unit, means for identifying the corresponding 'motion-center' of the image, by evaluating pixels whose characteristics are varied due to motion, for extracting the Area Of Interest (AOI) from the image picture.
The motion-detection stage may be utilized for obtaining the speed of the captured vehicle, by utilizing the a priori knowledge related to the relative location of the video acquisition device, with respect to the captured vehicle, and the number of video frames per second. Alternatively, the indication of a vehicle being in the FOV is 'manual', i.e., the operator of the video acquisition device indicates to the computer unit that a vehicle is in the FOV, in order to obtain the image picture, which includes a still-picture of the vehicle.
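Since the speed estimate follows from the known camera geometry and the frame rate, the arithmetic is straightforward; the sketch below assumes a known width of road covered by the FOV, which is a hypothetical figure.

```python
# Back-of-the-envelope speed estimate: if the FOV spans a known width on
# the road and the vehicle crossed it in N frames, speed follows from
# distance over elapsed time.
def estimate_speed_kmh(fov_width_m, frames_in_fov, fps):
    elapsed_s = frames_in_fov / fps
    return (fov_width_m / elapsed_s) * 3.6   # m/s -> km/h

# Example: a 10 m wide FOV crossed in 18 frames at 25 fps -> 50 km/h.
print(estimate_speed_kmh(10.0, 18, 25))      # 50.0
```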
More than one object may be captured by the same capturing site and essentially at the same time, and each one of the objects may be handled essentially in the same way as described above, i.e., by identifying a corresponding AOI for each captured vehicle, in the same, or a predecessor, or a successor, individual image, and by analyzing separately each corresponding individual AOI.
Preferably, the system comprises, for each computer unit, extraction means, for extracting the (captured) object from the AOI. The extraction means further comprises segmentation means, which includes: a) Means for filtering the image, so as to discard effects of ambient light conditions, dust, shadow, motion blur and clouds, in order to obtain an essentially noiseless picture of the captured object; b) Edge-detection algorithm, for obtaining the silhouette of the object and contours contained therein; c) Means for completing the silhouette and contours by adding missing sections, and/or 'dots', by employing corresponding ('Line tracking') algorithms; d) Means for generating a binary image of the silhouette and contours; and e) Means for extracting the area confined within the closed silhouette. The extracted area is the image of the (captured) object. Preferably, completion of the silhouette and contours is performed, by each computer unit, by employing ('Line Tracking') interpolation algorithm(s).
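A condensed sketch of the segmentation chain a) to e), assuming OpenCV; morphological closing is used here as a crude stand-in for the 'Line tracking' completion step, which the text does not specify in detail.

```python
# Denoise, detect edges, close gaps, binarize, and take the largest
# closed contour as the object silhouette.
import cv2
import numpy as np

def extract_object(aoi):
    gray = cv2.cvtColor(aoi, cv2.COLOR_BGR2GRAY)
    smooth = cv2.GaussianBlur(gray, (5, 5), 0)            # a) noise filtering
    edges = cv2.Canny(smooth, 50, 150)                    # b) edge detection
    kernel = np.ones((5, 5), np.uint8)
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)  # c) gap completion
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)    # d) binary contours
    if not contours:
        return None
    silhouette = max(contours, key=cv2.contourArea)
    mask = np.zeros_like(gray)
    cv2.drawContours(mask, [silhouette], -1, 255, thickness=-1)
    return cv2.bitwise_and(aoi, aoi, mask=mask)           # e) area within silhouette
```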
Preferably, the determination means (i.e., whether the image of the captured object is an image of a captured vehicle) utilizes a 'size-threshold' that depends on the relative location of the video acquisition device, with respect to the preferred monitored area.
Preferably, the means for extracting the vehicle design model further comprises, for each computer unit: a) Means for identifying, in the image of the object, the silhouette and contours of the vehicle; b) Means for mathematically characterizing preferred key features that are part of, and/or resulting from, the silhouette and contours; c) Means for generating a mathematical model of the related vehicle, by mathematically characterizing the overall surface area of the related vehicle, by utilizing a corresponding group of characterized key features. The key features of the group are selected from the preferred key features; d) Means for correlating the generated mathematical model with data that is contained within the CMDB and corresponds to existing vehicle design models, for assigning a primary probability value to each one of the existing vehicle design models; and e) Means for choosing the vehicle's design model that is assigned the highest primary probability value as the preferred vehicle design model of the related (i.e., captured) vehicle.
Preferably, the mathematical model is a 3-dimensional (3D) model, and is obtained from the 2D image of the captured vehicle by employing known techniques, such as the 'structure from X' technique. A probability threshold value and probability margin may be utilized, by each computer unit, for enhancing the vehicle design model determination process. Accordingly, the means for extraction of the design model of the vehicle may further comprise: a) Means for determining a probability threshold value (hereinafter the "threshold value"), and a probability margin; b) Means for allowing selecting individual key features ("additional key features") from the preferred key features. The individual key features are correlated with data that is related to equivalent key features of vehicle design models that are contained within the CMDB. The correlation process results in assigning corresponding additional probability values to the respective vehicle design models. The selected key features are preferably key features that are not in the group (i.e., the group of selected key features); c) Means for assigning a probability weight to each one of the vehicle design models. The probability weight calculation is based on the primary probability value and on the additional probability values assigned to each one of the vehicle design models; and d) Means for assessing the probability weights, for allowing determining the (preferred) vehicle design model as the design model of the captured vehicle.
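A sketch of the threshold/margin refinement just described; the threshold, margin and the shape of the inputs are illustrative assumptions.

```python
# If the best primary probability does not beat the runner-up by the
# margin, fold additional key-feature scores into a weight per model.
def decide_model(primary, extra_feature_scores, threshold=0.8, margin=0.1):
    """primary: {model: primary probability}; extra_feature_scores: list of
    {model: additional probability} dicts, one per not-yet-used key feature."""
    ranked = sorted(primary.items(), key=lambda kv: kv[1], reverse=True)
    if ranked[0][1] >= threshold and ranked[0][1] - ranked[1][1] >= margin:
        return ranked[0][0]            # decisive on the primary correlation alone
    weights = dict(primary)            # probability weights, per model
    for scores in extra_feature_scores:
        for model, p in scores.items():
            weights[model] = weights.get(model, 0.0) + p
        ranked = sorted(weights.items(), key=lambda kv: kv[1], reverse=True)
        if ranked[0][1] - ranked[1][1] >= margin:
            break                      # margin reached; stop adding features
    return ranked[0][0]
```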
Preferably, the key features are mathematically characterized; i.e., the key features are represented by mathematical equations, the coefficients of which are intended to be correlated with corresponding coefficients that are related to equivalent key features of corresponding design models and stored in the CMDB. Preferably, the system comprises, for each computer unit, segmenting means, for allowing identification of the silhouette of the vehicle's image and of the contours of the vehicle's prominent elements. The segmenting means may include a 'differentiation operator', which is employed on the pixels in the picture contained within the AOI, for registering pixels having relatively high value. The latter pixels represent the border points between adjacent distinguishable areas, and a collection of corresponding pixels forms corresponding border lines, which represent the silhouette and contour lines.
Preferably, the system comprises, for each computer unit, normalization means, for normalizing the silhouette and contours (i.e., scaled and, whenever required, rotated), in order to allow correlation between (the normalized) key features of the captured vehicle and 'full/real sized' key features belonging to actual vehicle's models. The normalization includes scaling and, whenever required, rotation of the picture in the AOI. The scaling factor is determined according to the known distance of the video acquisition device from captured vehicles, and the rotation angle is determined according to the known relative height and angle (i.e., orientation) of the video acquisition device with respect to the captured vehicles.
According to one aspect, the CMDB is external to the respective computer unit, and communicates with the latter computer unit by utilizing a bidirectional communication channel. According to another aspect, the CMDB resides within the respective computer unit.
Preferably, the system comprises, for each computer unit, color (i.e., of the vehicle) extraction means, which comprises: a) means for analyzing the characteristics of the pixels of the AOI, for allowing calculating averaged color values. Each one of the averaged color values corresponds to a different area of the vehicle (e.g., doors); b) means for choosing the averaged color having the maximal value as the representative color of the vehicle; c) Known color reference, for measuring the effect of ambient factors on the known color reference; d) Means for modifying the representative color of the vehicle according to the measured effect of the ambient factors on the known color reference. The modified representative color is the true color of the vehicle.
Preferably, the representative color is modified by utilizing color control techniques, such as AGC (Automatic Gain Control) or white balance.
Preferably, the license plate number of the vehicle is obtained, by each computer unit, by employing the known License Plate Recognition (LPR) technique, which utilizes the Optical Character Recognition (OCR) algorithm.
According to the preferred embodiment of the present invention, the video acquisition device (i.e., the 'capturing site') is stationary, with respect to the preferred covered area, and the color reference is a standard calibration colored plate, which is located in a fixed location in a way that the calibration colored plate continuously appears in a pre-determined size and location in the FOV of the video acquisition device. By knowing the true color of the standard calibration colored plate (i.e., which is fed to the computer unit), the effect of the ambient factors (e.g., light, shadows, dust) thereon is continuously measured, and the representative color of the vehicle is modified according to the measurement of the color reference at the time of the actual capturing of the vehicle's image. The modified representative color of the vehicle is the true color of the vehicle.
According to another aspect of the present invention, the video acquisition device (i.e., the 'capturing site') is non-stationary, i.e., it is attached to, e.g., a police car that travels along a road, seeking vehicles. According to this aspect, the image of the captured vehicle is a still picture, and the color reference is obtained by periodically placing a standard calibration colored plate in front of the video acquisition device, and measuring the effect of the ambient factors thereon. The more frequently the effect of the ambient factors (i.e., on the standard calibration colored plate) is measured, the more accurate the modification of the representative color of the vehicle is, and, consequently, the more accurate is the vehicle's color determination process. Alternatively, a color reference, in the form of a relatively small standard calibration colored plate, may be continuously positioned in a fixed location in front of the video acquisition device, in order to allow continuous measurement of the effect of ambient factors, while allowing the video acquisition device to capture images of vehicles.
According to the latter aspect, a distance measuring sub-system is provided, the task of which is to continuously measure the distance between the (moving) video acquisition device and the (moving) vehicle whose image is to be captured. The measured distance and the (known) relative height of the video acquisition device (i.e., being installed on, e.g., a police car) allow the computer unit to optimize the normalization of the image captured by the video acquisition device.
According to another aspect of the present invention, several of the capturing sites may be stationary, while other capturing sites are non-stationary. The system may further comprise, for each capturing site, a Work Model Database (WMDB), which includes data (hereinafter "vehicles profiles") related to vehicles of interest (e.g., vehicles registered as stolen, a vehicle that is allowed to enter a restricted area). Each one of the vehicles profiles includes a unique combination of license plate number, color and model related to a specific vehicle of interest, and, optionally, additional data, such as the vehicle's owner, the owner's details (i.e., residence, occupation, driving license, accidents the owner was involved in, etc.).
The WMDB may reside within the respective computer unit, or be external to the latter computer unit. Additionally, the content of the WMDB may be temporary, i.e., data, which is related to a specific vehicle of interest, may be deleted or updated after the latter vehicle is identified and a corresponding response is commenced.
Preferably, each one of the capturing sites may utilize a transceiver unit, for allowing the respective computer unit to exchange data with the central control station.
According to a preferred embodiment of the present invention, the plurality of capturing sites are deployed in predetermined locations.
The central control station comprises a Global Model Database (GMDB), which includes data that is related to the status of selected vehicles, and data that is related to essentially every existing vehicle design model, a central computer and a transceiver, for allowing the central computer to exchange messages with each one of the computer units residing in the respective capturing sites. According to one aspect of the present invention, each one of the video acquisition devices (i.e., in each one of the respective vehicle capturing sites) is continuously operating, and data related to the vehicles' extracted details (i.e., extracted license plate number, color and model) of every captured vehicle, in each one of the capturing sites, is broadcast, essentially in real-time, to the central computer unit, where the received data is compared to "vehicles profiles" that are stored in the WMDB, after which a corresponding response, which depends on the result of the comparison process, is commenced.
According to another aspect of the present invention, the central control station transmits a request to the computer units of the vehicle capturing sites, to commence a response only upon identification of specific vehicles of interest, meaning that whenever a specific vehicle of interest, as indicated by the central computer unit, is identified by at least one of the capturing sites, the corresponding capturing sites commence a corresponding response.
Whenever there is a match, partial match, or lack of match, between the vehicle's extracted details (i.e., design model, color and license plate No.), and the pre-stored vehicles profiles, which are stored in the WMDB, the computer unit may carry out one, or more, corresponding actions, depending on the type of match. The corresponding actions may be selected from the group of: a. alerting an operator on the scene/capturing site; b. alerting a remote, central surveillance station; c. displaying on a local, and/or remote, screen every available detail related to the vehicle; d. issuing a printout of the vehicle's complete details; e. allowing the vehicle to enter a restricted area by opening a gate/barrier; f. ignoring the vehicle (when the vehicle is not classified as wanted); g. automatically closing, or opening, a gate; h. activating a siren; or i. dialing a predetermined telephone No.
Of course, other actions (than the actions a. to i. above) may be taken. Optionally, the central control station may transmit a request only to preselected capturing sites (i.e., according to predetermined criteria).
The Car Model Database (CMDB), in each capturing site, may be automatically updated with data that corresponds to vehicles' design models, by communicating with the Global Model Database (GMDB).
Preferably, the central control station may further include a central Graphic User Interface (GUI), for allowing a person to interact with the central computer, and, through the central computer, with the computer units in the respective vehicle capturing sites. The interaction may include at least updating the content of the three databases (i.e., GMDB, CMDB and WMDB), exchanging messages (i.e., between the central computer and the computer units), transmitting requests, inquiries and directives from the central control station to the computer units residing in the respective capturing sites, presenting a picture, and related data, of specific wanted vehicles that are identified, etc.
Optionally, each one of the vehicle capturing sites may include an independent GUI, for allowing operation, calibration and maintenance of the respective vehicle capturing site. According to one aspect of the present invention, a capturing site may be independent; i.e., a capturing site is a 'stand-alone' facility, operating without communicating with a central control station.
Brief Description of the Drawings
The above and other characteristics and advantages of the invention will be better understood through the following illustrative and non-limitative detailed description of preferred embodiments thereof, with reference to the appended drawings, wherein:
Figs. 1a to 1c (prior art) show an image of an exemplary captured vehicle, and the vehicle's 'straightforward' silhouette and manipulated side silhouette, respectively;
Figs. 2a and 2b show image pictures of the left-hand side and back side of a captured vehicle, respectively, according to a preferred embodiment of the present invention;
Figs. 3a and 3b show the silhouette and contours that correspond to the image pictures shown in Figs. 2a and 2b after employing image segmentation process, according to the preferred embodiment of the invention;
Fig. 4 shows a 'negative' picture of the silhouette and contours shown in Fig. 3b, which is used for extraction of key features, for determining the model of the vehicle;
Fig. 5 shows exemplary steps for extracting color, license plate number and model of a vehicle, according to a preferred embodiment of the present invention; and
Fig. 6 schematically illustrates a basic system that comprises a capturing site and a control station.
Detailed Description of Preferred Embodiments

Figs. 2a and 2b show exemplary digitized image pictures of the left-hand side and back side of a vehicle that was captured in the FOV of the camera (not shown), respectively, according to a preferred embodiment of the present invention. Only one (digitized) image picture (i.e., the image picture shown, for example, in Fig. 2a or in Fig. 2b) is required for obtaining the silhouette and contours of the vehicle, and, therefrom, the model of the vehicle. The two figures (i.e., Figs. 2a and 2b) are only meant to suggest that the model of a (wanted) vehicle may be obtained by capturing the vehicle at different angles. The image picture of a vehicle may be obtained in a 'straightforward' manner if the video acquisition device (not shown) is located in a fixed position with respect to the captured vehicle. In cases where there is a relative motion, i.e., between the video acquisition device (not shown) and the captured vehicle, a motion-detection algorithm is employed on the video stream, which is provided by the video acquisition device, to identify the occurrence of a motion. The identified occurrence of the motion allows extraction of the image picture that includes the image of the (wanted) vehicle.
The digitized image picture is utilized for identification of the license plate number, color and model of the vehicle whose image was captured, as is described in connection with the respective figure(s).
Figs. 3a and 3b show the silhouette and contours, after the image segmentation process, that correspond to the digitized image pictures shown in Figs. 2a and 2b, respectively, according to the preferred embodiment of the invention. The silhouette and contours of the vehicle are obtained by employing image analysis and segmentation techniques on the corresponding pixels of the digitized image that is included in the Area of Interest (AOI) (not shown), which is extracted from the FOV. Employing the image analysis and segmentation techniques allows identifying pixels ('points') that represent transitions (i.e., borders) between two adjacent elements, or objects, of the vehicle (e.g., between a window and the roof). The identified border pixels form essentially broken lines, which represent with great accuracy the captured vehicle.
In general, image analysis and segmentation techniques return information associated with the intensity characteristics of a digitized image, which may include one or more objects. The latter techniques utilize edge detection algorithms to detect edges, which are places, or points, in the digitized image, that correspond to object(s) boundaries. The more advanced edge detection techniques involve the use of color data to locate edges in a scene, as utilization of color differences between regions allows obtaining more precise edge detection. Further description of color edge detection may be found in, e.g., "A Computational Approach to Edge Detection" (IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-8, No. 6, November 1986, pp. 679-698), and "CS 223B Project: Color Edge Detection Results", Kate Starbird / John Owens (web site: http://graphics.stanford.edu/~jowens/223b/results.html).
The corresponding edges are identified by identifying abrupt changes in the intensity characteristics of the digitized image. The abrupt changes may be identified using one of the following criteria, according to which the abrupt changes are associated with places wherein:
1. The first derivative of the intensity is larger in magnitude than a predetermined threshold; or
2. The second derivative of the intensity has a zero crossing. The most powerful segmentation and edge-detection methods are the Sobel, Prewitt, Roberts, Laplacian of Gaussian, zero-crossing and Canny methods. These known techniques utilize several derivative estimators, each of which depends on whether the derivative operation should be sensitive to horizontal or to vertical edges, or both.
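The two criteria can be written out directly with standard derivative estimators (here OpenCV's Sobel and Laplacian operators); the gradient threshold is illustrative.

```python
# Criterion 1: gradient (first derivative) magnitude above a threshold.
# Criterion 2: zero crossings of the Laplacian (second derivative).
import cv2
import numpy as np

def first_derivative_edges(gray, threshold=100):
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)   # sensitive to vertical edges
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)   # sensitive to horizontal edges
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold

def second_derivative_edges(gray):
    lap = cv2.Laplacian(gray, cv2.CV_64F)
    # A zero crossing exists where the Laplacian changes sign between
    # horizontally or vertically adjacent pixels.
    sign = np.sign(lap)
    zc = np.zeros(sign.shape, dtype=bool)
    zc[:, 1:] |= sign[:, 1:] * sign[:, :-1] < 0
    zc[1:, :] |= sign[1:, :] * sign[:-1, :] < 0
    return zc
```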
Fig. 4 shows a 'negative' picture of the silhouette and contours shown in Fig. 3b. The silhouette and contours shown in Fig. 4 are the result of the vehicle being captured at right-angle with respect to the left-hand side of the vehicle. However, the vehicle may be captured from different angles and distances, in which case a 'normalization' process will take place, after which the resulting normalized image will appear essentially like the exemplary image shown in Fig. 3a, or in Fig. 3b.
The normalization process is associated with scaling (i.e., distance compensation) and rotation (i.e., angle compensation) of the obtained silhouette and contours, in order to allow correlation between (the normalized) key features that are part of, and/or resulting from, the silhouette and contours of the captured vehicle, and 'full real sized' key features belonging to actual models of ('real') vehicles, thereby allowing extracting the model of the vehicle. The full/real sized key features are stored in a Car Model Database (CMDB) (not shown). The scaling factor is determined according to the known distance of the video acquisition device from captured vehicles, and the rotation angle is determined according to the known relative height and angle (i.e., orientation) of the video acquisition device with respect to the captured vehicles.
In Fig. 4, the silhouette and contours are indicated as broken lines, or dots. Normally, after being normalized, the ('original') broken lines (i.e., dots) of the silhouette and contours are reconstructed; i.e., missing sections are added to the broken lines of the silhouette and contours, by employing corresponding 'Line tracking' algorithms. However, reconstruction of the silhouette and/or contours (not shown) may be optional in some cases, as the original broken lines may be adequate for identifying the required key features that are included within the silhouette and contours.
Key features may comprise selected portions, sections or segments of the completed, or of the original, silhouette and/or contours. For example, lines 41 and 44 may be selected, which represent the unique shape of the back right-hand side lamp and roof of the exemplary vehicle, respectively. By knowing the coordinates (not shown) of the corresponding pixels, lines 41 and 44 are mathematically characterized; i.e., corresponding mathematical equations are derived, which represent lines 41 and 44. The coefficients of the derived mathematical equations may be stored in a temporary storage array, so that they can, whenever required, be compared to corresponding data that is stored in the CMDB (not shown).
Additionally, or alternatively, key features may be selected, which represent prominent areas, or surfaces, of parts, or elements, for example, the surface of bumper 46 and lamp 43.
Additionally, or alternatively, key features may be selected, which represent lengths, for example, the maximal width (47) of the vehicle, and/or the width of the back window (48) and/or the distance between the (centers of the) wheels (49), and/or the height of the vehicle (50).
Additionally, or alternatively, key features may be selected, which represent typical angles; for example, angles α and β. Additionally, or alternatively, key features may be selected, which represent ratios between two elements, or other key features, of the vehicle, for example, between lines 41 and 44, between surfaces 43 and 46, between angles α and β, etc. Of course, other, or additional, key features may be selected, for enhancing the process of model extraction.
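One way the mathematical characterization of such key features (e.g., lines 41 and 44) could be realized is a polynomial fit over the border pixels, whose coefficients then serve as the signature compared against the CMDB; the fit order and the distance measure are assumptions, since the text does not fix the form of the equations.

```python
# Fit a low-order polynomial to the border pixels of one contour line and
# keep its coefficients as the line's signature for CMDB comparison.
import numpy as np

def characterize_line(pixel_coords, degree=3):
    """pixel_coords: Nx2 array of (x, y) border pixels forming one line."""
    xs, ys = pixel_coords[:, 0], pixel_coords[:, 1]
    return np.polyfit(xs, ys, degree)      # coefficients of y = f(x)

def compare_signatures(coeffs_a, coeffs_b):
    # Smaller distance -> better match with the stored key feature.
    return float(np.linalg.norm(np.asarray(coeffs_a) - np.asarray(coeffs_b)))
```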
According to the present invention, a mathematical model is formed, which comprises a group of selected key features (for example, surface 46, length of the vehicle 47 and angle β). The mathematical model is, then, correlated with the vehicle models that are stored in the CMDB, after which (normally) one, or more, vehicle model(s) are found, which match the mathematical model. The number and type of the key features (i.e., forming the 'group') are determined so as to allow only one correlation iteration; i.e., in most cases only one iteration would be required, between the mathematical model and the data stored in the CMDB, for finding only one vehicle model that perfectly matches the mathematical model. The latter vehicle model would be determined to be the model of the captured vehicle. If the latter correlation process yields more than one vehicle model, additional key features will be correlated with corresponding data stored in the CMDB, until only one model is found, which best matches the mathematical model. The mathematical model may be a two-dimensional (2D) model or a three-dimensional (3D) model.
The present invention also provides a system, which utilizes the model of a vehicle, which is obtained in the way described above, for obtaining a complete (i.e., license plate number, color and model) identification of the vehicle, as described in Fig. 5. Fig. 5 shows a general block diagram of the present invention. Video camera 501 generates a continuous video stream that corresponds to vehicle 516 that passes by in the FOV of video camera 501. The continuous video stream is forwarded to a computer unit (not shown), which utilizes software (not shown) for detecting a motion that is related to vehicle 516, or, whenever applicable, to other moving objects (not shown). Detecting a motion (i.e., in the video stream) is carried out in step 502, by employing one of the known motion-detection algorithms. Step 502 may be implemented using either software or hardware tools.
Whenever a motion is detected, in step 502, a check is made, in step 503, whether the (detected) moving object is a vehicle. The check is based on prior knowledge that is related to the location of video camera 501 with respect to the monitored, or inspected, area, and to the expected relative area of vehicles with respect to the area of the FOV of video camera 501. If no motion is detected, or if a motion is detected, but of an object other than a vehicle, the system continues to check for motion that is associated with a vehicle (503a).
If a moving vehicle is detected, an image picture (i.e., which includes the captured vehicle) is selected from the corresponding video stream (not shown). In order to save computational time and resources, the segmentation process is employed on a 'narrowed' picture that includes the image of the captured vehicle, rather than on the whole image picture. Accordingly, an Area Of Interest (AOI) is extracted from the image picture (in step 504), which includes the vehicle's image. In order to save computational time, the AOI is extracted by utilizing motion considerations; i.e., by identifying pixels whose characteristics indicate that a motion has occurred, or by identifying the 'center of motion' of the moving vehicle. If more than one vehicle is detected in the image picture, a corresponding AOI is extracted for each one of the detected vehicles. The digitized video content of (each one of) the AOI undergoes filtration, noise reduction, light condition correction and brightness control, in order to obtain an essentially noiseless picture (step 505), after which the model, color and license plate number of the vehicle are extracted (steps 511, 507 and 508, respectively).
In order to extract the model of a vehicle, the vehicle's silhouette and contours are first extracted from the corresponding AOI, by employing the segmentation process (step 506), and preferred key features are identified in the silhouette and contours and mathematically characterized. A group of key features is formed, which consists of several key features that are selected from the preferred key features (step 506a). The grouped key features are translated into a mathematical model (step 506b), which may represent either a 2D or 3D surface of the vehicle. The mathematical model is correlated (step 506c) with the corresponding data stored in the CMDB (not shown), which is related to equivalent mathematical models of corresponding existing vehicle design models. The mathematical model may be obtained by using, e.g., the known "structure from X" algorithm, or "shade from X" algorithm. The 2D (or 3D) surface represents the detected vehicle.
Normally, correlation of the mathematical model with the equivalent mathematical models, that correspond to respective existing vehicle design models, yields different probability values for different vehicle design models, and the vehicle model that yields the highest probability value is determined to be the model of the (captured) vehicle. A predetermined probability threshold and probability margin may be utilized, in order to enhance and improve the model-decision process. However, due to the increased similarity between models of modern vehicles (i.e., resulting from fashion considerations, functionality and imitation), it is probable that correlation of the mathematical model would yield two, or more, high probability values with a very small margin, which would result in 'finding' two, or more, 'candidate' (i.e., matching) models. Accordingly, if more than one model is found (step 509), a further correlation iteration is required (step 510). Therefore, in step 510a a key feature (i.e., other than those forming the group) is selected from the key features identified in step 506, and correlated (step 510b) with data, which corresponds to an equivalent key feature, that is stored in the CMDB. The key features selected in step 510 are selected according to a priori knowledge that is related to the matching models (e.g., wheel or light distances).
Steps 510a and 510b are repeated until one vehicle model is determined to be the vehicle's model, based on the accumulating results of the correlation iterations. The model determination conforms to the rule, according to which the more key features are involved in the correlation process (i.e., more iterations are performed), the more decisive the decision is regarding the true design model of the vehicle.
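The iteration of steps 510a and 510b can be sketched as a candidate-narrowing loop: one additional discriminative key feature is correlated per round until a single model survives. The retention factor below is invented for illustration.

```python
# Candidate narrowing over successive key-feature correlations.
def narrow_candidates(candidates, key_features, correlate):
    """candidates: set of model names still in play; key_features: features
    not yet used; correlate(feature): {model: probability} for one feature."""
    for feature in key_features:            # step 510a: pick one more feature
        scores = correlate(feature)         # step 510b: correlate with CMDB data
        best = max(scores[m] for m in candidates)
        # Keep only models whose score stays close to the best for this feature.
        candidates = {m for m in candidates if scores[m] >= 0.9 * best}
        if len(candidates) == 1:
            break                           # a single, decisive model remains
    return candidates
```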
In step 507, the color of the vehicle is extracted, by first analyzing the characteristics of the pixels of the corresponding AOI, for allowing calculating averaged color values. Each one of the averaged color values corresponds to a different area, or element, of the vehicle, for example, doors, bumper, lamp covers, etc. Then, the averaged color having the maximal value is chosen as the representative color of the vehicle. A known color reference is utilized for measuring the effect of ambient factors on the color reference, and the representative color of the vehicle is modified according to the measured effect of the ambient factors on the known color reference. The modified representative color is the true color of the vehicle. The average color of the vehicle is extracted taking into account all colored portions of the vehicle (i.e., ignoring windows and the like). Since the lighting conditions change over the day's progression, it is impossible to determine the color of a vehicle just by sampling its color, as it would appear different at different hours of the day. Accordingly, the color of the vehicle is deduced by comparing the sampled color with a known reference color as appearing at the time of the sampling, and extrapolating the difference in shading onto the color sampled from the vehicle.
Currently, there are techniques which may assist in refining the process of color extraction. For example, it is possible to utilize Automatic Gain Control (AGC), or the white balance technique.
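A white-balance-style sketch of the reference correction in step 507: per-channel gains are estimated from the calibration plate's known true color versus its currently imaged color, and applied to the sampled vehicle color. The numbers in the example are invented.

```python
# Estimate per-channel gains from the calibration plate, then correct
# the sampled vehicle color with the same gains.
import numpy as np

def correct_color(sampled_rgb, plate_true_rgb, plate_observed_rgb):
    gains = np.asarray(plate_true_rgb, float) / np.maximum(
        np.asarray(plate_observed_rgb, float), 1e-6)
    corrected = np.asarray(sampled_rgb, float) * gains
    return np.clip(corrected, 0, 255)

# Example: the plate is known to be mid grey (128,128,128) but currently
# images as (96,120,140) under evening light; the same gains then correct
# the vehicle's sampled color.
print(correct_color((80, 20, 30), (128, 128, 128), (96, 120, 140)))
```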
Knowing the location (i.e., orientation, height and distance) of video camera 501, with respect to captured vehicles (e.g., vehicle 516), and expecting the captured objects to be vehicles, including knowing the general shape of vehicles, provide a 'prior knowledge' as to the relative location of the larger areas where to search for the dominant color (i.e., vehicle's color) of the captured vehicle (e.g., the area of the lower part of the doors of the vehicle, if the video acquisition device is located to capture the left, or right-hand side of the vehicle).
Accordingly, in order to obtain a fast-responding system (i.e., regarding determination of the vehicle's color), analyzing the pixels and calculating averaged color values are limited to selected pre-defined areas, where the probability of detecting painted parts of the vehicle is relatively high, for example, the lower part of the doors, or the engine cover.
In step 508, the vehicle's license plate number is identified. As described in the background, this feature is already widely implemented using the well-known License Plate Recognition (LPR) technology, which utilizes the so-called Optical Character Recognition (OCR) algorithm, and is incorporated in this invention with no significant changes. It should be noted that appropriate consideration is given to cope with photographing conditions which might impair the picture, such as movement blur, lighting variations, various angles of photography, etc. Of course, the license plate number may be extracted, or identified, by utilizing other known techniques.
The extracted vehicle details (i.e., the design model, color and license plate number, step 512) are compared (in step 513) to combinations of models, colors and license plate numbers of registered vehicles, which are part of vehicles profiles that are forwarded to a WMDB (513) from a Global Model Database (GMDB, 514), and are temporarily stored in the WMDB. The vehicles profiles may include additional data, such as details of the vehicle's owner, the serial number of the vehicle's engine, accidents in which the vehicle was involved, whether the vehicle is wanted for any reason (and the reason therefor), etc.
If a (perfect) match is found between the extracted vehicle details and one of the combinations of color, license plate number and model of a specific vehicle's profile that are contained within the WMDB (in step 515), a first set of predetermined actions may be commenced (step 516). For example, the system may open a gate for allowing the vehicle to enter a restricted area. Another example would be identifying the entering and exiting instants at which a vehicle enters and exits a toll road. If there is only a partial match (step 517), for example, a match is found only with regard to the color and license plate number, or there is no match at all (step 519), the vehicle may be determined as suspected as, e.g., stolen. Accordingly, a second predetermined set of actions may be commenced (step 518).

Fig. 6 schematically illustrates the general layout and functionality of the system, according to the preferred embodiment of the present invention. Reference numeral 600 denotes an exemplary vehicle capturing site, which comprises at least video camera 601, computer unit 603, WMDB 605 (an option), Car Model Database (CMDB) 611 and transceiver 606.
Video camera 601 generates video data, in the form of still images, or a stream of video frames, that corresponds to object(s) passing in front of the FOV of video camera 601, for example, vehicle 610. The video data that is captured by video camera 601 is processed and analyzed 'on-site', by computer unit 603, which is located relatively close to video camera 601. Video camera 601 is directed in a preferred direction, so as to monitor a preferred area, where vehicles are likely to pass. In order to allow the processing and analysis of images that correspond to vehicles, and/or to other objects, computer unit 603 employs a corresponding software package 604.
The vehicles' color, license plate number and model (i.e., the "extracted vehicle details", reference numeral 607) are extracted from the image picture (not shown) of the captured vehicle by utilizing software package 604, essentially as described before. The vehicle's model is extracted, or determined, by correlating key features (not shown), which are extracted from the corresponding silhouette and contours (see Fig. 4), and a corresponding mathematical model, that are (i.e., key features and mathematical model) obtained from the processed and analyzed image picture of the vehicle, with corresponding key features and mathematical models that are stored in CMDB 611 and characterize essentially every existing vehicle model. WMDB 605 contains vehicle profiles of interest (e.g., of stolen vehicles), such as vehicle profile 609. The vehicle profiles of interest may be updated, for example, whenever new relevant information should be added (e.g., indicating that a wanted vehicle was last seen with two people in it). Alternatively, or additionally, old vehicle profiles may be discarded from, and new vehicle profiles added to, WMDB 605, respectively. Transceiver 606 allows computer unit 603 to communicate the extracted details 607 to a remote server, via a communication network 623.
Reference numeral 600a denotes a central control station that includes at least a server 621 and transceiver 606a. Server 621 includes at least GMDB 620 and a software package (not shown), the task of which is at least to control data flow via transceiver 606a and to compare incoming data (e.g., data forwarded from computer unit 603 residing in capturing site 600) to data stored in GMDB 620. GMDB 620 contains a plurality of vehicle profiles, such as exemplary vehicle profile 609, which are related to essentially every registered vehicle, some of which may be forwarded to WMDB 605. Alternatively, the vehicle profiles, contained in GMDB 620, may be associated only with a selected group of registered vehicles. In addition, GMDB 620 may also contain key features and mathematical models that represent essentially every existing design model (i.e., of vehicles), some of which may be forwarded, whenever required, to CMDB 611. Alternatively, the key features and mathematical models required for operating capturing site 600 may be uploaded locally, at capturing site 600. In order to allow complete identification of a specific vehicle, the content of the corresponding vehicle profile must include at least the corresponding design model, color and license plate number of the specific registered vehicle. Referring again to exemplary vehicle profile 609, it includes an exemplary "license plate number: AA-BBB-CC", "(Design) Model: Mazda 323 1.6L" and "Color: red". Optionally, vehicle profile 609 may include other identifying details, such as the vehicle's owner, the validity of its license, its accidents history, whether or not it is stolen or used by criminals/terrorists, etc., and is sufficient to completely identify any registered vehicle.
Computer unit 603 communicates with remote server 621 by utilizing transceivers 606 and 606a, which may communicate with each other via communication network 623, which may be, e.g., a cellular network, the Internet, a Local Area Network (LAN), or a Wide Area Network (WAN).
Several capturing sites, such as capturing site 600 (e.g., capturing sites 600/1, 600/2 and 600/3), may be deployed to cover a large area of interest, according to a desired deployment strategy, each site cooperating with central control station 600a while operating independently of the other sites.
The system shown in Fig. 6 may operate in any of at least three modes. The first mode involves unconditionally forwarding the extracted details of every vehicle that appears in the FOV of every video camera to remote server 621, which may then decide upon actions to be carried out. After being forwarded, the extracted details are compared, in server 621 of central control station 600a, to the vehicle profiles that are stored in GMDB 620. Each vehicle profile has essentially the structure of exemplary vehicle profile 609. Server 621 may commence a series of actions according to the comparison result(s). For example, a perfect match is found between the (license plate number, color and model of the) captured vehicle 610 and a corresponding vehicle profile in GMDB 620 (i.e., vehicle profile 609), and vehicle 610 is registered as a vehicle that has permission to enter a restricted area. Accordingly, capturing site 600 may be utilized (i.e., by central control station 600a) to relay a command, in the form of a corresponding transmission, from server 621, to open automatic gate 610a, by forwarding the transmission from transceiver 606a to transceiver 606 via communication network 623, and from transceiver 606 to gate 610a via wireless communication channel 610b. According to another example, there is no perfect match between a captured vehicle and any of the vehicle profiles contained in GMDB 620 (e.g., there is a model mismatch), in which case server 621 may forward that fact to the corresponding capturing site, in order to allow the latter site to act accordingly (e.g., stop the vehicle for interrogation). According to one example, a system operating in the first mode may be incorporated into a 'toll road' system, in order to allow calculating the travel fees of vehicles, by continuously forwarding extracted details (e.g., extracted details 607) of vehicles whenever they appear in the FOV of each one of the corresponding video cameras (e.g., camera 601). According to another example, the system may be incorporated into a parking system, for granting automatic entrance to pre-paid members, or, alternatively, for calculating the parking fee of non-members. According to still another example, the system may be utilized for evaluating the average speed of a vehicle, by measuring the travel time of the vehicle between two selected capturing sites whose relative location is known.
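For instance, the average-speed evaluation mentioned above reduces to a simple computation once the same vehicle (matched by license plate number, color and model) has been identified at two capturing sites whose separation is known. The following sketch assumes illustrative capture records, field names and a 12.4 km site separation:

    from datetime import datetime

    # Hypothetical capture records for the same vehicle at two sites 12.4 km apart.
    capture_a = {"site": "600/1", "plate": "AA-BBB-CC",
                 "time": datetime(2003, 11, 3, 10, 0, 0)}
    capture_b = {"site": "600/2", "plate": "AA-BBB-CC",
                 "time": datetime(2003, 11, 3, 10, 8, 30)}
    DISTANCE_KM = 12.4  # known relative location of the two capturing sites

    def average_speed_kmh(first, second, distance_km):
        """Average speed from the travel time between two capturing sites."""
        hours = (second["time"] - first["time"]).total_seconds() / 3600.0
        return distance_km / hours

    print(f"{average_speed_kmh(capture_a, capture_b, DISTANCE_KM):.1f} km/h")  # ~87.5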
The second mode involves forwarding, from server 621 to corresponding capturing sites, data that relates to specific vehicles of interest. In the example shown in Fig. 6, vehicle profile 609 was forwarded from central control station 600a to WMDB 605 because the specific vehicle (i.e., vehicle 610) is wanted for being stolen, as can be seen in the (magnified) vehicle profile 609 (i.e., "Reason: stolen"). In Fig. 6, the image of exemplary vehicle 610 (having plate number 'AA-BBB-CC', color 'red' and model 'Mazda 323 1.6L') is captured by video camera 601, forwarded to, and processed and analyzed by, software package 604, in order to obtain vehicle 610's extracted details 607. Software package 604 is also responsible for comparing the extracted details 607 to the corresponding vehicle profiles 608 that are stored in WMDB 605, in order to find a matching vehicle profile; i.e., extracted details 607 match vehicle profile 609 in WMDB 605, since the combination of the license plate number (i.e., 'AA-BBB-CC'), the vehicle's color (i.e., 'red') and the model (i.e., 'Mazda 323 1.6L') is identical to the combination of the license plate number, color and model that are stored in vehicle profile 609. After the wanted vehicle (i.e., vehicle 610) is identified by capturing site 600 (i.e., by finding matching details in record 609 in WMDB 605), according to one aspect of the present invention, the fact that the latter vehicle has been captured may be forwarded from computer unit 603 to central control station 600a, via communication network 623, in order for control station 600a to decide upon the actions to be carried out. According to another aspect, computer unit 603 decides upon the actions to be carried out.
Another example may involve two persons witnessing a 'hit and run' accident, and a policeman holding them for questioning. One person may remember only the color of the offending vehicle (e.g., "RED"), and the other person may identify only the 'general' model of the vehicle (e.g., FORD). The policeman may then communicate the color (Red) and model (Ford) of the offending vehicle to central control station 600a, which may forward to selected capturing sites a message such as "Stop every vehicle whose color and model are Red and FORD, respectively, for being involved in a 'hit and run' scenario". Accordingly, server 621 communicates a vehicle profile, such as vehicle profile 609, which contains essentially only five items; i.e., "Color: Red", "Model: Ford", "Wanted: yes", "Reason: involved in a 'hit and run' accident", and the latter message. According to the third mode of operation, each capturing site is a 'standalone' facility; i.e., a capturing site, such as capturing site 600, is operable regardless of the existence of a central control station such as control station 600a. A system operating according to the third mode may be utilized, for example, by residents of a private apartment house, in which case the residents own the system, operate it, and are responsible for the content of WMDB 605, which may include information about the vehicles of the residents and, optionally, of the residents' friends or relatives.
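To make the local matching concrete, a WMDB lookup of the kind performed by software package 604 may be sketched as follows. The dictionary layout and field names are illustrative assumptions; note that a partial profile, such as the hit-and-run profile above that specifies only color and model, simply does not constrain the missing fields:

    # Hypothetical extracted details for a captured vehicle (cf. details 607).
    extracted = {"plate": "AA-BBB-CC", "color": "red", "model": "Mazda 323 1.6L"}

    # Hypothetical WMDB content: profiles may be complete, or partial as in the
    # hit-and-run example, where only color and model were reported.
    wmdb = [
        {"plate": "AA-BBB-CC", "color": "red", "model": "Mazda 323 1.6L",
         "wanted": True, "reason": "stolen"},
        {"color": "red", "model": "Ford", "wanted": True,
         "reason": "involved in a 'hit and run' accident"},
    ]

    def find_matches(details, profiles):
        """Return every profile whose specified fields all match the details."""
        keys = ("plate", "color", "model")
        return [p for p in profiles
                if all(p[k].lower() == details[k].lower()
                       for k in keys if k in p)]

    for profile in find_matches(extracted, wmdb):
        print("MATCH:", profile["reason"])   # prints: MATCH: stolen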
The disclosed system has several advantages, one of which is that only one GMDB 620 is utilized, whose data resources are shared by all of the capturing sites, while each capturing site (e.g., capturing site 600) contains a local database (i.e., WMDB 605) with significantly smaller content. A second advantage is that the communication requirements between, e.g., capturing site 600 and remote server 621 are minimal: communication is required only for updating WMDB 605, i.e., whenever a vehicle profile of a new vehicle is to be added to WMDB 605. Another advantage is time efficiency; i.e., there is no need to search a huge database for every vehicle that is analyzed; rather, only a limited number of vehicle profiles is utilized.
Capturing site 600 may be stationary with respect to the monitored area, or mobile, for example if installed in a police car, and it is capable of capturing both stationary and moving vehicles. In the first case (i.e., stationary vehicles), video camera 601 generates a still image picture of the corresponding vehicle. In the second case (i.e., moving vehicles), video camera 601 generates a sequence of corresponding images, and a motion-detection algorithm is employed thereon, to allow extracting an image picture of the captured vehicle. Although the method disclosed in the present invention is utilized for identifying models of vehicles, the method may be utilized for different purposes. For example, the method could be adapted to identify persons, by capturing the image of the face, and/or other parts, of a person; extracting the person's corresponding silhouette and contours; and utilizing identified key features in the silhouette and contours that may assist in discriminating between different individuals. Such key features may be, for example, the distance between the ears of the person, the shape of the person's nose, eyebrows and mouth, etc. Such a person-oriented identification system would be useful for, e.g., automatically opening the door of a secured room only whenever an authorized person wishes to enter the secured room. According to another example, the photos of known, and/or potential, criminals are in the possession of, e.g., a bank, and whenever such a person enters the bank, the person-oriented identification system sends a corresponding alarm, which may include the photo of the suspected person, to the security personnel of the bank and/or to the nearest police station.
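As a minimal sketch of the motion-detection step applied to moving vehicles, successive grayscale frames may be differenced and the changed region taken as the candidate vehicle area. The sketch assumes OpenCV; the threshold and dilation settings are arbitrary example values, and the disclosed motion-detection algorithm is not limited to this form:

    import cv2
    import numpy as np

    def detect_motion_bbox(prev_frame, frame, thresh=25):
        """Difference two grayscale frames and return the bounding box of the
        changed region, or None if no significant motion is detected."""
        diff = cv2.absdiff(prev_frame, frame)                 # per-pixel change
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        mask = cv2.dilate(mask, None, iterations=2)           # close small gaps
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        # The largest moving blob is assumed to be the vehicle; its bounding
        # box approximates the Area Of Interest (AOI).
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        return (x, y, w, h)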
While some embodiments of the invention have been described by way of illustration, it will be apparent that the invention can be carried out in practice with many modifications, variations and adaptations, and with the use of numerous equivalents or alternative solutions that are within the scope of persons skilled in the art, without departing from the spirit of the invention or exceeding the scope of the claims.

Claims

1. Method for automatic and real time identification of vehicles, comprising: A - Providing a video acquisition device capable of generating video data that corresponds to objects passing through the Field of View (FOV) of said device;
B - Providing a computer unit for memorizing the data and controlling the operations set forth hereinafter;
C - Inserting in a Car Model Database (CMDB), data that defines essentially every existing vehicle's design model; and
D - Providing a computer unit for indicating that an object is in the FOV of the image generating apparatus and causing said apparatus to obtain an image picture that includes the image of said object;
E - Extracting an Area Of Interest (AOI) from said image picture;
F - Extracting said image of the object from said AOI;
G - Determining whether said image of the object represents a vehicle; and
H - Whenever said image of the object is determined to be an image of a vehicle, extracting from said image of the vehicle the color and license plate number of the vehicle.
2. Method according to claim 1, wherein the data that define the vehicles' design models comprise a number of characteristic geometric features, the combination or concurrent presence whereof is specific to given models of vehicles, said method further comprising: a) Representing each of said characteristic geometric features by an identifying digital word; b) Determining, memorizing and constantly updating the identifying words of all, or of a sufficiently large number, of known vehicle models; c) Acquiring by the video acquisition device, continuously, or at predetermined intervals, a background image of the selected area in the absence of traffic; d) Comparing the procured images with said background image, whereby to extract images of objects that are not in said background and are assumed to be of vehicles; e) Selecting from said object images the geometric features of the assumed vehicle; f) Representing said features by corresponding digital words; g) Comparing the resulting digital words with said identifying digital words; and h) Determining the result of the comparison among the following: 1) there is a certainty or a high probability that the assumed vehicle is a vehicle of a specific model; 2) there is a certainty or a high probability that the assumed vehicle is a vehicle of one of the models of a group of models; 3) there is a low probability that the assumed vehicle is a vehicle of one of the models of a group of models; 4) there is a random chance that the assumed vehicle is of a specific model or of a limited group of specific models; 5) the assumed vehicle is not of a known model; 6) the assumed vehicle is not a vehicle at all.
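(Non-limiting illustrative sketch of steps g) and h) above: the encoding below quantizes each normalized geometric feature to a 4-bit code packed into one identifying 'digital word', and the probability bands used to grade the comparison outcome are assumed example values, not values taught by the disclosure.)

    def to_digital_word(features, bits=4):
        """Quantize each geometric feature (assumed normalized to [0, 1]) to an
        integer code and pack the codes into one identifying digital word."""
        word = 0
        for f in features:
            code = min(int(f * (1 << bits)), (1 << bits) - 1)
            word = (word << bits) | code
        return word

    def grade_match(score):
        """Map a similarity score in [0, 1] onto the outcome classes of step h)."""
        if score >= 0.95:
            return "certainty/high probability: specific model"
        if score >= 0.80:
            return "certainty/high probability: model within a group of models"
        if score >= 0.60:
            return "low probability: model within a group of models"
        if score >= 0.40:
            return "random chance: specific model or limited group of models"
        return "not a known model, or not a vehicle at all"

    print(hex(to_digital_word([0.62, 0.28, 0.95, 0.41])))
    print(grade_match(0.97))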
3. Method according to claim 1, wherein a television monitor is connected to the video acquisition device, for locally, or remotely, monitoring the selected area, thereby allowing indication of the presence of a vehicle in the FOV.
4. Method according to claim 3, wherein the television monitor is utilized for carrying out calibration and maintenance procedures.
5. Method according to claim 1, wherein the computer unit is initialized with parameters for optimization of the analysis of the stream of consecutive video frames produced by the video acquisition device, selection of a frame that includes the image of the object, extraction of the object from said image, determination whether said object is a vehicle, and extraction of the model, color and license plate number of said vehicle, said parameters may be related to the relative height, tilt angle and distance of the video acquisition device, with respect to the preferred area, for normalizing the captured vehicle, for allowing comparison between key features of the normalized captured vehicle and key features that represent vehicle models and are contained within the CMDB, thereby allowing obtaining the captured vehicle's model, wherein said parameters may relate to a calibration color plate, for optimizing the color determination process, and to the direction of the moving vehicles, for optimizing the image extraction process.
6. Method according to claim 1, wherein the indication to the computer unit, regarding the presence of a vehicle, or another object, in the FOV, and the selection of the image to process, is carried out automatically, by: a) Analyzing, automatically and in real-time, a sequence of corresponding successive video frames, contained in the stream of video frames, that correspond to an object being captured by the video acquisition device, said analysis including utilization of said stream of video frames for detection of the 'motion center' of the corresponding object(s), with respect to the FOV of the video acquisition device, said motion center being detected by employing a motion-detection technique on said successive video frames, said motion-detection technique utilizing a differentiation algorithm, according to which the characteristics of corresponding pixels of said successive video frames are compared, and corresponding differences of characteristics are calculated, thereby detecting a moving object; and b) selecting, from said successive video frames, an image to be processed, said image being selected according to the image's occurrence instant, which is synchronized with the instant of detected motion of the vehicle, so as to ensure that the content of the (selected) image includes all, or at least most, of the object(s) that was captured by said video acquisition device.
7. Method according to claim 1, wherein the motion-detection stage is utilized for obtaining the speed of the captured vehicle, by utilizing the a priori knowledge related to the relative location of the video acquisition device, with respect to the captured vehicle, and the number of video frames per second.
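(Non-limiting illustrative sketch of claim 7: once the known relative location of the video acquisition device yields a metres-per-pixel scale, the vehicle's speed follows from the pixel displacement of its motion center between frames and the frame rate. The scale, displacement and frame-rate values below are assumptions.)

    def speed_kmh(displacement_px, frames_elapsed, metres_per_pixel, fps=25.0):
        """Estimate vehicle speed from pixel displacement between video frames.

        metres_per_pixel is derived from the known relative location (height,
        distance, tilt) of the video acquisition device."""
        metres = displacement_px * metres_per_pixel
        seconds = frames_elapsed / fps
        return (metres / seconds) * 3.6   # m/s -> km/h

    # Motion center moved 120 px over 10 frames at 25 fps, 0.05 m per pixel:
    print(speed_kmh(120, 10, 0.05))   # 54.0 km/h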
8. Method according to claim 1, wherein the indication of a vehicle being in the FOV is 'manual', according to which the operator of the video acquisition device indicates to the computer unit that a vehicle is in the FOV, in order to obtain the image picture that includes a still-picture of the vehicle.
9. Method according to claim 1, wherein selecting the image picture that includes the object is carried out by: a) utilizing a motion-detection algorithm for identifying the instant at which the vehicle enters the FOV (Field of View); b) utilizing the identified instant for obtaining a succession of video frames, from which several consecutive frames are selected that include the image of the vehicle, or a portion thereof, the selection being based on the identified instant, the expected vehicle's average speed and the number of video frames per second; and c) choosing from the selected consecutive frames a middle frame, which is most likely to contain the whole image of the object, said middle frame being said image picture.
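(Non-limiting illustrative sketch of claim 9: the middle frame is chosen from the span of frames during which the vehicle is expected to cross the FOV. The FOV width, expected average speed and frame rate below are assumed example values.)

    def select_middle_frame(entry_frame_idx, fov_width_m=15.0,
                            expected_speed_mps=14.0, fps=25.0):
        """Pick the frame most likely to contain the whole vehicle: the middle
        of the span of frames in which the vehicle is expected to cross the FOV."""
        frames_in_fov = int(fov_width_m / expected_speed_mps * fps)
        return entry_frame_idx + frames_in_fov // 2

    # Vehicle detected entering the FOV at frame 1000:
    print(select_middle_frame(1000))   # frame 1013 (of ~26 frames in the FOV)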
10. Method according to claim 1, wherein more than one object is captured by the same capturing device essentially at the same time, and for each one of said objects a corresponding AOI is identified and analyzed.
11. Method according to claim 1, wherein extraction of the image of the object from the AOI is carried out by employing a segmentation process on the AOI, said segmentation process including: a) Filtering said image, so as to discard effects of ambient conditions, in order to obtain an essentially noiseless picture of the captured object; b) Obtaining the silhouette of said object, and contours contained therein, by employing an edge-detection algorithm; c) Whenever and wherever required, completing said silhouette and contours by adding missing sections, or 'dots', by employing corresponding ('line tracking') algorithms; d) Obtaining a binary image of the silhouette and contours, by converting the reconstructed silhouette and contours to white lines on a black background; and e) Extracting the area that is confined within said silhouette, said extracted area being the image of said object.
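(Non-limiting illustrative sketch of the segmentation of claim 11, using common OpenCV building blocks; the filter size, Canny thresholds and morphological kernel are assumed example values, and the claimed 'line tracking' completion is approximated here by a morphological closing.)

    import cv2
    import numpy as np

    def extract_object_image(aoi_gray):
        """Segment the captured object out of the AOI: filter, edge-detect,
        close gaps in the silhouette, binarize, and keep the enclosed area."""
        smoothed = cv2.GaussianBlur(aoi_gray, (5, 5), 0)      # (a) discard noise
        edges = cv2.Canny(smoothed, 50, 150)                  # (b) silhouette/contours
        kernel = np.ones((3, 3), np.uint8)
        closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)  # (c) fill gaps
        # (d) 'closed' is already binary: white lines on a black background.
        contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        mask = np.zeros_like(aoi_gray)
        if contours:
            cv2.drawContours(mask, [max(contours, key=cv2.contourArea)],
                             -1, 255, thickness=-1)           # (e) area inside silhouette
        return cv2.bitwise_and(aoi_gray, aoi_gray, mask=mask)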
12. Method according to claim 11, wherein the identification of the silhouette of the vehicle's image and the contours of the vehicle's prominent elements is implemented by employing a segmentation process, said segmentation process including employing a 'differentiation operator' on the pixels in the picture contained within the AOI, after which pixels having relatively high value are registered, said pixels representing the border points between adjacent distinguishable areas, and a collection of corresponding pixels forms corresponding border lines, which represent the required silhouette and contour lines.
13. Method according to claim 12, wherein the silhouette and contours, after being identified, are normalized, in order to allow correlation between the normalized key features of the captured vehicle and 'full real sized' key features belonging to actual existing vehicle models, the normalization including scaling and, whenever required, rotation of the picture in the AOI, said scaling factor being determined according to the known distance of the video acquisition device from the captured vehicles, and said rotation angle being determined according to the known relative height and angle of said video acquisition device, with respect to the captured vehicles.
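(Non-limiting illustrative sketch of the normalization of claim 13: the AOI is scaled according to the known camera-to-vehicle distance and rotated to compensate for the known camera tilt. The reference distance and tilt values are assumptions made for the example.)

    import cv2

    def normalize_aoi(aoi, camera_distance_m, reference_distance_m=10.0,
                      tilt_deg=0.0):
        """Scale the AOI to the size it would have at the reference distance,
        and rotate it to compensate for the known camera tilt."""
        scale = camera_distance_m / reference_distance_m   # farther => enlarge
        h, w = aoi.shape[:2]
        resized = cv2.resize(aoi, (int(w * scale), int(h * scale)))
        centre = (resized.shape[1] // 2, resized.shape[0] // 2)
        rot = cv2.getRotationMatrix2D(centre, tilt_deg, 1.0)
        return cv2.warpAffine(resized, rot,
                              (resized.shape[1], resized.shape[0]))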
14. Method according to claim 1, wherein the Car Model Database (CMDB) is external to the computer unit, and communicates with the computer unit by utilizing a bidirectional communication channel.
15. Method according to claim 1, wherein the CMDB resides within the computer unit.
16. Method according to claim 1, wherein the video acquisition device is stationary, with respect to the preferred covered area, and the color reference is a standard calibration colored plate, which is located in a fixed location in a way that the calibration colored plate continuously appears in a pre-determined size and location in the FOV of the video acquisition device, said standard calibration colored plate being utilized for continuously measuring the effect of the ambient factors thereon, and for modifying the representative color of the vehicle according to the measurement results at the time of the actual capturing of the vehicle's image, said modified representative color of the vehicle being the true color of the vehicle.
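(Non-limiting illustrative sketch of the color correction of claim 16, using a simple per-channel gain model, which is an assumption of this sketch; the plate colors below are example values. The observed color of the calibration plate, whose true color is known, yields per-channel gains that are then applied to the vehicle's representative color.)

    import numpy as np

    def true_color(vehicle_rgb_observed, plate_rgb_observed, plate_rgb_known):
        """Correct the vehicle's representative color for ambient factors, using
        a calibration plate of known color seen under the same conditions."""
        observed = np.asarray(plate_rgb_observed, dtype=float)
        known = np.asarray(plate_rgb_known, dtype=float)
        gains = known / np.maximum(observed, 1.0)      # per-channel correction
        corrected = np.asarray(vehicle_rgb_observed, dtype=float) * gains
        return np.clip(corrected, 0, 255).astype(int)

    # Plate known to be neutral grey (128,128,128), observed reddish at dusk:
    print(true_color((180, 60, 70), (150, 110, 115), (128, 128, 128)))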
17. Method according to claim 1, wherein the video acquisition device is non-stationary, and the image of the captured vehicle is a still picture, and the color reference is obtained by periodically facing a standard calibration colored plate in front of the video acquisition device and measuring the effect of the ambient factors thereon.
18. Method according to claim 16, wherein a color reference, in the form of a relatively small standard calibration colored plate, is continuously positioned in a fixed location in front of the video acquisition device, in order to allow continuous measurement of the effect of ambient factors, while allowing the video acquisition device to capture images of vehicles.
19. Method according to claim 17, further comprising a distance measuring sub-system, for continuously measuring the distance between the (moving) video acquisition device and the (moving) vehicle whose image is to be captured, wherein the measured distance and the (known) height of the video acquisition device allow the computer unit to optimize the normalization of the image captured by the video acquisition device.
20. Method according to claim 1, further comprising providing a transceiver unit, said transceiver unit being located at the capturing site, and allowing exchanging data between the computer unit residing within the capturing site, and a central control station.
21. Method according to any of claims 1 to 20, further comprising providing a plurality of capturing sites, said capturing sites being deployed in predetermined locations, and communicating with the central control station.
22. Method according to claim 21, further comprising providing a central control station, for communicating with the plurality of capturing sites, said central control station including a Global Model Database (GMDB), said GMDB including data in the form of vector characteristics, which represent key features and mathematical models that characterize essentially every existing vehicle design model, and data that is related to the status of selected vehicles, said central control station including a central computer and a transceiver, for allowing said central computer to exchange messages with the plurality of capturing sites.
23. Method according to claim 22, further comprising the steps: a) Continuously operating the video acquisition device in each one of the vehicle capturing sites; b) Capturing every image of every vehicle that enters each one of the FOVs of a respective video acquisition device, and performing the steps: b.1) extracting the model, color and license plate number from the respective image picture(s); b.2) identifying and unconditionally transmitting, essentially in real-time, the extracted details of each vehicle to the central control station; b.3) comparing, in said central control station, the vehicle's extracted details to the "vehicle profiles" that are stored in the GMDB; and b.4) responding according to the result of the comparison.
24. Method according to claim 22, further comprising: a) Transmitting a request, from the central control station to the plurality of vehicles capturing sites, to commence a response only upon identification of specific vehicles of interest; and b) Whenever a specific vehicle of interest is identified, by at least one of the capturing sites, commencing a corresponding response by the respective capturing site.
25. Method according to claim 22, further comprising providing a central Graphic User Interface (GUI), for allowing a person to interact with the central computer, and, through the central computer, with the vehicle capturing sites, the interaction including at least updating the content of the GMDB, CMDB and WMDB, transmitting requests and directives from the central control station to the capturing sites, and presenting a picture, and related data, of wanted specific vehicles that are identified.
26. System for automatic and real time identification of a vehicle, comprising a plurality of vehicle capturing sites, for allowing an automatic and real time identification of vehicles by identifying, or extracting, the design model, color and license plate number of the vehicles, and a central control station, capable of communicating with said capturing sites, for allowing checking the status of the vehicle by utilizing its identified, or extracted, model, color and license plate number.
27. System according to claim 26, comprising a central control station and a plurality of capturing sites, each of which comprising: a) Video acquisition device, being located at a known orientation, height and distance, with respect to captured vehicles, and being capable of generating video data, in the form of still images, or a stream of consecutive video frames, that correspond to objects passing in front of the Field Of View (FOV) of said video acquisition device; b) Database (Car Model Database - CMDB), said CMDB containing data, and/or features, and/or images, and/or structures, and/or descriptions, and mathematical models, and key features, that correspond to essentially every existing vehicle's design model; and c) Computer unit and corresponding processing software, said computer unit comprising means for: c.1) allowing indicating to said computer unit that an object is in the FOV, for selecting, from the stream of consecutive video frames, an image picture that includes the image of said object; c.2) extracting an Area Of Interest (AOI) from said image picture, which includes the image of said object; c.3) extracting the image of said object from said AOI, by employing an edge-detection algorithm on the pixels of said AOI; c.4) determining whether the image of said object is an image of a vehicle; and, whenever the object is determined to be a vehicle, c.5) extracting the design model, color and license plate number of the related vehicle.
28. System according to claim 27, further comprising, for each computer unit: a) means for processing and analyzing, automatically and in real-time, a sequence of corresponding consecutive video frames contained in the stream of consecutive video frames, which correspond to an object being captured by the video acquisition device, said analyzing including utilization of the generated stream of video frames for detection of the 'motion center' of the corresponding object(s), with respect to the FOV of the video acquisition device, said motion center being detected by employing a motion-detection technique on said successive video frames, said motion-detection technique utilizing a differentiation algorithm, according to which the characteristics of corresponding pixels of successive video frames are compared, and corresponding differences of characteristics are calculated, thereby detecting a moving object; and b) means for selecting, from the consecutive video frames, an image picture to be processed, the image being selected according to the image's occurrence instant, said instant being synchronized with the instant of detected motion of the vehicle, so as to ensure that the content of said (selected) image picture includes all, or at least most, of the object(s) being captured by said video acquisition device.
29. System according to claim 27, further comprising, for each computer unit, a motion- detection algorithm, which includes identifying a pixel whose characteristics were changed and tracking changes along pixels that form a trajectory, said trajectory originating from said pixel.
30. System according to claim 27, further comprising, for each computer unit, means for identifying the corresponding 'motion-center' of the image, by evaluating pixels whose characteristics are varied due to motion, for extracting the Area Of Interest (AOI) from the image picture.
31. System according to claim 27, further comprising, for each computer unit, extraction means, for extracting the (captured) object from the AOI, said extraction means comprising segmentation means, which includes: a) Means for filtering the image, so as to discard effects of ambient light conditions, dust, shadow, motion blur and clouds, in order to obtain an essentially noiseless picture of the captured object; b) Edge-detection algorithm, for obtaining the silhouette of said object and contours contained therein; c) Means for completing the silhouette and contours by adding missing sections, and/or 'dots', by employing corresponding ('Line tracking') algorithms; d) Means for generating binary image of the silhouette and contours; and e) Means for extracting the area confined within the closed silhouette, the extracted area being the image of the (captured) object.
32. System according to claim 27, wherein the means for extracting the vehicle design model further comprises, for each computer unit: a) Means for identifying, in the image of the object, the silhouette and contours of the vehicle; b) Means for mathematically characterizing preferred key features, said key features being part of, and/or resulting from, said silhouette and said contours; c) Means for generating a mathematical model of the related vehicle, by mathematically characterizing the overall surface area of the related vehicle, by utilizing a corresponding group of characterized key features, said key features of the group being selected from said preferred key features; d) Means for correlating the generated mathematical model with data that is contained within the CMDB and corresponds to existing vehicle design models, for assigning a primary probability value to each one of the existing vehicle design models; and e) Means for choosing the vehicle design model that is assigned the highest primary probability value as the preferred vehicle design model of the related vehicle.
33. System according to claim 32, wherein the mathematical model is a 2-dimensional (2D) model, or, alternatively, a 3-dimensional (3D) model, said 3D model being obtained by applying trigonometric calculations to said 2D model.
34. System according to claim 32, further comprising, for each computer unit, utilization of a probability threshold value and probability margin, for enhancing the vehicle design model determination process.
35. System according to claim 27, wherein the CMDB is, per capturing site, external to the computer unit and communicates with the computer unit by utilizing a bidirectional communication channel.
36. System according to claim 27, wherein the CMDB resides, per capturing site, within the computer unit.
37. System according to claim 27, further comprising, for each computer unit, color extraction means, said color extraction means comprising: a) means for analyzing the characteristics of the pixels of the AOI, for allowing calculating averaged color values, each one of said averaged color values corresponding to a different area of the vehicle; b) means for choosing the averaged color having the maximal value as the representative color of the vehicle; c) Known color reference, for measuring the effect of ambient factors on the known color reference; d) Means for modifying the representative color of the vehicle according to the measured effect of the ambient factors on the known color reference, the modified representative color being the true color of the vehicle.
38. System according to claim 27, wherein several of the video acquisition devices are stationary and the remaining video acquisition devices are non-stationary.
39. System according to claim 27, wherein the central control station comprises a central computer and a transceiver, for allowing said central computer to exchange messages with the computer units residing in the respective capturing site.
40. System according to claim 39, further comprising a Global Model Database (GMDB), said GMDB including data in the form of vector characteristics, which represent key features and mathematical models that characterize essentially every existing vehicle design model, and data that is related to the status of selected vehicles.
41. System according to claim 27, wherein the central control station forwards a message, or request, to every one of the, or only to pre-selected, capturing sites, according to predetermined criteria.
42. System according to claim 39, further comprising a central Graphic User Interface (GUI), for allowing a person to interact with the central computer, and, through the central computer, with the computer units in the respective vehicle capturing sites, the interaction including at least updating the content of the GMDB, CMDB and WMDB, transmitting requests, inquiries and directives from said central control station to said computer units, and presenting a picture, and related data, of wanted specific vehicles that are identified.
PCT/IL2003/000910 2002-11-04 2003-11-03 Automatic, real time and complete identification of vehicles WO2004042673A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2003278585A AU2003278585A1 (en) 2002-11-04 2003-11-03 Automatic, real time and complete identification of vehicles

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IL15263702A IL152637A0 (en) 2002-11-04 2002-11-04 Automatic, real time and complete identification of vehicles
IL152637 2002-11-04

Publications (2)

Publication Number Publication Date
WO2004042673A2 true WO2004042673A2 (en) 2004-05-21
WO2004042673A3 WO2004042673A3 (en) 2004-07-15

Family

ID=32310091

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2003/000910 WO2004042673A2 (en) 2002-11-04 2003-11-03 Automatic, real time and complete identification of vehicles

Country Status (3)

Country Link
AU (1) AU2003278585A1 (en)
IL (1) IL152637A0 (en)
WO (1) WO2004042673A2 (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107436440B (en) * 2017-09-22 2023-09-05 乐山师范学院 Pedestrian flow measurement system based on scanning laser ranging
CN111191481B (en) * 2018-11-14 2023-08-25 杭州海康威视数字技术股份有限公司 Vehicle identification method and system


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5801943A (en) * 1993-07-23 1998-09-01 Condition Monitoring Systems Traffic surveillance and simulation apparatus
GB2344205A (en) * 1998-11-26 2000-05-31 Roke Manor Research Vehicle identification
US20020145541A1 (en) * 2001-03-30 2002-10-10 Communications Res. Lab., Ind. Admin. Inst. (90%) Road traffic monitoring system

Cited By (80)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8265988B2 (en) 2003-02-21 2012-09-11 Accenture Global Services Limited Electronic toll management and vehicle identification
US8775236B2 (en) 2003-02-21 2014-07-08 Accenture Global Services Limited Electronic toll management and vehicle identification
US8660890B2 (en) 2003-02-21 2014-02-25 Accenture Global Services Limited Electronic toll management
US10885369B2 (en) 2003-02-21 2021-01-05 Accenture Global Services Limited Electronic toll management and vehicle identification
US7970644B2 (en) 2003-02-21 2011-06-28 Accenture Global Services Limited Electronic toll management and vehicle identification
US8463642B2 (en) 2003-02-21 2013-06-11 Accenture Global Services Limited Electronic toll management and vehicle identification
US7489981B2 (en) 2004-09-01 2009-02-10 Honda Motor Co., Ltd. Assembly line control system and immobilizer data protocol and communication process flow
EP1803107A1 (en) * 2004-10-11 2007-07-04 Stratech Systems Limited A system and method for automatic exterior and interior inspection of vehicles
EP1803107A4 (en) * 2004-10-11 2009-03-04 Stratech Systems Ltd A system and method for automatic exterior and interior inspection of vehicles
US8155384B2 (en) 2004-10-11 2012-04-10 Stratech Systems Limited System and method for automatic exterior and interior inspection of vehicles
US7372996B2 (en) 2004-12-27 2008-05-13 Trw Automotive U.S. Llc Method and apparatus for determining the position of a vehicle seat
DE102006012914B4 (en) * 2005-03-15 2012-01-19 Visteon Global Technologies Inc. System and method for determining the distance to a preceding vehicle
US8548845B2 (en) 2005-06-10 2013-10-01 Accenture Global Services Limited Electric toll management
US7676392B2 (en) 2005-06-10 2010-03-09 Accenture Global Services Gmbh Electronic toll management
US8775235B2 (en) 2005-06-10 2014-07-08 Accenture Global Services Limited Electric toll management
WO2006134498A3 (en) * 2005-06-10 2007-04-26 Accenture Global Services Gmbh Electronic vehicle indentification
CN101872496B (en) * 2005-06-10 2012-03-21 埃森哲环球服务有限公司 Electronic vehicle indentification
US9240078B2 (en) 2005-06-10 2016-01-19 Accenture Global Services Limited Electronic toll management
US10115242B2 (en) 2005-06-10 2018-10-30 Accenture Global Services Limited Electronic toll management
CN101228558B (en) * 2005-06-10 2010-08-18 埃森哲环球服务有限公司 Electronic vehicle identification
ES2284358A1 (en) * 2005-10-28 2007-11-01 Manuel Angel Cordon Cano System for automatic recognition of vehicles in traffic e.g. parking, has local module of recognition with image recorder for registering image of vehicle, where information is compiled by central module of integration
US8504415B2 (en) 2006-04-14 2013-08-06 Accenture Global Services Limited Electronic toll management for fleet vehicles
US8768755B2 (en) 2006-04-14 2014-07-01 Accenture Global Services Limited Electronic toll management for fleet vehicles
US8301363B2 (en) 2006-11-10 2012-10-30 Eng Celeritas S.R.L. System and method for detection of average speed of vehicles for traffic control
US10565065B2 (en) 2009-04-28 2020-02-18 Getac Technology Corporation Data backup and transfer across multiple cloud computing providers
US10419722B2 (en) 2009-04-28 2019-09-17 Whp Workflow Solutions, Inc. Correlated media source management and response control
US9214191B2 (en) 2009-04-28 2015-12-15 Whp Workflow Solutions, Llc Capture and transmission of media files and associated metadata
US9760573B2 (en) 2009-04-28 2017-09-12 Whp Workflow Solutions, Llc Situational awareness
US10728502B2 (en) 2009-04-28 2020-07-28 Whp Workflow Solutions, Inc. Multiple communications channel file transfer
EP2652680A1 (en) * 2010-12-13 2013-10-23 Incca GmbH System and method for assisting the performance of a maintenance or operating process
DE102011081439B4 (en) 2011-08-23 2019-09-12 Robert Bosch Gmbh Method and device for evaluating an image of a camera of a vehicle
WO2013059255A1 (en) * 2011-10-17 2013-04-25 Whp Workflow Solutions, Llc Situational awareness
US9165198B2 (en) 2012-02-25 2015-10-20 Audi Ag Method for identifying a vehicle during vehicle-to-vehicle communication
DE102012003776B3 (en) * 2012-02-25 2013-07-25 Volkswagen Ag Method for identifying a vehicle in a vehicle-to-vehicle communication
US9471838B2 (en) 2012-09-05 2016-10-18 Motorola Solutions, Inc. Method, apparatus and system for performing facial recognition
ES2530684A1 (en) * 2012-10-03 2015-03-04 Univ Las Palmas Gran Canaria Telematic system to record the withdrawal, transfer and deposit of vehicles (Machine-translation by Google Translate, not legally binding)
WO2014101970A1 (en) 2012-12-31 2014-07-03 Instytut Badawczy Drog I Mostow A method for vehicle identification and a system for vehicle identification
EP2750081A1 (en) 2012-12-31 2014-07-02 Instytut Badawczy Dróg I Mostów A method for vehicle identification and a system for vehicle identification
CN103116986B (en) * 2013-01-21 2014-12-10 信帧电子技术(北京)有限公司 Vehicle identification method
CN103116986A (en) * 2013-01-21 2013-05-22 信帧电子技术(北京)有限公司 Vehicle identification method
CN103279736A (en) * 2013-04-27 2013-09-04 电子科技大学 License plate detection method based on multi-information neighborhood voting
CN103279736B (en) * 2013-04-27 2016-03-30 电子科技大学 A kind of detection method of license plate based on multi-information neighborhood ballot
GB2529366A (en) * 2013-06-13 2016-02-17 G24 Ltd Car park monitoring system and method
WO2014199173A1 (en) * 2013-06-13 2014-12-18 G24 Ltd Car park monitoring system and method
CN103390161A (en) * 2013-07-22 2013-11-13 公安部第三研究所 Method for performing binarization processing on license plate with local shadow area
JP2015064752A (en) * 2013-09-25 2015-04-09 株式会社東芝 Vehicle monitoring device and vehicle monitoring method
FR3018940A1 (en) * 2014-03-24 2015-09-25 Survision AUTOMATIC CLASSIFICATION SYSTEM FOR MOTOR VEHICLES
EP2924671A1 (en) * 2014-03-24 2015-09-30 Survision Automatic automotive vehicle classification system
CN104167095A (en) * 2014-08-05 2014-11-26 江苏省邮电规划设计院有限责任公司 Method for checking vehicle behavior modes on basis of smart cities
WO2016066360A1 (en) * 2014-10-27 2016-05-06 Robert Bosch Gmbh Device and method for operating a parking space
JP2017534118A (en) * 2014-10-27 2017-11-16 ローベルト ボツシユ ゲゼルシヤフト ミツト ベシユレンクテル ハフツングRobert Bosch Gmbh Apparatus and method for operating a parking lot
US10467895B2 (en) 2014-10-27 2019-11-05 Robert Bosch Gmbh Device and method for operating a parking facility
US10878249B2 (en) 2015-10-07 2020-12-29 Accenture Global Solutions Limited Border inspection with aerial cameras
US10846809B2 (en) 2015-10-07 2020-11-24 Accenture Global Services Limited Automated border inspection
EP3154003A1 (en) * 2015-10-07 2017-04-12 Accenture Global Solutions Limited Border inspection with aerial cameras
WO2017080715A1 (en) * 2015-10-19 2017-05-18 Continental Automotive Gmbh Adaptive calibration using visible car details
EP3159828A1 (en) * 2015-10-19 2017-04-26 Continental Automotive GmbH Adaptive calibration using visible car details
BE1023888B1 (en) * 2016-01-20 2017-09-05 Accenture Global Solutions Ltd BORDER INSPECTION WITH AERIAL CAMERAS
WO2017176711A1 (en) * 2016-04-04 2017-10-12 3M Innovative Properties Company Vehicle recognition system using vehicle characteristics
US10445576B2 (en) * 2016-09-23 2019-10-15 Cox Automotive, Inc. Automated vehicle recognition systems
DE102016221521A1 (en) * 2016-11-03 2018-05-03 Jenoptik Robot Gmbh Method for masking information in an image of an object recorded by a traffic monitoring device
CN107316016B (en) * 2017-06-19 2020-06-23 桂林电子科技大学 Vehicle track statistical method based on Hadoop and monitoring video stream
CN107316016A (en) * 2017-06-19 2017-11-03 桂林电子科技大学 A kind of track of vehicle statistical method based on Hadoop and monitoring video flow
US11007892B2 (en) 2018-02-27 2021-05-18 Dr. Ing. H.C. F. Porsche Aktiengesellschaft Method and system for identifying a vehicle type of a vehicle
DE102018104408A1 (en) * 2018-02-27 2019-08-29 Dr. Ing. H.C. F. Porsche Aktiengesellschaft Method and system for recognizing a vehicle type of a vehicle
DE102018006752A1 (en) * 2018-08-24 2020-02-27 Trw Automotive Gmbh Device, method and system for determining a driving characteristic of another vehicle in the environment of its own vehicle
CN110119726A (en) * 2019-05-20 2019-08-13 四川九洲视讯科技有限责任公司 A kind of vehicle brand multi-angle recognition methods based on YOLOv3 model
US20210103288A1 (en) * 2019-10-02 2021-04-08 William D. Nemedi System and method of organizing and controlling autonomous vehicles
US11720969B2 (en) * 2020-02-07 2023-08-08 International Business Machines Corporation Detecting vehicle identity and damage status using single video analysis
CN111368651A (en) * 2020-02-18 2020-07-03 杭州海康威视系统技术有限公司 Vehicle identification method and device and electronic equipment
CN111368651B (en) * 2020-02-18 2024-03-08 杭州海康威视系统技术有限公司 Vehicle identification method and device and electronic equipment
WO2022073018A1 (en) * 2020-09-30 2022-04-07 Rekor Systems, Inc. Systems and methods for traffic monitoring with improved privacy protections
CN112911221A (en) * 2021-01-15 2021-06-04 欧冶云商股份有限公司 Remote live-action storage supervision system based on 5G and VR videos
CN113310662A (en) * 2021-04-30 2021-08-27 北京海纳川汽车部件股份有限公司 Test method, platform and storage medium for automobile lamp
CN115641359A (en) * 2022-10-17 2023-01-24 北京百度网讯科技有限公司 Method, apparatus, electronic device, and medium for determining motion trajectory of object
CN115641359B (en) * 2022-10-17 2023-10-31 北京百度网讯科技有限公司 Method, device, electronic equipment and medium for determining movement track of object
CN116343187A (en) * 2023-03-23 2023-06-27 北京博宏科元信息科技有限公司 Intelligent matching method, device, equipment and medium for license plate numbers of electric bicycles
CN116343187B (en) * 2023-03-23 2023-11-28 北京博宏科元信息科技有限公司 Intelligent matching method, device, equipment and medium for license plate numbers of electric bicycles
CN116935659A (en) * 2023-09-12 2023-10-24 四川遂广遂西高速公路有限责任公司 High-speed service area bayonet vehicle auditing system and method thereof
CN116935659B (en) * 2023-09-12 2023-12-08 四川遂广遂西高速公路有限责任公司 High-speed service area bayonet vehicle auditing system and method thereof

Also Published As

Publication number Publication date
IL152637A0 (en) 2004-02-19
AU2003278585A1 (en) 2004-06-07
WO2004042673A3 (en) 2004-07-15


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP