WO2009052578A1 - Verification of identification of an image characteristic over multiple images - Google Patents

Verification of identification of an image characteristic over multiple images

Info

Publication number
WO2009052578A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
character
region
character region
prospective
Prior art date
Application number
PCT/AU2008/001577
Other languages
English (en)
Inventor
Subhash Challa
Duc Dinh Minh Vo
Suvorova Sofia
Original Assignee
Sensen Networks Pty Ltd
Priority date
Filing date
Publication date
Priority claimed from AU2007905816A0
Application filed by Sensen Networks Pty Ltd
Publication of WO2009052578A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/63Scene text, e.g. street names

Definitions

  • the present invention relates to techniques for identifying features of data sets and particularly, but not exclusively, to identifying the location of a character region within a larger image by reference to a plurality of images obtained over time.
  • a number of techniques exist to perform the first step including colour detection, signature analysis, edge detection, and so on. Any inclination from the horizontal line in the captured image is determined and the image rotated before it becomes ready for a character recognition module.
  • the image may also be further processed to remove noise.
  • a known histogram method may be used, where each character is labelled in the license plate image, and then each label is extracted. Each character in the plate is extracted in a single image and normalized prior to the recognition step.
  • the segmented characters are first normalized and then fed into a neural network for optical character recognition, for example a back-propagation feed-forward neural network consisting of two layers.
  • the neural network outputs are normalized and used as estimates of the a posteriori probability of each character.
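To make that normalisation step concrete, the following is a minimal Python sketch (not taken from the patent): a generic two-layer feed-forward network whose outputs are normalised with a softmax so they can be read as per-character posterior estimates. The character set, layer sizes and random weights are illustrative assumptions.

```python
import numpy as np

CHARSET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"  # assumed character set

def two_layer_ocr(x, W1, b1, W2, b2):
    """Feed-forward pass of a generic two-layer network over one segmented,
    normalised character image (flattened into vector x)."""
    h = np.tanh(W1 @ x + b1)            # hidden layer
    scores = W2 @ h + b2                # one raw score per character class
    # Normalise the outputs (softmax) so they sum to 1 and can be read as
    # estimates of the a posteriori probability of each character.
    e = np.exp(scores - scores.max())
    return e / e.sum()

# Illustrative usage with random weights; a trained network would be loaded instead.
rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 15 * 25, 64, len(CHARSET)
W1, b1 = rng.normal(size=(n_hidden, n_in)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(n_out, n_hidden)), np.zeros(n_out)
posterior = two_layer_ocr(rng.random(n_in), W1, b1, W2, b2)
print(CHARSET[int(np.argmax(posterior))], float(posterior.max()))
```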
  • the quality of the acquired image must be of a level that allows a relatively clear photograph to be taken to increase the accuracy of the OCR techniques employed. This tends to be achievable on open roads during daylight hours or under well lit street lighting. However, there are many situations where such optimum conditions are not available, such as at night time on roads with no or poor street lighting, during wet weather, in car parks, under bridges or in poorly lit tunnels. In such conditions, such prior art techniques generally require the use of relatively expensive cameras which can operate in a variety of lighting conditions, and/or the use of additional vehicle sensors to trigger lighting or flashes at the time of taking the photograph to illuminate the subject of the image being acquired.
  • License plate recognition or automatic number plate recognition (ANPR) is thus the use of video captured images for automatic identification of a vehicle through its license plate.
  • the applications are numerous and include surveillance, theft prevention, parking lot attendance, identification of stolen vehicles, traffic laws enforcement, border crossing and toll roads. While other automatic vehicle identification methods are in use, such as transponders, bar-coded labels and radio-frequency tags, or proposed, such as electronic license plates, license plate reading remains, and is likely to remain, the way a car is identified.
  • LPR attempts to make the reading automatic by processing sets of images captured by cameras.
  • LPR systems comprise a series of steps that consist of detecting a vehicle, triggering the captures of images of that vehicle and treating those images for recognition of the characters in the license plate.
  • Image analysis in LPR has three parts: (i) localization (extraction) of the license plate from the image, (ii) segmentation (extraction) of characters from the localized license plate region, and (iii) recognition of those characters. These steps are performed automatically by software and require intelligent algorithms to achieve high reliability.
  • Plate localization is an important step in LPR. It aims to locate the license plate of the vehicle in an image. Although the human eye can immediately visually locate a license plate in a still or moving image, it is not a trivial task for a computer program to do so in real time.
  • the present invention provides a system for identifying a character region in an image, the system comprising: an image capture device for capturing a plurality of images in time series; and data processing means adapted to identify in each image one or more prospective character regions, and adapted to, for a first prospective character region identified in a first image, determine from a predetermined expected target trajectory an expected image area in which the character region would be expected to occur in a subsequent image and, should a second prospective character region arise in the expected image area of the subsequent image, associate the first prospective character region with the second prospective character region.
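A rough sketch of that association step, under simplifying assumptions not stated in the patent: axis-aligned bounding boxes, a constant per-frame displacement standing in for the predetermined expected target trajectory, and an arbitrary `gate` factor defining the expected image area.

```python
from dataclasses import dataclass

@dataclass
class Region:
    x: float   # centre column of the prospective character region
    y: float   # centre row
    w: float   # width
    h: float   # height

def expected_area(region, dxdy=(0.0, 25.0), gate=1.5):
    """Predict the image area in which the region should appear in the next frame.
    dxdy is the assumed per-frame displacement taken from the expected trajectory,
    and gate inflates the region's own size to form the search area."""
    cx, cy = region.x + dxdy[0], region.y + dxdy[1]
    return (cx - gate * region.w / 2, cy - gate * region.h / 2,
            cx + gate * region.w / 2, cy + gate * region.h / 2)

def associate(first, candidates, dxdy=(0.0, 25.0), gate=1.5):
    """Return the candidate region of the subsequent image that falls inside the
    expected area, i.e. the second prospective region to associate with the first."""
    x0, y0, x1, y1 = expected_area(first, dxdy, gate)
    for cand in candidates:
        if x0 <= cand.x <= x1 and y0 <= cand.y <= y1:
            return cand
    return None

# Example: a plate-like region in frame 1 and two candidate regions in frame 2.
r1 = Region(x=320, y=200, w=120, h=40)
print(associate(r1, [Region(300, 500, 110, 38), Region(318, 226, 118, 39)]))
```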
  • Each region of the segment may for example comprise a 3 pixel by 3 pixel region or block.
  • the plurality of regions of the segment preferably comprises all unique 3 pixel by 3 pixel blocks of the segment.
  • eight such predetermined region templates are preferably used of the type shown in Figure 8 attached.
  • Alternative embodiments may comprise a region of any suitable size and shape, and may for example involve selection of appropriate predetermined region templates to exploit vertical and horizontal contrast.
  • the plurality of data sets may be acquired from one sensor at different times, or acquired from a plurality of sensors at different times.
  • the image data sets may be image data sets acquired from one or more cameras. Additionally, data fusion in accordance with the teachings of International Patent Application No. PCT/AU2007/001274, the content of which is incorporated herein by reference, may be applied in some embodiments of the present invention.
  • aspects of the invention comprise systems and apparatus for carrying out the above described method aspects.
  • the systems may comprise cameras or other sensors for acquiring sensor acquired data sets and apparatus for performing the above described method steps.
  • the apparatus may comprise programmable computers.
  • FIG. 1 illustrates license plate region (LPR) determination
  • Figure 3 illustrates character segmentation
  • Figure 4 is a photograph illustrating a typical scene requiring license plate recognition
  • Figure 5 is a schematic of license plate tracking between image frames
  • Figure 6 illustrates verification of a trajectory of a likely license plate location over multiple images relative to an expected trajectory
  • Figure 7 illustrates conversion of a colour (RGB) license plate image to a greyscale image, and then to a binary black and white version
  • Figure 8 illustrates eight suitable predetermined 3 by 3 region templates
  • Figure 9 illustrates operation of the Sobel operator
  • Figure 15 illustrates edge images of Fig 14 after out plate area removal
  • Figure 16 illustrates the images of Fig 15 cropped to the plate boundaries
  • Figure 17 illustrates the plate character segmentation algorithm
  • Fig. 18 (a), (b) and (c) show the normalized Y projection function and corresponding black and white plate images
  • Figure 19 shows the normalized X projection function and its corresponding black and white plate image
  • Fig 23 illustrates horizontal, vertical and diagonal median filtering masks
  • Fig 24 illustrates a finally cropped plate image, before and after Otsu binary thresholding.
  • a machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer).
  • a machine-readable medium includes read only memory ("ROM"); random access memory ("RAM"); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.); etc.
  • program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • the invention may be practiced with other computer system configurations, including hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like.
  • the invention may be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
  • program modules may be located in both local and remote memory storage devices.
  • the personal computer 20 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 49.
  • the remote computer 49 may be another personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the personal computer 20, although only a memory storage device 50 has been illustrated in FIG. 1.
  • the logical connections depicted in FIG. 1 include a local area network (LAN) 51 and a wide area network (WAN).
  • the following describes a preferred embodiment of the invention which involves identifying one or more features in the form of alphanumeric characters of a vehicle license plate represented in a plurality of sensor acquired data sets in the form of digital image files.
  • the plurality of digital image files are taken of the same subject, in this embodiment being a vehicle license plate, by a single camera.
  • the license plate is extracted, the characters segmented, and each character of each image determined.
  • the image may contain multiple true plates (due to the presence of multiple vehicles).
  • the present invention recognises that an effective way of dealing with these uncertainties is to introduce a tracker, in which increased accuracy can be provided by assessing a series of images and comparing the location and trajectory of the license plate in the image sequence.
  • LPR: License Plate Recognition
  • the frame rate of the camera is sufficient to capture at least two frames of one car (or more, depending on the actual speed of the car and the frame rate of the camera) so that a track can be built.
  • Figure 3 is a schematic of license plate tracking between image frames.
  • the initiation process does not include all N regions.
  • a possible window in the image is defined as shown in Figure 4. Assuming oncoming traffic (traffic moving towards the camera), the regions that are below the "Top Limit" line are used to initiate the tracks. For the steps below we assume that the N selected regions satisfy this criterion.
  • Figure 5 illustrates the track update procedure
  • track 34 would have four entries after assessment of images 30A-D revealed the character regions 32A-D followed trajectory 24. • have gone below the "Bottom Limit" line (refer to Figure 4)
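An illustrative sketch of the windowing rules above; the particular row values for the "Top Limit" and "Bottom Limit" lines are assumptions for a hypothetical camera installation.

```python
TOP_LIMIT = 150      # assumed row of the "Top Limit" line for this installation
BOTTOM_LIMIT = 900   # assumed row of the "Bottom Limit" line

def can_initiate(region_row):
    """Only regions below the Top Limit line may start a new track (oncoming traffic)."""
    return region_row > TOP_LIMIT

def should_terminate(track):
    """A track is dropped once its latest entry has gone below the Bottom Limit line."""
    return bool(track) and track[-1] > BOTTOM_LIMIT

track = []
for row in [180, 320, 470, 640, 930]:    # row of the associated region in each frame
    if not track and can_initiate(row):
        track.append(row)                # initiate a new track
    elif track:
        track.append(row)                # update the existing track
    if should_terminate(track):
        print("track complete with", len(track), "entries")
        break
```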
  • the present embodiment increases the true plate detection probability.
  • PD: probability of detecting a true LP region in a single frame
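One hedged way to see why assessing several frames raises the overall detection probability (an illustrative model assuming independent frames, not a statement from the patent): if PD is the per-frame detection probability, the chance of detecting the plate in at least one of N frames is 1 - (1 - PD)^N.

```python
def track_detection_probability(pd, n_frames):
    """Probability of detecting a true plate region in at least one of n frames,
    assuming (purely for illustration) independent detections per frame."""
    return 1.0 - (1.0 - pd) ** n_frames

for n in (1, 2, 4, 5):
    print(n, round(track_detection_probability(0.7, n), 4))
# 1 0.7, 2 0.91, 4 0.9919, 5 0.9976 -- more frames give a higher overall probability
```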
  • preferred embodiments of the invention preferably involve subsequent character segmentation and recognition.
  • the method can be used with two, three, or more than four images; however, the inventors have found that in some cases acceptable accuracy can be achieved using four or five images.
  • where image quality is relatively low, for example where there is poor contrast in the image due to under- or over-exposure, or where the vehicle is moving quickly relative to the camera, more images may be required. For example, when attempting to determine the characters on a license plate of a vehicle moving at over 50 km per hour relative to the camera, fifty images may be taken at high speed for use in the present embodiment.
  • the algorithm uses a pattern checking technique based on spatial pixel distribution and defines a score for a certain segment of the image indicating a likelihood of it being a LP region. Based on the obtained score, several possible LP regions are extracted and passed on to a module to determine tracks formed by those regions. Notably, multiple license plates present in a single image may be handled by this technique.
  • the plate is converted into a binary (black and white) image (1-0 image) using a hysteresis threshold method.
  • Figure 7 illustrates conversion of a colour (RGB) license plate image to a greyscale image, and then to a binary black and white version.
  • Figure 2 illustrates an example of such a binary thresholded image.
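A rough sketch of that conversion chain (RGB to greyscale to binary) with a simple hysteresis threshold; the luminance weights, the two threshold levels and the connected-component implementation are assumptions, as the patent does not specify them.

```python
import numpy as np
from scipy import ndimage

def to_greyscale(rgb):
    """Luminance conversion with commonly used weights (the patent does not specify them)."""
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def hysteresis_binarise(grey, low=100, high=160):
    """Binary (1-0) image via a simple hysteresis threshold: pixels above `high` are
    kept, and pixels above `low` are kept only if connected to a pixel above `high`."""
    weak = grey > low
    strong = grey > high
    labels, n = ndimage.label(weak)
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True
    keep[0] = False                        # background label
    return keep[labels].astype(np.uint8)   # 1 = white, 0 = black

rgb = np.random.rand(40, 120, 3) * 255     # stand-in for a located plate image
binary = hysteresis_binarise(to_greyscale(rgb))
print(binary.shape, int(binary.min()), int(binary.max()))
```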
  • each 3x3 block of pixels can be represented as a vector [p1 p2 ... p9], where each pi is either 0 or 1, and where 0 corresponds to a black pixel and 1 corresponds to a white pixel.
  • P can be seen as an index which uniquely identifies the neighbourhood, for example by reading the nine binary pixel values p1...p9 as a 9-bit number. P varies from 0 to 511, so that there are 512 possible neighbourhoods of 3 by 3 size.
  • the index P for each neighborhood is given in the titles in Figure 8. As can be seen the particular reference neighbourhoods selected are those with strong contrast caused by either vertical or horizontal edges in the image.
  • the detection algorithm is then as follows. • Divide the original image into small (20x40 pixel) regions in an overlapping manner, such that each region overlaps the previous one by 50%.
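A rough sketch of this detection loop: 20x40 windows slid with 50% overlap, each scored by the fraction of its 3x3 neighbourhoods whose index falls in a reference set. The reference indices and score threshold below are placeholders, not the eight templates of Figure 8.

```python
import numpy as np

# Placeholder reference indices; the real system would use the indices of the
# eight high-contrast 3x3 templates shown in Figure 8.
REFERENCE_INDICES = {7, 56, 73, 146, 219, 292, 438, 448}

def window_score(window, ref=REFERENCE_INDICES):
    """Fraction of 3x3 neighbourhoods in a binary window whose index P is in `ref`."""
    hits = total = 0
    for r in range(window.shape[0] - 2):
        for c in range(window.shape[1] - 2):
            bits = window[r:r + 3, c:c + 3].ravel()
            p = int((bits << np.arange(9)).sum())   # 9-bit neighbourhood index
            hits += p in ref
            total += 1
    return hits / total

def candidate_regions(binary, win_h=20, win_w=40, threshold=0.05):
    """Slide a 20x40 window with 50% overlap and keep windows scoring above threshold."""
    regions = []
    for top in range(0, binary.shape[0] - win_h + 1, win_h // 2):
        for left in range(0, binary.shape[1] - win_w + 1, win_w // 2):
            if window_score(binary[top:top + win_h, left:left + win_w]) > threshold:
                regions.append((top, left, win_h, win_w))
    return regions

binary = (np.random.rand(120, 320) > 0.5).astype(np.uint16)
print(len(candidate_regions(binary)))
```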
  • license plate regions may be verified and/or discarded, cropped to a plate region, thresholded, cleaned, character segmented, character cropped and/or undergo optical character recognition.
  • regions identified by the present embodiment as being license plate regions are passed to the tracker module described above with reference to Figures 4 to 7.
  • the following provides details of particularly suitable plate cropping techniques and character segmentation techniques which may be used in some embodiments of the present invention.
  • the image segment will normally contain both the license plate area and some background area, as it is unlikely that the image segment will be of identical size to the license plate and be precisely aligned with it. Examples are shown in Fig. 12.
  • once the license plate region has been identified in images such as those in Figure 12, it remains necessary to more accurately identify the bounds of the license plate itself.
  • the present embodiment for plate cropping is based on the fact that the density of edges in the license plate region is normally much higher than in the non-plate regions.
  • an edge image is first obtained by applying a Sobel operator; examples are shown in Fig. 13. Then, any flat lines that are longer than a given threshold (normally set slightly larger than the maximum character width, which can be predetermined for a given installation of an imaging device) are removed. This is helpful in removing non-character line noise, with examples shown in Fig. 14. Finally, edge removal is carried out based on the horizontal and vertical projections; examples are shown in Figs. 15 and 16.
  • the Sobel detector is a 2D spatial gradient operator that emphasizes high spatial frequency components when applied to a greyscale image. This operator is used to detect the edges, and its operation is illustrated in Fig. 9.
  • the masks of Figure 9 are designed to respond maximally to edges running vertically and horizontally relative to the pixel grid, one mask for each of the two perpendicular orientations.
  • the masks can be applied separately to the input image, to produce separate measurements of the gradient component in each orientation (Gx and Gy). These can then be combined together to find the absolute magnitude of the gradient at each point and the orientation of that gradient.
  • the gradient magnitude is given by |G| = sqrt(Gx^2 + Gy^2), although typically an approximate magnitude is computed using |G| ≈ |Gx| + |Gy|, which is much faster to compute.
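A brief sketch of the Sobel edge computation just described, using the standard 3x3 Sobel kernels (assumed here to correspond to the masks of Figure 9) and the faster |Gx| + |Gy| approximation.

```python
import numpy as np
from scipy import ndimage

# Standard Sobel masks, responding to vertical and horizontal edges respectively.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
SOBEL_Y = SOBEL_X.T

def sobel_edges(grey):
    """Approximate gradient magnitude |Gx| + |Gy| of a greyscale image."""
    gx = ndimage.convolve(grey.astype(float), SOBEL_X)
    gy = ndimage.convolve(grey.astype(float), SOBEL_Y)
    return np.abs(gx) + np.abs(gy)   # faster stand-in for sqrt(gx**2 + gy**2)

edges = sobel_edges(np.random.rand(40, 120) * 255)
print(edges.shape, float(edges.max()))
```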
  • Out plate area removal is then conducted.
  • this technique is based on the fact that the density of edges in the plate area is normally much higher than in the non-plate area.
  • the preprocessed edge image is first projected in the Y (vertical) direction and then normalized by its maximum value.
  • An example of the normalized Y projection of a plate (plate (c) of the examples in the following section) is shown in Fig. 10, which is the normalized Y projection function of the plate illustrated in Figure 14c.
  • row 2 exceeds this threshold so that rows 0 and 1 can be excluded as being non-plate region above the license plate region.
  • rows 31 to 35 can be excluded as being non-plate region below the license plate region.
  • the leftmost columns from column 0 to about column 57, can be excluded as being non-plate region to the left of the license plate region by reference to the threshold value of 0.4.
  • the rightmost columns from about column 170 and all columns farther right can be excluded as being non-plate region to the right of the license plate region by reference to the threshold value of 0.4.
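A minimal sketch of this projection-based cropping, using the 0.4 threshold mentioned above; applying the same threshold to both rows and columns is an assumption.

```python
import numpy as np

def crop_to_plate(edge_img, threshold=0.4):
    """Keep only the span of rows/columns whose normalised projection exceeds the
    threshold; rows/columns outside that span are treated as non-plate region."""
    y_proj = edge_img.sum(axis=1).astype(float)
    x_proj = edge_img.sum(axis=0).astype(float)
    rows = np.where(y_proj / y_proj.max() > threshold)[0]
    cols = np.where(x_proj / x_proj.max() > threshold)[0]
    return edge_img[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]

# Synthetic edge image: sparse background with a denser plate-like block.
edge_img = (np.random.rand(36, 200) > 0.95).astype(np.uint8)
edge_img[2:31, 58:170] |= (np.random.rand(29, 112) > 0.4).astype(np.uint8)
print(crop_to_plate(edge_img).shape)   # roughly (29, 112)
```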
  • Figure 12 illustrates three examples of FFT Located Plate Images.
  • Figure 13 illustrates edge images obtained by using a Sobel operator upon the images of Figure 12.
  • Figure 14 illustrates edge images after long line removal.
  • Figure 15 illustrates edge images after out plate area removal.
  • Figure 16 illustrates the images cropped to the plate boundaries identified by this technique.
  • Character segmentation plays a very important role in a plate recognition system. Since there are many kinds of plates in different states or different countries, and each image of a plate can be obtained under totally different illumination conditions, processing and segmenting these plate images becomes extremely varied and difficult. The following sets out a robust character segmentation algorithm, which includes edge removal, long line removal, character-grab-based top and bottom position estimation, etc.
  • the structure of the plate character segmentation algorithm is shown in Fig. 17.
  • the input image is from the edge detection based plate crop function explained above with reference to Figures 9 to 16.
  • the algorithm of Figure 17 is thus provided with a portion of an image which has been cropped to conform to the edges of an identified license plate region.
  • the plate frame edge is removed based on the binary image projected in the X and Y directions.
  • a pre-long-line-removal operation is applied to the frame-edge-removed image to separate possible characters connected to the boundary or background.
  • the "First Character Grab Cut and Non-Character Components Removal" operation is then applied. The outputs of this operation are the median top and bottom cut-off positions, which can remove some incorrectly connected components (such as the bolts used to fix the plate to the car), together with the median character height, median character width and median character size, which are used in the second and final cuts as reference sizes.
  • the "Second Character Grab Cut” operation will output "left and right” cut off positions.
  • the final operation is "Final Character Grab cut and Character Recognition", which will output recognized plate string.
  • Edge removal algorithm: some plates have frames around the plate characters. In order to segment these plate characters properly after edge-detection-based plate cropping, it is necessary to remove the plate edges.
  • the basic idea of removing these frame edges is to project the black and white binary image in the X and Y directions, then cut off edges from the top, bottom, left and right based on these projection functions.
  • the preprocessed black and white image is first projected in the Y (vertical) direction and then normalized by its maximum value.
  • the top edge is removed from the first row down to the row at which the projection function value goes just below the threshold, which is currently set to 0.75.
  • the removal process likewise cuts from the bottom row, left column and right column in to the position (row or column) at which the projection function value falls just below the threshold.
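A minimal sketch of this frame-edge removal: border rows and columns are cut inwards while the normalised projection stays at or above the 0.75 threshold; treating all four sides identically is an assumption.

```python
import numpy as np

def remove_frame_edges(bw, threshold=0.75):
    """Cut rows/columns in from each border while the normalised projection of the
    binary image stays at or above the threshold (frame lines project as near-solid runs)."""
    def trim(proj):
        proj = proj / proj.max()
        start, end = 0, len(proj)
        while start < end and proj[start] >= threshold:
            start += 1                  # cut until the value goes just below the threshold
        while end > start and proj[end - 1] >= threshold:
            end -= 1
        return start, end

    y0, y1 = trim(bw.sum(axis=1).astype(float))
    x0, x1 = trim(bw.sum(axis=0).astype(float))
    return bw[y0:y1, x0:x1]

# Synthetic plate: character-like noise surrounded by a one-pixel frame.
bw = np.zeros((30, 100), dtype=np.uint8)
bw[1:-1, 1:-1] = (np.random.rand(28, 98) > 0.6).astype(np.uint8)
bw[0, :] = bw[-1, :] = bw[:, 0] = bw[:, -1] = 1
print(remove_frame_edges(bw).shape)    # (28, 98): the frame rows/columns are removed
```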
  • An example of the normalized Y projection of a plate and its corresponding images before and after top and bottom cut-off is shown in Fig. 18 (a), (b) and (c), which shows the normalized Y projection function and corresponding black and white plate images. Since the bottom does not meet the cut-off condition, no bottom rows are cut off in this case. Similar considerations apply to the left and right edges, and Figure 19 shows the normalized X projection function and its corresponding black and white plate image for this purpose.
  • the algorithm of Figure 17 moves to the First Character Grab Cut.
  • in the first character grab cut, any components that are too small or too big are removed first; the "too big" and "too small" thresholds are set as hard thresholds in this cut.
  • any components whose width-to-height ratio is too large (too fat to be a character) or too small (too thin to be a character) are also removed; the width-to-height ratio thresholds are likewise set as hard thresholds in this cut.
  • An example of black and white image before and after non-character components removal is shown in Figs. 21a and 21b, respectively.
  • the outputs of the First Character Grab Cut are the top and bottom plate character cut positions, median character height, median character width and median character size of plate characters.
  • the top character cut position is calculated as the median value of top character positions of all possible character candidates
  • the bottom character cut position is calculated as the median value of bottom character positions of all possible character candidates.
  • the output median character height, median character width and median character size are the median values of the heights, widths and sizes of all possible character candidates and these output values will be used as reference for the final character grab cut.
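A rough sketch of the component filtering and median statistics described in the preceding items, using connected-component labelling; every numeric threshold below is an illustrative placeholder, since the patent calls them "hard thresholds" without giving values.

```python
import numpy as np
from scipy import ndimage

def first_character_grab_cut(bw, min_area=30, max_area=800, min_ratio=0.15, max_ratio=1.2):
    """Label connected components, drop those that are too small/too big or whose
    width/height ratio is implausible for a character, and return the median top and
    bottom cut positions plus the median character height, width and size."""
    labels, _ = ndimage.label(bw)
    tops, bottoms, heights, widths, sizes = [], [], [], [], []
    for sl in ndimage.find_objects(labels):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        area = int(bw[sl].sum())
        if not (min_area <= area <= max_area):
            continue                              # too small or too big
        if not (min_ratio <= w / h <= max_ratio):
            continue                              # too thin or too fat to be a character
        tops.append(sl[0].start); bottoms.append(sl[0].stop)
        heights.append(h); widths.append(w); sizes.append(area)
    return (int(np.median(tops)), int(np.median(bottoms)),
            int(np.median(heights)), int(np.median(widths)), int(np.median(sizes)))

# Illustrative plate-like image: five character-sized blobs plus a small bolt-like blob.
bw = np.zeros((30, 120), dtype=np.uint8)
for left in (10, 30, 50, 70, 90):
    bw[6:24, left:left + 10] = 1
bw[2:4, 115:118] = 1
print(first_character_grab_cut(bw))    # (6, 24, 18, 10, 180); the bolt is filtered out
```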
  • the output of the "Second Character Grab Cut” is the left and right cut off positions.
  • the top and bottom part of preprocessed black and white plate image will be cut off based on the output top and bottom character cut positions of the "First Character Grab Cut".
  • An example of black and white image plate before and after top and bottom cut off is shown in Fig. 22 (a) and 22(b).
  • Preferred embodiments of the invention realise several advantages.
  • the image quality does not need to be of as high a standard as with prior art techniques. Therefore, the additional lighting and ideal camera placement that may be required to increase the accuracy of prior art methods are not necessary in the preferred embodiment.
  • it is not necessary to use dedicated license plate image capture cameras with the present embodiment but instead images captured by existing devices, such as closed circuit television (CCTV) cameras, or highway monitoring cameras, may be used.
  • CCTV: closed-circuit television
  • the preferred embodiment is therefore more cost effective and simpler to install and/or set up compared with prior art methods and equipment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

Identification of a character region in an image involves identifying, in each of a plurality of images captured in time series, one or more prospective character regions. For a first prospective character region identified in a first image, a predetermined expected target trajectory is used to determine where the character region would be expected to occur in a subsequent image. If a second prospective character region arises in the expected area of the subsequent image, the first prospective character region is associated with the second to form or extend a 'track'. Prospective character regions can thus be tracked over multiple image frames, in conformity with the expected trajectory, providing a significant additional indicator for determining whether a prospective character region is in fact a genuine character region or a spurious artefact.
PCT/AU2008/001577 2007-10-24 2008-10-24 Verification of identification of an image characteristic over multiple images WO2009052578A1 (fr)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
AU2007905816A AU2007905816A0 (en) 2007-10-24 Locating a character region in an image
AU2007905816 2007-10-24
AU2007905815 2007-10-24
AU2007905847 2007-10-24
AU2007905815A AU2007905815A0 (en) 2007-10-24 Locating a character region in an image (11)
AU2007905847A AU2007905847A0 (en) 2007-10-24 Verification of identification of an image characteristic over multiple images

Publications (1)

Publication Number Publication Date
WO2009052578A1 (fr) 2009-04-30

Family

ID=40578977

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2008/001577 WO2009052578A1 (fr) 2007-10-24 2008-10-24 Verification of identification of an image characteristic over multiple images

Country Status (1)

Country Link
WO (1) WO2009052578A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1085456B1 (fr) * 1999-09-15 2006-11-22 Siemens Corporate Research, Inc. Character segmentation method for vehicle license plate recognition
US20040057599A1 (en) * 2002-06-27 2004-03-25 Kabushiki Kaisha Toshiba Image processing apparatus and method
WO2006006149A2 (fr) * 2004-07-08 2006-01-19 Hi-Tech Solutions Ltd Character recognition system and method

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104102909A (zh) * 2014-07-23 2014-10-15 中科联合自动化科技无锡有限公司 Vehicle feature localization and matching method based on multiple visual information
CN104102909B (zh) * 2014-07-23 2017-03-15 中科联合自动化科技无锡有限公司 Vehicle feature localization and matching method based on multiple visual information
CN112785550A (zh) * 2020-12-29 2021-05-11 浙江大华技术股份有限公司 Image quality value determination method and apparatus, storage medium and electronic apparatus
CN112785550B (zh) * 2020-12-29 2024-06-04 浙江大华技术股份有限公司 Image quality value determination method and apparatus, storage medium and electronic apparatus
US11948374B2 (en) 2021-07-20 2024-04-02 Walmart Apollo, Llc Systems and methods for detecting text of interest

Similar Documents

Publication Publication Date Title
Panahi et al. Accurate detection and recognition of dirty vehicle plate numbers for high-speed applications
US9111169B2 (en) Method and system of identifying one or more features represented in a plurality of sensor acquired data sets
CN106600977B (zh) Illegal parking detection method and system based on multi-feature recognition
EP2031571B1 (fr) Vehicle type determination device, method and program
US8798314B2 (en) Detection of vehicles in images of a night time scene
KR101038669B1 (ko) Image-based non-intrusive vehicle recognition system and vehicle recognition method
CN110619279B (zh) Tracking-based instance segmentation method for road surface traffic markings
Huang et al. An intelligent strategy for checking the annual inspection status of motorcycles based on license plate recognition
JP4587038B2 (ja) Vehicle position detection method, and vehicle speed detection method and apparatus
CN109800752B (zh) A machine vision-based vehicle license plate character segmentation and recognition algorithm
CN111382704A (zh) Deep learning-based method, apparatus and storage medium for determining vehicle lane-line-crossing violations
CN101937508A (zh) A license plate localization and recognition method based on high-definition images
Yousef et al. SIFT based automatic number plate recognition
Johnson et al. Number-plate matching for automatic vehicle identification
Chen et al. Toward community sensing of road anomalies using monocular vision
CN104700068A (zh) An SVM-based driver seat belt detection method
WO2009052578A1 (fr) Verification of identification of an image characteristic over multiple images
Munajat et al. Vehicle detection and tracking based on corner and lines adjacent detection features
JP3291873B2 (ja) License plate recognition device
JP2004234486A (ja) Wrong-way vehicle detection device
CN116682268A (zh) Machine vision-based portable urban road vehicle violation inspection system and method
JP2893814B2 (ja) Plate extraction device in an automatic license number reading apparatus
JP2910130B2 (ja) Automatic license number reading apparatus
JP3117497B2 (ja) License plate recognition device
Adorni et al. License-plate recognition for restricted-access area control

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08841790

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08841790

Country of ref document: EP

Kind code of ref document: A1