WO2018028230A1 - Deep learning-based method and device for segmenting vehicle license plate characters, and storage medium - Google Patents

Deep learning-based method and device for segmenting vehicle license plate characters, and storage medium

Info

Publication number
WO2018028230A1
WO2018028230A1 (PCT/CN2017/080128; CN2017080128W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
license plate
analyzed
label
neural network
Prior art date
Application number
PCT/CN2017/080128
Other languages
French (fr)
Chinese (zh)
Inventor
谷爱国
温炜
许健
李岩
万定锐
Original Assignee
东方网力科技股份有限公司
Priority date
Filing date
Publication date
Application filed by 东方网力科技股份有限公司
Publication of WO2018028230A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/62 Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625 License plates

Definitions

  • The present invention relates to the field of data recognition and, in particular, to a deep learning-based method, device and storage medium for license plate character segmentation.
  • License plate recognition is commonly deployed at traffic checkpoints, electronic-police cameras, toll stations and parking lots.
  • Traditional license plate recognition algorithms include a license plate character segmentation step.
  • License plate character segmentation divides the license plate image region to obtain all the independent character regions in the image, mainly by taking horizontal and vertical projections of the plate. Specifically: the license plate image is preprocessed to obtain a binarized image; the image is scanned row by row from top to bottom and from bottom to top to obtain the height range of the plate characters; that height range is then scanned from left to right to determine the width range of each character; finally, each character's width range is scanned top-down and bottom-up again to obtain a more precise height range for each character.
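For context, the traditional projection-based procedure described above can be sketched as follows (a minimal illustration only: the function name, the 0/1 input convention and the `min_col_ink` parameter are assumptions of this sketch, and a real implementation would also refine each character's height range as the text describes):

```python
import numpy as np

def project_segment(binary_plate, min_col_ink=1):
    """Projection-based character segmentation (the traditional baseline
    described above). `binary_plate` is a 2-D array of 0/1 pixels where
    1 denotes character strokes. Returns a list of (col_start, col_end)
    ranges, one per character."""
    # Horizontal projection: rows containing stroke pixels give the
    # overall character height range.
    row_sums = binary_plate.sum(axis=1)
    rows = np.flatnonzero(row_sums > 0)
    top, bottom = rows[0], rows[-1] + 1

    # Vertical projection within that band: runs of non-empty columns
    # are taken as individual characters.
    col_sums = binary_plate[top:bottom].sum(axis=0)
    chars, start = [], None
    for j, s in enumerate(col_sums):
        if s >= min_col_ink and start is None:
            start = j
        elif s < min_col_ink and start is not None:
            chars.append((start, j))
            start = None
    if start is not None:
        chars.append((start, len(col_sums)))
    return chars

# Toy 5x10 plate with two stroke blobs standing in for characters.
plate = np.zeros((5, 10), dtype=int)
plate[1:4, 1:3] = 1   # first "character"
plate[1:4, 5:8] = 1   # second "character"
chars = project_segment(plate)
```

As the background notes, this approach breaks down when characters touch or the image is noisy, since the column projection then has no clean empty runs between characters.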
  • An object of the present invention is to provide a deep learning-based method, device and storage medium for license plate character segmentation that can effectively segment characters on plates with touching characters, heavy noise pollution, or defacement, thereby obtaining more accurate license plate character regions and improving the accuracy of license plate character segmentation.
  • An embodiment of the present invention provides a deep learning-based method for license plate character segmentation, the method comprising:
  • The method further includes: analyzing an image of the license plate to be analyzed based on the neural network model, to obtain a label image corresponding to the license plate to be analyzed.
  • Analyzing the image of the license plate to be analyzed based on the neural network model to obtain the corresponding label image includes:
  • processing the mask image to obtain a label image of the license plate to be analyzed.
  • Processing the image of the license plate to be analyzed according to the probability of the first label value and the probability of the second label value corresponding to each pixel, to obtain a mask image, includes:
  • calculating the mask image according to the following formula, where P1(i, j) is the probability of the first label value corresponding to the pixel in the i-th row and j-th column of the image of the license plate to be analyzed, and P0(i, j) is the probability of the second label value corresponding to that pixel.
  • Processing the mask image to obtain a label image of the license plate to be analyzed includes:
  • Determining, according to the number, the division positions between characters on the license plate to be analyzed includes:
  • if the value corresponding to the number is greater than a preset threshold, taking that value as the split position of the currently adjacent characters.
  • Marking the original image to obtain a label image comprises:
  • An area marked with the first label value and an area marked with the second label value constitute the label image.
  • An embodiment of the present invention provides a deep learning-based license plate character segmentation device, the device including:
  • a marking unit configured to acquire an original image of the license plate and mark the original image to obtain a label image;
  • a constructing unit configured to construct a neural network from the original image and the label image;
  • a classified image generating unit configured to classify the regions in the original image to obtain a classified image, based on the neural network and a softmax regression loss function;
  • a determining unit configured to compare the classified image with the label image and determine whether the classified image is consistent with the label image;
  • a training unit configured to train, based on the classified image, to obtain a neural network model if the two are inconsistent.
  • the device further includes:
  • the label image generating unit is configured to analyze an image of the license plate to be analyzed based on the neural network model to obtain a label image corresponding to the license plate to be analyzed.
  • the label image generating unit includes:
  • a probability acquisition unit configured to pass the image of the license plate to be analyzed to the neural network model, to obtain a probability of a first label value and a probability of a second label value corresponding to each pixel of the image of the license plate to be analyzed;
  • a first processing unit configured to process, according to a probability of the first label value corresponding to each pixel and a probability of the second label value, an image of the license plate to be analyzed to obtain a mask image
  • a second processing unit configured to process the mask image to obtain a label image of the license plate to be analyzed.
  • the first processing unit includes:
  • the mask image is calculated according to the following formula, where:
  • P1(i, j) is the probability of the first label value corresponding to the pixel in the i-th row and j-th column of the image of the license plate to be analyzed;
  • P0(i, j) is the probability of the second label value corresponding to the pixel in the i-th row and j-th column of the image of the license plate to be analyzed;
  • i = 1, 2, 3, …, M;
  • j = 1, 2, 3, …, N;
  • M is the height of the image of the license plate to be analyzed;
  • N is the width of the image of the license plate to be analyzed.
  • the second processing unit includes:
  • a statistical unit configured to count the number of pixels in each column of the mask image whose pixel value is the first pixel value
  • a determining unit configured to determine, according to the number, a split position between characters on the license plate to be analyzed
  • a label image acquiring unit configured to obtain a label image of the image of the license plate to be analyzed according to the split position between the characters on the license plate to be analyzed.
  • the determining unit includes:
  • a comparing unit configured to compare the value corresponding to the number with a preset threshold
  • the segmentation position determining unit is configured to use the value corresponding to the number as the segmentation position of the currently adjacent character in a case where the value corresponding to the number is greater than a preset threshold.
  • the marking unit comprises:
  • a first tag value marking unit configured to mark an area between adjacent characters of the original image as a first tag value
  • a second tag value marking unit configured to mark other areas than the area between the adjacent characters as a second tag value
  • a constituting unit configured to form the label image from the area marked with the first label value and the area marked with the second label value.
  • An embodiment of the present invention provides a storage medium, the storage medium including a set of instructions that, when executed, cause at least one processor to perform operations including: acquiring an original image of the license plate, and marking the original image to obtain a label image;
  • Embodiments of the invention provide a deep learning-based method, device and storage medium for license plate character segmentation.
  • The original image is marked to obtain a label image; a neural network is constructed from the original image and the label image; based on the neural network and a softmax regression loss function, the regions in the original image are classified to obtain a classified image; and the classified image is compared with the label image. If the classified image is inconsistent with the label image, training is performed based on the classified image to obtain a neural network model.
  • The neural network model then produces the label image from the original image, thereby obtaining a more accurate license plate character area and improving the accuracy of license plate character segmentation.
  • FIG. 1 is a flowchart of a deep learning-based license plate character segmentation method according to Embodiment 1 of the present invention;
  • FIG. 2 is a schematic diagram of a neural network structure according to Embodiment 1 of the present invention;
  • FIG. 3 is a schematic diagram of the neural network prediction network corresponding to FIG. 2 according to Embodiment 1 of the present invention;
  • FIG. 4 is a schematic diagram of a deep learning-based license plate character segmentation method according to Embodiment 1 of the present invention;
  • FIG. 5 is a flowchart of step S106 in another deep learning-based license plate character segmentation method according to Embodiment 1 of the present invention;
  • FIG. 6 is a flowchart of step S101 in a deep learning-based license plate character segmentation method according to Embodiment 1 of the present invention;
  • FIG. 7 is a schematic diagram of a deep learning-based license plate character segmentation apparatus according to Embodiment 2 of the present invention.
  • Embodiments of the invention provide a deep learning-based method, device and storage medium for license plate character segmentation.
  • The original image is marked to obtain a label image, and a neural network is constructed from the original image and the label image. Based on the neural network and a softmax regression loss function, the regions in the original image are classified to obtain a classified image, which is compared with the label image; if the two are inconsistent, training is performed to obtain a neural network model.
  • Passing an original image through the neural network model then yields its label image, so that plates with touching characters, noise-polluted plates and defaced plates can all be effectively segmented into characters, giving more accurate license plate character regions and improving segmentation accuracy. The details are described below by way of examples.
  • FIG. 1 is a flowchart of a method for segmentation of license plate characters based on deep learning according to an embodiment of the present invention.
  • Step S101: acquire an original image of a license plate, and mark the original image to obtain a label image.
  • The original image of the license plate is first acquired; the areas between adjacent characters on the original image are marked with a first label value, and the other areas are marked with a second label value, where the first label value is 1 and the second label value is 0. The areas marked 1 and the areas marked 0 together constitute the label image.
  • Each license plate thus yields two corresponding images: the original image and the label image.
  • For example, if the original license plate image reads "Jing C·874", the area between the adjacent characters "Jing" and "C" is marked as 1, and so on for the other gaps; all areas other than the gaps between adjacent characters are marked as 0, thereby constituting the label image.
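The marking rule of step S101 can be illustrated with a toy example (a sketch only; the plate size and the gap column ranges below are invented for illustration, and real label images come from annotated plate photographs):

```python
import numpy as np

def make_label_image(height, width, gap_columns):
    """Build a 0/1 label image for a plate of the given size.
    `gap_columns` lists (start, end) column ranges lying between
    adjacent characters; those pixels get label 1, all others 0."""
    label = np.zeros((height, width), dtype=np.uint8)  # second label value: 0
    for start, end in gap_columns:
        label[:, start:end] = 1                        # first label value: 1
    return label

# Hypothetical inter-character gaps for a small 20x100 plate image.
label = make_label_image(20, 100, [(12, 15), (28, 31), (45, 48)])
```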
  • Step S102: construct a neural network from the original image and the label image.
  • The neural network has seven layers, each comprising a convolutional layer and an activation layer; the original image and the label image are passed through these layers in turn to form the neural network.
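The seven-layer structure can be sketched as follows (a minimal numpy illustration; the 3x3 kernel size, ReLU activations, single channel and 'same' zero padding are assumptions of this sketch, not details stated by the text, which ultimately feeds a two-class softmax per pixel):

```python
import numpy as np

def conv2d_same(x, k):
    """Naive single-channel 3x3 convolution with zero 'same' padding."""
    h, w = x.shape
    p = np.pad(x, 1)
    out = np.zeros_like(x, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * k)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def seven_layer_net(image, kernels):
    """Seven (convolution + activation) layers, as described in step S102."""
    x = image.astype(float)
    for k in kernels:
        x = relu(conv2d_same(x, k))
    return x

rng = np.random.default_rng(0)
kernels = [rng.standard_normal((3, 3)) * 0.1 for _ in range(7)]
features = seven_layer_net(np.ones((20, 100)), kernels)
```

Because every layer uses 'same' padding, the per-pixel feature map keeps the input's spatial size, which is what allows the later per-pixel probabilities P1 and P0 to line up with the plate image.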
  • Step S103: based on the neural network and a softmax regression loss function, classify the regions in the original image to obtain a classified image.
  • After the neural network has been constructed from the original image and the label image, the regions in the original image are classified into a classified image based on the neural network and the softmax regression loss function; for details, refer to FIG. 3.
  • The regions in the original image may be classified as follows: at least one region in the original image is classified, with the division positions classified as 1 (that is, label 1) and the other positions classified as 0, so that the classified image contains only the two values 0 and 1.
  • At least one position may exist in the original image, and each position may be measured in pixels; for example, one pixel may correspond to one position, or several pixels may correspond to one position.
  • Step S104: compare the classified image with the label image to determine whether they are consistent. If not, perform step S105; if they are consistent, perform step S107.
  • Step S105: train on the classified image to obtain a neural network model.
  • The classified image is compared with the label image to determine whether the two match; if they do not match, training is performed on the classified image to obtain a neural network model.
  • At this point, the present embodiment has established a neural network model based on the original image.
  • The number of original images is not limited in this embodiment; the original images of multiple license plates may be used for training to obtain the neural network model.
  • The embodiment may further input an original license plate image into the neural network model to obtain the label image to be compared, computed from the original image by the model; this label image to be compared is then compared with the label image of the same original plate image obtained in step S101. If the two match, the neural network model can be considered successfully trained; if not, original images of more license plates can be used to continue training the neural network model until the label image it produces matches the label image.
  • The matching of the two label images can be measured by a degree of match: for example, if the proportion of positions with the same label value in the two label images reaches a preset value, say 99%, the two label images may be considered to match.
  • The preset value can be set according to the actual situation and is not enumerated exhaustively in this embodiment.
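The matching degree described above can be computed as a simple agreement ratio (a sketch; the function name is invented, and the 99% default follows the example in the text):

```python
import numpy as np

def label_images_match(a, b, preset=0.99):
    """Return (agreement_ratio, matched?) for two equal-shape 0/1 label
    images, following the matching-degree rule described above."""
    ratio = float(np.mean(a == b))
    return ratio, ratio >= preset

a = np.zeros((10, 10), dtype=np.uint8)
b = a.copy()
b[0, 0] = 1                     # one disagreeing pixel out of 100
ratio, ok = label_images_match(a, b)
```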
  • Step S106: analyze an image of the license plate to be analyzed based on the neural network model, to obtain a label image corresponding to that license plate.
  • The image of the license plate to be analyzed is passed through the neural network model, which outputs, for each pixel, the probability of the first label value and the probability of the second label value; these probabilities are then processed into a mask image of the image to be analyzed, and finally the mask image is post-processed to obtain the label image of the license plate to be analyzed.
  • The foregoing describes the processing flow of using the neural network model established in step S105 to analyze other license plates and obtain their label images.
  • Step S106 may be implemented by the following steps:
  • Step S201: pass the image of the license plate to be analyzed through the neural network model to obtain, for each pixel, the probability of the first label value and the probability of the second label value.
  • The first label value may be 1 and the second label value may be 0; that is, the model finally outputs, for each pixel of the image, the probability that its label value is 1 and the probability that its label value is 0. The probability of label value 1 is denoted P1, where P1(i, j) is the probability that the pixel in the i-th row and j-th column of the image has label value 1; the probability of label value 0 is denoted P0, where P0(i, j) is the probability that that pixel has label value 0.
  • A label value of 1 represents a gap between adjacent characters of the license plate, and a label value of 0 represents the other areas outside the gaps.
  • Step S202: process the image of the license plate to be analyzed according to the per-pixel probabilities of the first and second label values, to obtain a mask image.
  • Here P1(i, j) is the probability of the first label value corresponding to the pixel in the i-th row and j-th column of the image of the license plate to be analyzed; P0(i, j) is the probability of the second label value corresponding to that pixel; M is the height of the image of the license plate to be analyzed; and N is its width.
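The patent's literal mask formula is not reproduced in this text, but for a two-class per-pixel output a natural decision rule is to compare the two probabilities; the following sketch assumes that argmax rule:

```python
import numpy as np

def mask_from_probs(p1, p0):
    """Per-pixel decision (an assumed argmax rule, not the patent's
    literal formula): a pixel is set to 1 where the gap probability
    P1(i, j) exceeds P0(i, j), and 0 otherwise."""
    return (p1 > p0).astype(np.uint8)

# Toy probabilities for a 4x6 image: columns 2-3 look like a gap.
p1 = np.zeros((4, 6))
p1[:, 2:4] = 0.9
p0 = 1.0 - p1
mask = mask_from_probs(p1, p0)
```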
  • Step S203: process the mask image to obtain a label image of the license plate to be analyzed.
  • Step S203 may be implemented by the following steps:
  • Step S301: count, for each column of the mask image, the number of pixels whose pixel value is the first pixel value. Here the first pixel value is 1, and the count is obtained by scanning the mask image column by column.
  • Step S302: determine, from these counts, the division positions between characters on the license plate to be analyzed. The count for each column is compared with a preset threshold; if the count corresponding to a column is greater than the preset threshold, that column is taken as the split position of the currently adjacent characters.
  • The preset threshold is denoted threshold, and its computation involves a parameter a. The value of a is determined through extensive experiments and can also be obtained statistically: when a is set to the value that minimizes the error between the segmentation positions obtained by the above method and the actual segmentation positions of the characters on the plate, the optimal value of a has been found.
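Steps S301 and S302 can be sketched as follows (the exact relation between the threshold and the parameter a is not given in the text; taking the threshold as a fraction a of the image height is purely an assumption of this sketch):

```python
import numpy as np

def split_positions(mask, a=0.5):
    """Count per-column 1-pixels in the mask (step S301) and return the
    columns whose count exceeds the threshold (step S302). Taking the
    threshold as a * height is an assumption of this sketch."""
    counts = mask.sum(axis=0)          # number of 1-pixels per column
    threshold = a * mask.shape[0]
    return np.flatnonzero(counts > threshold)

mask = np.zeros((10, 8), dtype=np.uint8)
mask[:, 3] = 1                         # a full gap column
mask[0:2, 6] = 1                       # a noisy partial column
cols = split_positions(mask, a=0.5)
```

With the assumed threshold, the noisy column 6 (only 2 of 10 pixels set) is rejected while the true gap at column 3 is kept, which is the role the parameter a plays in the text.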
  • Step S303: obtain the label image according to the division positions between the characters on the license plate to be analyzed.
  • Step S101 may be implemented by the following steps:
  • Step S401: mark the areas between adjacent characters of the original image with a first label value. Here the first label value is 1, and the areas between adjacent characters in the original image are marked as 1.
  • Step S402: mark the areas other than those between adjacent characters with a second label value. Here the second label value is 0, and in the original image all areas other than the gaps between adjacent characters are marked as 0.
  • Step S403: the areas marked with the first label value and the areas marked with the second label value constitute the label image; that is, the inter-character areas marked 1 and the areas marked 0 together form the label image.
  • During training, the neural network automatically learns the regularities of the characters on a license plate, such as the gaps between characters and the character contours, so plates with touching characters, stained characters or noise pollution are segmented effectively.
  • The invention thus provides a deep learning-based license plate character segmentation method: a label image is obtained by marking an original image; a neural network is constructed from the original image and the label image; based on the neural network and a softmax regression loss function, the regions in the original image are classified to obtain a classified image, which is compared with the label image; if they are inconsistent, training is performed on the classified image to obtain a neural network model; and passing the original image through that model yields the label image, giving a more accurate license plate character area and improving the accuracy of license plate character segmentation.
  • FIG. 7 is a schematic diagram of a license plate character segmentation apparatus based on deep learning according to Embodiment 2 of the present invention.
  • the apparatus includes a marking unit 10, a construction unit 20, a classified image generation unit 30, a determination unit 40, a training unit 50, and a label image generation unit 60.
  • a marking unit 10 configured to acquire an original image of the license plate, and mark the original image to obtain a label image
  • the constructing unit 20 is configured to construct a neural network according to the original image and the label image;
  • the classified image generating unit 30 is configured to classify the regions in the original image to obtain a classified image based on the neural network and the softmax regression loss function;
  • the determining unit 40 is configured to compare the classified image with the label image, and determine whether the classified image is consistent with the label image;
  • the training unit 50 is configured to train, based on the classified image, to obtain a neural network model when the classified image is inconsistent with the label image.
  • At this point, the present embodiment has established a neural network model based on the original image. The number of original images is not limited in this embodiment; the original images of multiple license plates may be used for training to obtain the neural network model.
  • An original license plate image can also be input into the neural network model to obtain the label image to be compared, computed by the model from the original image; this is compared with the label image of the same original plate image. If the two match, the neural network model can be considered successfully trained; if not, original images of more license plates can be used to continue training the neural network model until the label image it produces matches the label image.
  • The matching of the two label images may be measured by a degree of match: if the proportion of positions with the same label value in the two label images reaches a preset value, for example 99%, the two label images can be considered to match.
  • The preset value can be set according to the actual situation and is not enumerated exhaustively in this embodiment.
  • The device may further include:
  • the label image generating unit 60 is configured to analyze an image of the license plate to be analyzed based on the neural network model to obtain a label image corresponding to the license plate to be analyzed.
  • the tag image generating unit 60 includes:
  • a probability acquisition unit (not shown), configured to pass an image of the license plate to be analyzed to a neural network model, and obtain a probability of a first tag value corresponding to each pixel of the image of the license plate to be analyzed and a probability of the second tag value;
  • a first processing unit (not shown), configured to process an image of the license plate to be analyzed according to a probability of a first label value corresponding to each pixel and a probability of a second label value to obtain a mask image;
  • a second processing unit (not shown) is configured to process the mask image to obtain a label image of the license plate to be analyzed.
  • the first processing unit includes:
  • the second processing unit includes:
  • a statistical unit (not shown) for counting the number of pixels in each column of the mask image whose pixel value is the first pixel value
  • a label image acquisition unit (not shown) for obtaining a label image of the license plate to be analyzed according to the division position between the characters on the license plate to be analyzed.
  • the determining unit includes:
  • a comparison unit (not shown) for comparing the value corresponding to the number with a preset threshold
  • a split position determining unit (not shown), configured to take the value corresponding to the number as the split position of the currently adjacent characters if that value is greater than a preset threshold.
  • the marking unit 10 comprises:
  • a first tag value marking unit (not shown) for marking an area between adjacent characters of the original image as the first tag value
  • a second tag value marking unit (not shown) for marking other areas than the area between adjacent characters as the second tag value
  • a constituting unit (not shown) configured to form the label image from the area marked with the first label value and the area marked with the second label value.
  • The invention also provides a deep learning-based license plate character segmentation device, which obtains a label image by marking an original image, constructs a neural network from the original image and the label image, and, based on the neural network and a softmax regression loss function, classifies the regions in the original image to obtain a classified image.
  • The classified image is compared with the label image; if they are inconsistent, training is performed on the classified image to obtain a neural network model, and the original image passed through that model yields the label image, so that a more accurate license plate character area is obtained and the accuracy of license plate character segmentation is improved.
  • the device provided by the embodiment of the present invention may be specific hardware on the device or software or firmware installed on the device.
  • the implementation principle and the technical effects of the device provided by the embodiments of the present invention are the same as those of the foregoing method embodiments.
  • a person skilled in the art can clearly understand that, for the convenience and brevity of the description, the specific working processes of the foregoing system, device and unit can refer to the corresponding processes in the foregoing method embodiments, and details are not described herein again.
  • the disclosed apparatus and method may be implemented in other manners.
  • The device embodiments described above are merely illustrative; the division into units is only a logical functional division. In actual implementation, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • The mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some communication interfaces, devices or units, and may be electrical, mechanical or in other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in the embodiment provided by the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • If the functions are implemented in the form of software functional units and sold or used as standalone products, they can be stored in a computer-readable storage medium.
  • Based on this understanding, the part of the technical solution of the present invention that is essential or that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present invention.
  • Embodiments of the present invention also provide a storage medium including a set of instructions that, when executed, cause at least one processor to perform operations including: acquiring an original image of a license plate, and marking the original image to obtain a label image;
  • The foregoing storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

Provided in the present invention are a deep learning-based method and device for segmenting vehicle license plate characters, and a storage medium. The method comprises: acquiring the original image of a vehicle license plate and labeling it to obtain a labeled image; constructing a neural network on the basis of the original image and the labeled image; classifying regions of the original image on the basis of the neural network and a softmax regression loss function to obtain a classified image; comparing the classified image with the labeled image to determine whether they are consistent; and, if not, performing training on the basis of the classified image to obtain the neural network model.

Description

Deep learning-based license plate character segmentation method, device, and storage medium
Cross-Reference to Related Applications
This application is based on, and claims priority to, Chinese Patent Application No. 201610652746.X, filed on August 10, 2016, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of data recognition, and in particular to a deep learning-based license plate character segmentation method, device, and storage medium.
Background
License plate recognition is commonly used at checkpoints, electronic-police installations, toll stations, parking lots, and similar scenes. Traditional license plate recognition algorithms include license plate character segmentation, which divides the license plate image region to obtain all the independent character regions on the image, mainly by taking horizontal and vertical projections of the plate. Specifically: the license plate image is preprocessed to obtain a binarized image; the image is scanned line by line from top to bottom and from bottom to top to obtain the height range of the plate characters; the height range is scanned from left to right to determine the width range of each character; and each character's width range is then scanned again from top to bottom and from bottom to top to obtain a more precise height range for each character.
However, with the development of society and the need to maintain public security, more and more surveillance cameras are installed in cities. Their installation heights and angles far exceed the standards required by the intelligent transportation and security industries, and their imaging quality is uneven. One prominent problem is that the license plates to be recognized are becoming smaller and smaller; moreover, because of image-quality problems, plates are blurred, characters stick together or are contaminated by noise, and interference arises between characters. Traditional segmentation algorithms cannot find the division positions between characters through horizontal and vertical projection, and therefore cannot solve the above problems well.
Summary of the Invention
An object of the present invention is to provide a deep learning-based license plate character segmentation method, device, and storage medium that can effectively segment the characters of plates with stuck-together characters, heavily noise-contaminated plates, and defaced plates, thereby obtaining more accurate license plate character regions and improving the accuracy of license plate character segmentation.
In a first aspect, an embodiment of the present invention provides a deep learning-based license plate character segmentation method, the method comprising:
acquiring an original image of a license plate, and marking the original image to obtain a label image;
constructing a neural network according to the original image and the label image;
classifying the regions in the original image based on the neural network and a softmax regression loss function to obtain a classified image;
comparing the classified image with the label image to determine whether the classified image is consistent with the label image; and, if not, training based on the classified image to obtain a neural network model.
In the above solution, the method further includes: analyzing an image of a license plate to be analyzed based on the neural network model, to obtain a label image corresponding to the license plate to be analyzed.
In the above solution, analyzing the image of the license plate to be analyzed based on the neural network model to obtain the label image corresponding to the license plate to be analyzed includes:
passing the image of the license plate to be analyzed through the neural network model to obtain, for each pixel of the image, the probability of a first label value and the probability of a second label value;
processing the image of the license plate to be analyzed according to the probability of the first label value and the probability of the second label value corresponding to each pixel, to obtain a mask image;
processing the mask image to obtain the label image of the license plate to be analyzed.
In the above solution, processing the image of the license plate to be analyzed according to the probability of the first label value and the probability of the second label value corresponding to each pixel to obtain a mask image includes:
computing the mask image according to the following formula:
Ĥ(i,j) = 1 if P1(i,j) > P0(i,j), and Ĥ(i,j) = 0 otherwise,
where Ĥ is the mask image, P1(i,j) is the probability of the first label value for the pixel in row i and column j of the image of the license plate to be analyzed, P0(i,j) is the probability of the second label value for the pixel in row i and column j, i = 1, 2, 3, ..., M, j = 1, 2, 3, ..., N, M is the height of the image of the license plate to be analyzed, and N is the width of the image of the license plate to be analyzed.
In the above solution, processing the mask image to obtain the label image of the license plate to be analyzed includes:
counting, in each column of the mask image, the number of pixels whose pixel value is a first pixel value;
determining the division positions between characters on the license plate to be analyzed according to the counts;
obtaining the label image of the image of the license plate to be analyzed according to the division positions between the characters on the license plate.
In the above solution, determining the division positions between characters on the license plate to be analyzed according to the counts includes:
comparing the count for a column with a preset threshold;
if the count is greater than the preset threshold, taking that column as the division position between the current adjacent characters.
In the above solution, marking the original image to obtain a label image includes:
marking the regions between adjacent characters of the original image with a first label value;
marking the regions other than the regions between adjacent characters with a second label value;
the regions marked with the first label value and the regions marked with the second label value constituting the label image.
In a second aspect, an embodiment of the present invention provides a deep learning-based license plate character segmentation device, the device comprising:
a marking unit, configured to acquire an original image of a license plate and mark the original image to obtain a label image;
a construction unit, configured to construct a neural network according to the original image and the label image;
a classified-image generation unit, configured to classify the regions in the original image based on the neural network and a softmax regression loss function to obtain a classified image;
a judgment unit, configured to compare the classified image with the label image and determine whether the classified image is consistent with the label image;
a training unit, configured to, if they are not consistent, train based on the classified image to obtain a neural network model.
In the above solution, the device further includes:
a label image generation unit, configured to analyze an image of a license plate to be analyzed based on the neural network model, to obtain a label image corresponding to the license plate to be analyzed.
In the above solution, the label image generation unit includes:
a probability acquisition unit, configured to pass the image of the license plate to be analyzed through the neural network model to obtain, for each pixel of the image, the probability of a first label value and the probability of a second label value;
a first processing unit, configured to process the image of the license plate to be analyzed according to the probability of the first label value and the probability of the second label value corresponding to each pixel, to obtain a mask image;
a second processing unit, configured to process the mask image to obtain the label image of the license plate to be analyzed.
In the above solution, the first processing unit is configured to:
compute the mask image according to the following formula:
Ĥ(i,j) = 1 if P1(i,j) > P0(i,j), and Ĥ(i,j) = 0 otherwise,
where Ĥ is the mask image, P1(i,j) is the probability of the first label value for the pixel in row i and column j of the image of the license plate to be analyzed, P0(i,j) is the probability of the second label value for the pixel in row i and column j, i = 1, 2, 3, ..., M, j = 1, 2, 3, ..., N, M is the height of the image of the license plate to be analyzed, and N is the width of the image of the license plate to be analyzed.
In the above solution, the second processing unit includes:
a counting unit, configured to count, in each column of the mask image, the number of pixels whose pixel value is a first pixel value;
a determination unit, configured to determine the division positions between characters on the license plate to be analyzed according to the counts;
a label image acquisition unit, configured to obtain the label image of the image of the license plate to be analyzed according to the division positions between the characters on the license plate.
In the above solution, the determination unit includes:
a comparison unit, configured to compare the count for a column with a preset threshold;
a division position determination unit, configured to, when the count is greater than the preset threshold, take that column as the division position between the current adjacent characters.
In the above solution, the marking unit includes:
a first label value marking unit, configured to mark the regions between adjacent characters of the original image with a first label value;
a second label value marking unit, configured to mark the regions other than the regions between adjacent characters with a second label value;
a constituting unit, configured to constitute the label image from the regions marked with the first label value and the regions marked with the second label value.
In a third aspect, an embodiment of the present invention provides a storage medium including a set of instructions that, when executed, cause at least one processor to perform operations including:
acquiring an original image of a license plate, and marking the original image to obtain a label image;
constructing a neural network according to the original image and the label image;
classifying the regions in the original image based on the neural network and a softmax regression loss function to obtain a classified image;
comparing the classified image with the label image to determine whether the classified image is consistent with the label image; and, if not, training based on the classified image to obtain a neural network model.
Embodiments of the present invention provide a deep learning-based license plate character segmentation method, device, and storage medium. A label image is obtained by marking an original image; a neural network is constructed according to the original image and the label image; the regions in the original image are classified based on the neural network and a softmax regression loss function to obtain a classified image; the classified image is compared with the label image; and if the classified image is inconsistent with the label image, the classified image is used for training to obtain a neural network model. An original image passed through the neural network model yields a label image, so that a more accurate license plate character region is obtained and the accuracy of license plate character segmentation is improved.
Brief Description of the Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required by the embodiments are briefly introduced below. It should be understood that the following drawings show only certain embodiments of the present invention and therefore should not be regarded as limiting its scope; those of ordinary skill in the art may obtain other related drawings from these drawings without creative effort.
图1为本发明实施例一提供的一种基于深度学习的车牌字符分割方法流程图;FIG. 1 is a flowchart of a method for segmentation of a license plate character based on deep learning according to Embodiment 1 of the present invention; FIG.
图2为本发明实施例一提供的构造神经网络示意图;2 is a schematic diagram of a structural neural network according to Embodiment 1 of the present invention;
图3为本发明实施例一提供的与图2相对应的神经网络预测网络示意图;3 is a schematic diagram of a neural network prediction network corresponding to FIG. 2 according to Embodiment 1 of the present invention;
图4为本发明实施例一提供的一种基于深度学习的车牌字符分割方法 中步骤S106的流程图;FIG. 4 is a schematic diagram of a license plate character segmentation method based on deep learning according to Embodiment 1 of the present invention. The flowchart of step S106;
图5为本发明实施例一提供的另一种基于深度学习的车牌字符分割方法中步骤S106的流程图;FIG. 5 is a flowchart of step S106 in another method for segmentation of a license plate character based on deep learning according to Embodiment 1 of the present invention;
图6为本发明实施例一提供的一种基于深度学习的车牌字符分割方法中步骤S101的流程图;FIG. 6 is a flowchart of step S101 in a method for segmenting license plate characters based on deep learning according to Embodiment 1 of the present invention;
图7为本发明实施例二提供的一种基于深度学习的车牌字符分割装置示意图。FIG. 7 is a schematic diagram of a license plate character segmentation apparatus based on deep learning according to Embodiment 2 of the present invention.
Description of the reference signs:
10 - marking unit; 20 - construction unit; 30 - classified-image generation unit;
40 - judgment unit; 50 - training unit; 60 - label image generation unit.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. The components of the embodiments of the invention generally described and illustrated in the figures herein may be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments provided in the drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Existing license plate character segmentation techniques segment characters by horizontal and vertical projection, but they cannot segment plates whose characters stick together or are defaced or heavily contaminated by noise. The embodiments of the present invention provide a deep learning-based license plate character segmentation method, device, and storage medium. A label image is obtained by marking an original image; a neural network is constructed from the original image and the label image; the regions in the original image are classified based on the neural network and a softmax regression loss function to obtain a classified image; the classified image is compared with the label image; and if they are inconsistent, the classified image is used for training to obtain a neural network model. Passing an original image through the neural network model yields a label image, so that plates with stuck-together characters, heavily noise-contaminated plates, and defaced plates can be effectively segmented into characters, a more accurate license plate character region is obtained, and the accuracy of license plate character segmentation is improved. Detailed descriptions follow by way of the embodiments.
FIG. 1 is a flowchart of a deep learning-based license plate character segmentation method according to an embodiment of the present invention.
Referring to FIG. 1, in step S101, an original image of a license plate is acquired, and the original image is marked to obtain a label image.
Specifically, the original image of the license plate is first acquired; the regions between adjacent characters in the original image are marked with a first label value, and the other regions of the original image are marked with a second label value, where the first label value is 1 and the second label value is 0. The regions marked 1 and the regions marked 0 then constitute the label image.
In this way, each license plate corresponds to two images: the original image and the label image. For example, if the original image of the plate reads "京C·874", the region between the adjacent characters "京" and "C" is marked 1, and so on for the other gaps, while the regions other than those between adjacent characters are marked 0, thereby constituting the label image.
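The labeling rule above can be sketched in code. This is a minimal illustration, not the patent's implementation; the image size and the gap column range below are made-up values.

```python
def make_label_image(height, width, gap_columns):
    """Build a label image: pixels inside inter-character gaps get label
    value 1, all other pixels get label value 0.

    gap_columns is a list of (start, end) column ranges (end exclusive)
    covering the gaps between adjacent characters.
    """
    label = [[0] * width for _ in range(height)]
    for start, end in gap_columns:
        for i in range(height):
            for j in range(start, end):
                label[i][j] = 1
    return label

# Hypothetical 4x10 plate image with one two-column gap at columns 4-5.
label = make_label_image(4, 10, [(4, 6)])
```

Together with the original image, this label image forms one training pair.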
In step S102, a neural network is constructed according to the original image and the label image.
Reference may be made to the schematic diagram of constructing a neural network shown in FIG. 2. The neural network has seven layers, each comprising a convolution layer and an activation layer; passing the original image and the label image through the layers in sequence constitutes the neural network.
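The patent does not specify kernel sizes, strides, or padding for the seven convolution layers, so the sketch below only illustrates how spatial dimensions propagate through stacked convolutions, assuming 3x3 kernels with stride 1 and padding 1 (an assumed choice that preserves the input size, as a per-pixel labeling network requires).

```python
def conv_out(size, kernel, stride=1, pad=0):
    # Standard convolution output-size formula.
    return (size + 2 * pad - kernel) // stride + 1

h, w = 32, 96          # assumed plate-image height and width
for _ in range(7):     # seven convolution + activation layers
    h = conv_out(h, kernel=3, stride=1, pad=1)
    w = conv_out(w, kernel=3, stride=1, pad=1)
# With these assumed settings the output stays 32x96, so each output
# position maps back to one input pixel.
```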
In step S103, the regions in the original image are classified based on the neural network and a softmax regression loss function to obtain a classified image.
Here, in the above step S102 the original image and the label image are used to construct the neural network; the regions in the original image are then classified based on the neural network and the softmax regression loss function to obtain the classified image, as shown in FIG. 3.
The regions in the original image may be classified as follows: at least one region in the original image is classified, a region at a division position being classified as 1 (that is, marked 1) and other positions being classified as 0, so that the classified image contains only the two values 0 and 1. Further, the original image may contain at least one position, and each position may be measured in pixels; for example, one pixel may correspond to one position, or one block of pixels may correspond to one position.
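A minimal sketch of this per-position two-class decision follows; the two input logits per pixel are assumed to come from the trained network, and the function names are illustrative.

```python
import math

def softmax2(z0, z1):
    # Two-class softmax: probabilities of label value 0 and label value 1
    # for one position, computed in a numerically stable way.
    m = max(z0, z1)
    e0, e1 = math.exp(z0 - m), math.exp(z1 - m)
    s = e0 + e1
    return e0 / s, e1 / s

def classify_position(z0, z1):
    # A position is classified as 1 (a division position) when P1 > P0.
    p0, p1 = softmax2(z0, z1)
    return 1 if p1 > p0 else 0
```

Applying `classify_position` to every position yields an image containing only the values 0 and 1, as described above.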
In step S104, the classified image is compared with the label image to determine whether the classified image is consistent with the label image. If not, step S105 is performed; if so, step S107 is performed.
In step S105, the classified image is used for training to obtain a neural network model.
Here, the classified image is compared with the label image to determine whether they match; if they do not match, training based on the classified image is needed to obtain the neural network model.
At this point, this embodiment has established a neural network model based on the original image. It should be understood that this embodiment does not limit the number of original images; in actual processing, the original images of multiple license plates may be used for training to obtain the neural network model.
It should also be understood that, after step S105 is completed, the original image of a license plate may further be input into the neural network model to obtain a to-be-compared label image computed from the original image by the model.
The to-be-compared label image is compared with the label image of the original license plate image obtained in step S101. If the two match, the neural network model can be considered successfully established; if not, the model can continue to be trained with the original images of more license plates until the to-be-compared label image produced by the trained model matches the label image.
In addition, the matching of the two label images may be measured by a matching degree: for example, the two label images may be considered to match when the proportion of corresponding regions with identical label values reaches a preset value, say 99% identical. The preset value may be set according to the actual situation and is not exhaustively enumerated in this embodiment.
On the basis of the neural network model established in the foregoing steps, the following processing may also be included, as shown in FIG. 1:
In step S106, an image of a license plate to be analyzed is analyzed based on the neural network model, to obtain the label image corresponding to the license plate to be analyzed.
Here, the image of the license plate to be analyzed is passed through the neural network model, which outputs, for each pixel of the image, the probability of the first label value and the probability of the second label value; the image of the license plate to be analyzed is then processed to obtain a mask image; finally, the mask image is post-processed to obtain the label image of the license plate to be analyzed.
The foregoing is the processing flow of analyzing other license plates with the neural network model established in step S105 to obtain their label images.
Step S107: end.
Further, as shown in FIG. 4, in the deep learning-based license plate character segmentation method of the above embodiment, step S106 may be implemented by the following steps:
In step S201, the image of the license plate to be analyzed is passed through the neural network model to obtain, for each pixel of the image, the probability of the first label value and the probability of the second label value.
Here, the neural network model is obtained, the image of the license plate to be analyzed is input into it, and the model finally outputs the probability of the first label value and the probability of the second label value for each pixel of the image.
The image of the license plate to be analyzed may be denoted H, where H(i,j) is the pixel value in row i and column j of H, i = 1, 2, 3, ..., M, j = 1, 2, 3, ..., N, M is the height of H, and N is its width.
The first label value may be 1 and the second label value may be 0; that is, for each pixel of the image of the license plate to be analyzed, the model outputs the probability that its label value is 1 and the probability that it is 0. The probability of label value 1 is denoted P1, where P1(i,j) is the probability that the pixel in row i, column j of the image has label value 1; the probability of label value 0 is denoted P0, where P0(i,j) is the probability that the pixel in row i, column j has label value 0. Thus, for each pixel of the image, the probabilities of label values 1 and 0 sum to 1, as given by formula (1):
P0(i,j) + P1(i,j) = 1    (1)
Here, label value 1 represents a gap between adjacent characters of the plate, and label value 0 represents the other regions apart from the gaps.
In step S202, the image of the license plate to be analyzed is processed according to the probability of the first label value and the probability of the second label value corresponding to each pixel, to obtain a mask image.
Here, the image of the license plate to be analyzed is processed according to the probability that each pixel's label value is 1 and the probability that it is 0, to obtain a mask image Ĥ. The mask image Ĥ has the same width and height as the image H of the license plate to be analyzed. The mask image Ĥ is obtained as shown in formula (2):
Ĥ(i,j) = 1 if P1(i,j) > P0(i,j), and Ĥ(i,j) = 0 otherwise,    (2)
where Ĥ is the mask image, P1(i,j) is the probability of the first label value for the pixel in row i and column j of the image of the license plate to be analyzed, P0(i,j) is the probability of the second label value for the pixel in row i and column j, i = 1, 2, 3, ..., M, j = 1, 2, 3, ..., N, M is the height of the image of the license plate to be analyzed, and N is the width of the image of the license plate to be analyzed.
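Formula (2) reduces to a per-pixel comparison of the two probability maps; a direct sketch follows, using small hypothetical arrays in place of the network's outputs.

```python
def mask_image(p1, p0):
    """Apply formula (2): the mask pixel is 1 where P1(i,j) > P0(i,j), else 0."""
    return [[1 if p1[i][j] > p0[i][j] else 0
             for j in range(len(p1[0]))]
            for i in range(len(p1))]

# Hypothetical 2x2 probability maps; each (P0, P1) pair sums to 1 per formula (1).
p1 = [[0.9, 0.2], [0.6, 0.4]]
p0 = [[0.1, 0.8], [0.4, 0.6]]
mask = mask_image(p1, p0)
```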
步骤S203,对所述掩膜图像进行处理,得到所述待分析车牌的标签图像。Step S203, processing the mask image to obtain a label image of the license plate to be analyzed.
具体地,如图5所示,上述实施例基于深度学习的车牌字符分割方法中,步骤S203可采用如下步骤实现,包括:Specifically, as shown in FIG. 5, in the method for segmenting the license plate character based on the deep learning, the step S203 may be implemented by the following steps, including:
步骤S301，统计掩膜图像的每一列中像素值为第一像素值的像素的个数；Step S301, counting the number of pixels in each column of the mask image whose pixel value is the first pixel value;
这里，第一像素值为1，通过统计掩膜图像H̄每一列中像素值为1的像素的个数，个数可用T1(j)表示，其中，j=1,2,3,…,N，N表示待分析车牌的图像H的宽度。Here, the first pixel value is 1. The number of pixels with value 1 in each column of the mask image H̄ is counted; this number can be denoted T1(j), where j = 1, 2, 3, …, N, and N is the width of the image H of the license plate to be analyzed.
步骤S302,根据个数确定待分析车牌上的字符间的分割位置;Step S302, determining, according to the number, a division position between characters on the license plate to be analyzed;
具体地,将个数对应的数值与预设的阈值进行比较;如果个数对应的数值大于预设的阈值,则将个数对应的数值作为当前相邻的字符的分割位置。Specifically, the value corresponding to the number is compared with a preset threshold; if the value corresponding to the number is greater than the preset threshold, the value corresponding to the number is used as the segmentation position of the currently adjacent character.
这里,预设的阈值用threshold表示,具体过程为:Here, the preset threshold is represented by threshold, and the specific process is:
当T1(j)>threshold时，则将j作为当前相邻字符的分割位置，其中，threshold=a*M，M表示掩膜图像H̄的高度，a取0.2。When T1(j) > threshold, j is taken as the division position between the current adjacent characters, where threshold = a*M, M is the height of the mask image H̄, and a is set to 0.2.
这里，a值是经过大量的实验确定的，也可以通过统计的方法获取：当a取定某一个值，使得上述方法分割得到的分割位置与车牌上字符实际的分割位置误差最小时，则可以确定该值为a的最优值。Here, the value of a was determined through extensive experiments; it can also be obtained statistically: the value of a that minimizes the error between the division positions produced by the above method and the actual division positions of the characters on the license plate is taken as the optimal value of a.
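Steps S301 and S302 can be sketched as follows (plain Python; the function name and the toy mask are illustrative, with a = 0.2 as in the text):

```python
def split_positions(mask, a=0.2):
    """Steps S301-S302: T1(j) counts the 1-pixels in column j of the
    mask; column j is a division position when T1(j) > a * M,
    where M is the mask height (a = 0.2 as in the text)."""
    m, n = len(mask), len(mask[0])
    threshold = a * m
    t1 = [sum(mask[i][j] for i in range(m)) for j in range(n)]
    return [j for j in range(n) if t1[j] > threshold]

# 4x6 toy mask with a gap between characters at columns 2 and 3.
toy_mask = [[0, 0, 1, 1, 0, 0],
            [0, 0, 1, 1, 0, 0],
            [0, 0, 1, 0, 0, 0],
            [0, 0, 1, 1, 0, 0]]
print(split_positions(toy_mask))  # [2, 3]
```

With M = 4 the threshold is 0.8, so only columns whose count T1(j) exceeds 0.8 are kept as division positions.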
步骤S303,根据待分析车牌上的字符间的分割位置得到标签图像。Step S303, obtaining a label image according to the division position between the characters on the license plate to be analyzed.
进一步的,如图6所示,上述实施例基于深度学习的车牌字符分割方法中,步骤S101可采用如下步骤实现,包括:Further, as shown in FIG. 6 , in the method for segmentation of the license plate character based on the deep learning, the step S101 may be implemented by the following steps, including:
步骤S401,将原始图像的相邻字符之间的区域标记为第一标签值;Step S401, marking an area between adjacent characters of the original image as a first label value;
这里,第一标签值为1,将原始图像中相邻字符之间的区域标记为1。Here, the first tag value is 1, and the area between adjacent characters in the original image is marked as 1.
步骤S402,将除相邻字符之间的区域外的其它区域标记为第二标签值;Step S402, marking other areas except the area between adjacent characters as the second label value;
这里，第二标签值为0，在原始图像中，将除相邻字符之间的区域外的其它区域标记为0。Here, the second label value is 0; in the original image, all areas other than those between adjacent characters are marked as 0.
步骤S403,将标记为第一标签值的区域和标记为第二标签值的区域构成标签图像。In step S403, the area marked as the first label value and the area marked as the second label value constitute a label image.
这里,将标记为1的相邻字符之间的区域和标记为0的区域构成标签 图像。Here, the area between adjacent characters marked 1 and the area marked 0 constitute a label image.
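The labeling of steps S401-S403 can be sketched as follows (plain Python; the helper name and the simplifying assumption that each inter-character gap spans full pixel columns are illustrative):

```python
def make_label_image(height, width, gap_columns):
    """Steps S401-S403, under the simplifying assumption that each
    inter-character gap is a full column of pixels: pixels in a gap
    column get label 1, every other pixel gets label 0."""
    gaps = set(gap_columns)
    return [[1 if j in gaps else 0 for j in range(width)]
            for _ in range(height)]

# 2x5 toy label image with one gap between characters at column 2.
label = make_label_image(2, 5, gap_columns=[2])
print(label)  # [[0, 0, 1, 0, 0], [0, 0, 1, 0, 0]]
```

The two marked regions together form the label image used as the training target.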
通过这种特定的标记，在神经网络学习过程中能够自动学习车牌上字符的规则，比如字符间存在间隙和字符轮廓，对于字符粘连、字符污损和噪声污染较大的车牌实现有效的分割。Through this specific labeling, the neural network can automatically learn the rules of the characters on a license plate during training, such as the gaps between characters and the character contours, enabling effective segmentation even for license plates with touching characters, defaced characters, or heavy noise.
本发明提供了一种基于深度学习的车牌字符分割方法，通过将原始图像进行标记得到标签图像，根据原始图像和标签图像构造神经网络，基于所述神经网络和softmax回归损失函数，对所述原始图像中的区域进行分类得到分类后的图像，将分类后的图像与标签图像进行对比，如果分类后的图像与标签图像不一致，则将分类后的图像进行训练得到神经网络模型，将原始图像通过神经网络模型得到标签图像，从而获取更精确的车牌字符区域，提高车牌字符分割的准确性。The present invention provides a deep learning-based license plate character segmentation method: a label image is obtained by marking an original image; a neural network is constructed from the original image and the label image; based on the neural network and a softmax regression loss function, the regions in the original image are classified to obtain a classified image; the classified image is compared with the label image, and if they are inconsistent, the classified image is used for training to obtain a neural network model; an original image passed through the neural network model then yields its label image, so that a more accurate license plate character region is obtained and the accuracy of license plate character segmentation is improved.
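The two-class softmax underlying the regression loss function mentioned above can be sketched as follows (plain Python; the function name is hypothetical, and only the score-to-probability mapping is shown, not the loss itself):

```python
import math

def softmax2(z1, z0):
    """Two-class softmax: maps the network's raw scores for 'gap'
    (z1) and 'non-gap' (z0) to probabilities P1 and P0 that sum to 1."""
    e1, e0 = math.exp(z1), math.exp(z0)
    total = e1 + e0
    return e1 / total, e0 / total

p1, p0 = softmax2(2.0, 0.0)
print(round(p1, 3), round(p0, 3))  # 0.881 0.119
```

Applied per pixel, this produces exactly the P1(i, j) and P0(i, j) probability maps consumed by formula (2).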
图7为本发明实施例二提供的一种基于深度学习的车牌字符分割装置示意图。FIG. 7 is a schematic diagram of a license plate character segmentation apparatus based on deep learning according to Embodiment 2 of the present invention.
参照图7,该装置包括标记单元10、构造单元20、分类后图像生成单元30、判断单元40、训练单元50和标签图像生成单元60。Referring to FIG. 7, the apparatus includes a marking unit 10, a construction unit 20, a classified image generation unit 30, a determination unit 40, a training unit 50, and a label image generation unit 60.
标记单元10,用于获取车牌的原始图像,将原始图像进行标记得到标签图像;a marking unit 10, configured to acquire an original image of the license plate, and mark the original image to obtain a label image;
构造单元20,用于根据原始图像和标签图像构造神经网络;The constructing unit 20 is configured to construct a neural network according to the original image and the label image;
分类后图像生成单元30,用于基于所述神经网络和softmax回归损失函数,对所述原始图像中的区域进行分类得到分类后的图像;The classified image generating unit 30 is configured to classify the regions in the original image to obtain a classified image based on the neural network and the softmax regression loss function;
判断单元40,用于将分类后的图像与所述标签图像进行对比,判断分类后的图像与标签图像是否一致;The determining unit 40 is configured to compare the classified image with the label image, and determine whether the classified image is consistent with the label image;
训练单元50，用于在不一致的情况下，基于所述分类后的图像进行训练得到神经网络模型。The training unit 50 is configured to, in the case of inconsistency, perform training based on the classified image to obtain a neural network model.
至此，本实施例已经基于原始图像建立得到神经网络模型。另外，可以理解的是，本实施例中没有对原始图像的数量进行限定，实际处理中，可以采用多个车牌的原始图像进行训练以得到神经网络模型。So far, this embodiment has established a neural network model based on the original image. In addition, it can be understood that this embodiment does not limit the number of original images; in actual processing, the original images of multiple license plates may be used for training to obtain the neural network model.
另外,还可以理解的是,还可以将车牌的原始图像输入到神经网络模型,以得到该原始图像通过神经网络模型计算得到的待比对标签图像;In addition, it can also be understood that the original image of the license plate can also be input into the neural network model to obtain the to-be-aligned label image calculated by the original image through the neural network model;
利用待比对标签图像与得到的车牌的原始图像的标签图像进行对比，若两者匹配，则可以认为神经网络模型建立成功；若不匹配，可以继续通过更多的车牌的原始图像来训练神经网络模型，直至通过训练网络模型得到的待比对标签图像与标签图像匹配为止。The to-be-compared label image is compared with the label image of the obtained original image of the license plate. If the two match, the neural network model can be considered successfully established; if not, the neural network model can be further trained with the original images of more license plates until the to-be-compared label image produced by the trained network model matches the label image.
另外，前述两个标签图像的匹配，可以采用匹配程度来衡量，比如，可以为两个标签图像中相同的区域对应的标记值相同的比例达到预设值，比如，存在99%的相同，就可以认为两个标签图像匹配。其中，预设值可以根据实际情况进行设置，本实施例中不进行穷举。In addition, the matching of the two aforementioned label images may be measured by a degree of matching; for example, the two label images may be considered to match when the proportion of corresponding areas carrying the same label value reaches a preset value, such as 99% agreement. The preset value may be set according to the actual situation and is not enumerated exhaustively in this embodiment.
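The matching criterion described above can be sketched as follows (plain Python; names are hypothetical, with 99% agreement as the example preset value):

```python
def label_images_match(a, b, min_agreement=0.99):
    """Sketch of the matching test: two label images match when the
    fraction of pixels carrying the same label value reaches the
    preset value (99% is the example given above)."""
    total = len(a) * len(a[0])
    same = sum(1 for row_a, row_b in zip(a, b)
                 for va, vb in zip(row_a, row_b) if va == vb)
    return same / total >= min_agreement

img_a = [[0, 1, 0, 0]] * 25                   # 100 pixels
img_b = [[0, 1, 0, 0]] * 24 + [[0, 1, 0, 1]]  # exactly 1 pixel differs
print(label_images_match(img_a, img_b))  # True (99/100 pixels agree)
```

Dropping below the preset agreement ratio would signal that training should continue on more license plate images.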
在前述建立得到神经网络模型的基础上,还可以包括:Based on the foregoing establishment of the neural network model, the method may further include:
标签图像生成单元60,用于基于所述神经网络模型,对待分析车牌的图像进行分析,以得到所述待分析车牌对应的标签图像。The label image generating unit 60 is configured to analyze an image of the license plate to be analyzed based on the neural network model to obtain a label image corresponding to the license plate to be analyzed.
进一步地,标签图像生成单元60包括:Further, the tag image generating unit 60 includes:
概率获取单元(未示出),用于将待分析车牌的图像通过神经网络模型,得到待分析车牌的图像的每个像素对应的第一标签值的概率和第二标签值的概率;a probability acquisition unit (not shown), configured to pass an image of the license plate to be analyzed to a neural network model, and obtain a probability of a first tag value corresponding to each pixel of the image of the license plate to be analyzed and a probability of the second tag value;
第一处理单元(未示出),用于根据每个像素对应的第一标签值的概率和第二标签值的概率,对所述待分析车牌的图像进行处理得到掩膜图像;a first processing unit (not shown), configured to process an image of the license plate to be analyzed according to a probability of a first label value corresponding to each pixel and a probability of a second label value to obtain a mask image;
第二处理单元(未示出),用于对所述掩膜图像进行处理,得到所述待分析车牌的标签图像。A second processing unit (not shown) is configured to process the mask image to obtain a label image of the license plate to be analyzed.
进一步地,第一处理单元(未示出)包括:Further, the first processing unit (not shown) includes:
根据公式（2）计算掩膜图像，其中，H̄为所述掩膜图像，P1(i,j)为待分析车牌的图像的第i行和第j列的像素对应的第一标签值的概率，P0(i,j)为待分析车牌的图像的第i行和第j列的像素对应的第二标签值的概率，i=1,2,3…M，j=1,2,3…N，M为待分析车牌的图像的高度，N为待分析车牌的图像的宽度。The mask image is calculated according to formula (2), where H̄ is the mask image, P1(i, j) is the probability of the first label value for the pixel in the i-th row and j-th column of the image of the license plate to be analyzed, P0(i, j) is the probability of the second label value for that pixel, i = 1, 2, 3, …, M, j = 1, 2, 3, …, N, M is the height of the image of the license plate to be analyzed, and N is the width of the image of the license plate to be analyzed.
进一步地,第二处理单元(未示出)包括:Further, the second processing unit (not shown) includes:
统计单元(未示出),用于统计掩膜图像的每一列中像素值为第一像素值的像素的个数;a statistical unit (not shown) for counting the number of pixels in each column of the mask image whose pixel value is the first pixel value;
确定单元(未示出),用于根据个数确定待分析车牌上的字符间的分割位置;Determining a unit (not shown) for determining a division position between characters on the license plate to be analyzed according to the number;
标签图像获取单元(未示出),用于根据待分析车牌上的字符间的分割位置得到待分析车牌的标签图像。A label image acquisition unit (not shown) for obtaining a label image of the license plate to be analyzed according to the division position between the characters on the license plate to be analyzed.
进一步地,确定单元(未示出)包括:Further, the determining unit (not shown) includes:
比较单元(未示出),用于将个数对应的数值与预设的阈值进行比较;a comparison unit (not shown) for comparing the value corresponding to the number with a preset threshold;
分割位置确定单元(未示出),用于在个数对应的数值大于预设的阈值的情况下,将个数对应的数值作为当前相邻的字符的分割位置。The split position determining unit (not shown) is configured to use the value corresponding to the number as the split position of the currently adjacent character if the number corresponding to the number is greater than a preset threshold.
进一步地,标记单元10包括:Further, the marking unit 10 comprises:
第一标签值标记单元(未示出),用于将原始图像的相邻字符之间的区域标记为第一标签值;a first tag value marking unit (not shown) for marking an area between adjacent characters of the original image as the first tag value;
第二标签值标记单元(未示出),用于将除相邻字符之间的区域外的其它区域标记为第二标签值;a second tag value marking unit (not shown) for marking other areas than the area between adjacent characters as the second tag value;
构成单元(未示出),用于将标记为第一标签值的区域和标记为所述第二标签值的区域构成所述标签图像。A constituent unit (not shown) for constituting the area marked as the first label value and the area marked as the second label value constitute the label image.
本发明提供了一种基于深度学习的车牌字符分割装置，通过将原始图像进行标记得到标签图像，根据原始图像和标签图像构造神经网络，基于所述神经网络和softmax回归损失函数，对所述原始图像中的区域进行分类得到分类后的图像，将分类后的图像与标签图像进行对比，如果分类后的图像与标签图像不一致，则将分类后的图像进行训练得到神经网络模型，将原始图像通过神经网络模型得到标签图像，从而获取更精确的车牌字符区域，提高车牌字符分割的准确性。The present invention provides a deep learning-based license plate character segmentation apparatus: a label image is obtained by marking an original image; a neural network is constructed from the original image and the label image; based on the neural network and a softmax regression loss function, the regions in the original image are classified to obtain a classified image; the classified image is compared with the label image, and if they are inconsistent, the classified image is used for training to obtain a neural network model; an original image passed through the neural network model then yields its label image, so that a more accurate license plate character region is obtained and the accuracy of license plate character segmentation is improved.
本发明实施例所提供的装置可以为设备上的特定硬件或者安装于设备上的软件或固件等。本发明实施例所提供的装置,其实现原理及产生的技术效果和前述方法实施例相同,为简要描述,装置实施例部分未提及之处,可参考前述方法实施例中相应内容。所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,前述描述的系统、装置和单元的具体工作过程,均可以参考上述方法实施例中的相对应过程,在此不再赘述。The device provided by the embodiment of the present invention may be specific hardware on the device or software or firmware installed on the device. The implementation principle and the technical effects of the device provided by the embodiments of the present invention are the same as those of the foregoing method embodiments. For a brief description, where the device embodiment is not mentioned, reference may be made to the corresponding content in the foregoing method embodiments. A person skilled in the art can clearly understand that, for the convenience and brevity of the description, the specific working processes of the foregoing system, device and unit can refer to the corresponding processes in the foregoing method embodiments, and details are not described herein again.
在本发明所提供的实施例中，应该理解到，所揭露的装置和方法，可以通过其它的方式实现。以上所描述的装置实施例仅仅是示意性的，例如，所述单元的划分，仅仅为一种逻辑功能划分，实际实现时可以有另外的划分方式，又例如，多个单元或组件可以结合或者可以集成到另一个系统，或一些特征可以忽略，或不执行。另一点，所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些通信接口，装置或单元的间接耦合或通信连接，可以是电性，机械或其它的形式。In the embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division of the units is only a division by logical function, and there may be other division manners in actual implementation; for another example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through communication interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
另外,在本发明提供的实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。In addition, each functional unit in the embodiment provided by the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时，可以存储在一个计算机可读取存储介质中。基于这样的理解，本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来，该计算机软件产品存储在一个存储介质中，包括若干指令用以使得一台计算机设备（可以是个人计算机，服务器，或者网络设备等）执行本发明各个实施例所述方法的全部或部分步骤。If the functions are implemented in the form of software functional units and sold or used as a standalone product, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present invention.
本发明实施例还提供了一种存储介质，该存储介质包括一组指令，当执行所述指令时，引起至少一个处理器执行包括以下的操作：获取车牌的原始图像，将所述原始图像进行标记得到标签图像；An embodiment of the present invention further provides a storage medium including a set of instructions that, when executed, cause at least one processor to perform operations including: acquiring an original image of a license plate, and marking the original image to obtain a label image;
根据所述原始图像和所述标签图像构造神经网络;Constructing a neural network according to the original image and the label image;
基于所述神经网络和softmax回归损失函数,对所述原始图像中的区域进行分类得到分类后的图像;And classifying the regions in the original image to obtain a classified image based on the neural network and a softmax regression loss function;
将所述分类后的图像与所述标签图像进行对比,判断所述分类后的图像与所述标签图像是否一致;如果不一致,则基于所述分类后的图像进行训练得到神经网络模型。Comparing the classified image with the label image to determine whether the classified image is consistent with the label image; if not, training based on the classified image to obtain a neural network model.
而前述的存储介质包括：U盘、移动硬盘、只读存储器（ROM，Read-Only Memory）、随机存取存储器（RAM，Random Access Memory）、磁碟或者光盘等各种可以存储程序代码的介质。The foregoing storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium that can store program code.
应注意到:相似的标号和字母在下面的附图中表示类似项,因此,一旦某一项在一个附图中被定义,则在随后的附图中不需要对其进行进一步定义和解释,此外,术语“第一”、“第二”、“第三”等仅用于区分描述,而不能理解为指示或暗示相对重要性。It should be noted that similar reference numerals and letters indicate similar items in the following figures. Therefore, once an item is defined in a drawing, it is not necessary to further define and explain it in the subsequent drawings. Moreover, the terms "first", "second", "third", and the like are used merely to distinguish a description, and are not to be construed as indicating or implying a relative importance.
最后应说明的是：以上所述实施例，仅为本发明的具体实施方式，用以说明本发明的技术方案，而非对其限制，本发明的保护范围并不局限于此，尽管参照前述实施例对本发明进行了详细的说明，本领域的普通技术人员应当理解：任何熟悉本技术领域的技术人员在本发明揭露的技术范围内，其依然可以对前述实施例所记载的技术方案进行修改或可轻易想到变化，或者对其中部分技术特征进行等同替换；而这些修改、变化或者替换，并不使相应技术方案的本质脱离本发明实施例技术方案的精神和范围，都应涵盖在本发明的保护范围之内。因此，本发明的保护范围应以权利要求的保护范围为准。Finally, it should be noted that the above-described embodiments are merely specific implementations of the present invention, used to illustrate rather than limit its technical solutions, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that anyone familiar with this technical field may still modify the technical solutions described in the foregoing embodiments, readily conceive of variations, or make equivalent replacements of some of the technical features within the technical scope disclosed by the present invention; such modifications, variations, or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (15)

  1. 一种基于深度学习的车牌字符分割方法,所述方法包括:A method for segmentation of license plate characters based on deep learning, the method comprising:
    获取车牌的原始图像,将所述原始图像进行标记得到标签图像;Obtaining an original image of the license plate, and marking the original image to obtain a label image;
    根据所述原始图像和所述标签图像构造神经网络;Constructing a neural network according to the original image and the label image;
    基于所述神经网络和softmax回归损失函数,对所述原始图像中的区域进行分类得到分类后的图像;And classifying the regions in the original image to obtain a classified image based on the neural network and a softmax regression loss function;
    将所述分类后的图像与所述标签图像进行对比,判断所述分类后的图像与所述标签图像是否一致;如果不一致,则基于所述分类后的图像进行训练得到神经网络模型。Comparing the classified image with the label image to determine whether the classified image is consistent with the label image; if not, training based on the classified image to obtain a neural network model.
  2. 根据权利要求1所述的方法,其中,所述方法还包括:The method of claim 1 wherein the method further comprises:
    基于所述神经网络模型,对待分析车牌的图像进行分析,以得到所述待分析车牌对应的标签图像。Based on the neural network model, an image of the license plate to be analyzed is analyzed to obtain a label image corresponding to the license plate to be analyzed.
  3. 根据权利要求2所述的方法,其中,所述基于所述神经网络模型,对待分析车牌的图像进行分析,以得到所述待分析车牌对应的标签图像,包括:The method of claim 2, wherein the analyzing the image of the license plate to be analyzed based on the neural network model to obtain the label image corresponding to the license plate to be analyzed comprises:
    将所述待分析车牌的图像通过所述神经网络模型,得到所述待分析车牌的图像的每个像素对应的第一标签值的概率和第二标签值的概率;Passing the image of the license plate to be analyzed through the neural network model to obtain a probability of a first tag value corresponding to each pixel of the image of the license plate to be analyzed and a probability of a second tag value;
    根据所述每个像素对应的所述第一标签值的概率和所述第二标签值的概率,对所述待分析车牌的图像进行处理得到掩膜图像;And processing, according to the probability of the first label value corresponding to each pixel and the probability of the second label value, the image of the license plate to be analyzed to obtain a mask image;
    对所述掩膜图像进行处理,得到所述待分析车牌的标签图像。The mask image is processed to obtain a label image of the license plate to be analyzed.
  4. 根据权利要求3所述的方法,其中,所述根据所述每个像素对应的所述第一标签值的概率和所述第二标签值的概率,对所述待分析车牌的图像进行处理得到掩膜图像,包括:The method according to claim 3, wherein the image of the license plate to be analyzed is processed according to a probability of the first tag value corresponding to each pixel and a probability of the second tag value Mask images, including:
    根据下式计算所述掩膜图像:The mask image is calculated according to the following formula:
    H̄(i, j) = 1, if P1(i, j) > P0(i, j); H̄(i, j) = 0, otherwise    (2)
    其中，所述H̄为所述掩膜图像，P1(i,j)为所述待分析车牌的图像的第i行和第j列的像素对应的所述第一标签值的概率，P0(i,j)为所述待分析车牌的图像的第i行和第j列的像素对应的所述第二标签值的概率，i=1,2,3…M，j=1,2,3…N，所述M为所述待分析车牌的图像的高度，所述N为所述待分析车牌的图像的宽度。Where H̄ is the mask image, P1(i, j) is the probability of the first label value for the pixel in the i-th row and j-th column of the image of the license plate to be analyzed, P0(i, j) is the probability of the second label value for that pixel, i = 1, 2, 3, …, M, j = 1, 2, 3, …, N, M is the height of the image of the license plate to be analyzed, and N is the width of the image of the license plate to be analyzed.
  5. 根据权利要求3所述的方法,其中,所述对所述掩膜图像进行处理,得到所述待分析车牌的标签图像,包括:The method according to claim 3, wherein the processing the mask image to obtain the label image of the license plate to be analyzed comprises:
    统计所述掩膜图像的每一列中像素值为第一像素值的像素的个数；Counting the number of pixels in each column of the mask image whose pixel value is the first pixel value;
    根据所述个数确定所述待分析车牌上的字符间的分割位置;Determining, according to the number, a division position between characters on the license plate to be analyzed;
    根据所述待分析车牌上的字符间的分割位置,得到所述待分析车牌的图像的标签图像。And obtaining a label image of the image of the license plate to be analyzed according to the division position between the characters on the license plate to be analyzed.
  6. 根据权利要求5所述的方法,其中,所述根据所述个数确定所述待分析车牌上的字符间的分割位置包括:The method according to claim 5, wherein said determining, according to said number, a division position between characters on said license plate to be analyzed comprises:
    将所述个数对应的数值与预设的阈值进行比较;Comparing the value corresponding to the number with a preset threshold;
    如果所述个数对应的数值大于预设的阈值,则将所述个数对应的数值作为当前相邻的字符的分割位置。If the value corresponding to the number is greater than a preset threshold, the value corresponding to the number is used as the split position of the currently adjacent character.
  7. 根据权利要求1所述的方法,其中,所述将所述原始图像进行标记得到标签图像包括:The method of claim 1 wherein said marking said original image to obtain a label image comprises:
    将所述原始图像的相邻字符之间的区域标记为第一标签值;Marking an area between adjacent characters of the original image as a first tag value;
    将除所述相邻字符之间的区域外的其它区域标记为第二标签值;Marking other areas than the area between the adjacent characters as the second tag value;
    将标记为所述第一标签值的区域和标记为所述第二标签值的区域构成所述标签图像。An area marked as the first tag value and an area marked as the second tag value constitute the tag image.
  8. 一种基于深度学习的车牌字符分割装置,所述装置包括:A license plate character segmentation device based on deep learning, the device comprising:
    标记单元,用于获取车牌的原始图像,将所述原始图像进行标记得到标签图像; a marking unit, configured to acquire an original image of the license plate, and mark the original image to obtain a label image;
    构造单元,用于根据所述原始图像和所述标签图像构造神经网络;a constructing unit, configured to construct a neural network according to the original image and the label image;
    分类后图像生成单元,用于基于所述神经网络和softmax回归损失函数,对所述原始图像中的区域进行分类得到分类后的图像;a classified image generating unit, configured to classify the regions in the original image to obtain a classified image based on the neural network and a softmax regression loss function;
    判断单元,用于将所述分类后的图像与所述标签图像进行对比,判断所述分类后的图像与所述标签图像是否一致;a determining unit, configured to compare the classified image with the label image, and determine whether the classified image is consistent with the label image;
    训练单元，用于如果不一致，则基于所述分类后的图像进行训练得到神经网络模型。a training unit, configured to, if they are inconsistent, perform training based on the classified image to obtain a neural network model.
  9. 根据权利要求8所述的装置,其中,所述装置还包括:The apparatus of claim 8 wherein said apparatus further comprises:
    标签图像生成单元,用于基于所述神经网络模型,对待分析车牌的图像进行分析,以得到所述待分析车牌对应的标签图像。The label image generating unit is configured to analyze an image of the license plate to be analyzed based on the neural network model to obtain a label image corresponding to the license plate to be analyzed.
  10. 根据权利要求9所述的装置,其中,所述标签图像生成单元包括:The apparatus according to claim 9, wherein the label image generating unit comprises:
    概率获取单元,用于将所述待分析车牌的图像通过所述神经网络模型,得到所述待分析车牌的图像的每个像素对应的第一标签值的概率和第二标签值的概率;a probability acquisition unit, configured to pass the image of the license plate to be analyzed to the neural network model, to obtain a probability of a first label value and a probability of a second label value corresponding to each pixel of the image of the license plate to be analyzed;
    第一处理单元,用于根据所述每个像素对应的所述第一标签值的概率和所述第二标签值的概率,对所述待分析车牌的图像进行处理得到掩膜图像;a first processing unit, configured to process, according to a probability of the first label value corresponding to each pixel and a probability of the second label value, an image of the license plate to be analyzed to obtain a mask image;
    第二处理单元,用于对所述掩膜图像进行处理,得到所述待分析车牌的标签图像。And a second processing unit, configured to process the mask image to obtain a label image of the license plate to be analyzed.
  11. 根据权利要求10所述的装置,其中,所述第一处理单元包括:The apparatus of claim 10 wherein said first processing unit comprises:
    根据下式计算所述掩膜图像:The mask image is calculated according to the following formula:
    H̄(i, j) = 1, if P1(i, j) > P0(i, j); H̄(i, j) = 0, otherwise    (2)
    其中，所述H̄为所述掩膜图像，P1(i,j)为所述待分析车牌的图像的第i行和第j列的像素对应的所述第一标签值的概率，P0(i,j)为所述待分析车牌的图像的第i行和第j列的像素对应的所述第二标签值的概率，i=1,2,3…M，j=1,2,3…N，所述M为所述待分析车牌的图像的高度，所述N为所述待分析车牌的图像的宽度。Where H̄ is the mask image, P1(i, j) is the probability of the first label value for the pixel in the i-th row and j-th column of the image of the license plate to be analyzed, P0(i, j) is the probability of the second label value for that pixel, i = 1, 2, 3, …, M, j = 1, 2, 3, …, N, M is the height of the image of the license plate to be analyzed, and N is the width of the image of the license plate to be analyzed.
  12. 根据权利要求10所述的装置,其中,所述第二处理单元包括:The apparatus of claim 10 wherein said second processing unit comprises:
    统计单元,用于统计所述掩膜图像的每一列中像素值为第一像素值的像素的个数;a statistical unit, configured to count the number of pixels in each column of the mask image whose pixel value is the first pixel value;
    确定单元,用于根据所述个数确定所述待分析车牌上的字符间的分割位置;a determining unit, configured to determine, according to the number, a split position between characters on the license plate to be analyzed;
    标签图像获取单元,用于根据所述待分析车牌上的字符间的分割位置,得到所述待分析车牌的图像的标签图像。And a label image acquiring unit, configured to obtain a label image of the image of the license plate to be analyzed according to the split position between the characters on the license plate to be analyzed.
  13. 根据权利要求12所述的装置,其中,所述确定单元包括:The apparatus of claim 12, wherein the determining unit comprises:
    比较单元,用于将所述个数对应的数值与预设的阈值进行比较;a comparing unit, configured to compare the value corresponding to the number with a preset threshold;
    分割位置确定单元,用于在所述个数对应的数值大于预设的阈值的情况下,将所述个数对应的数值作为当前相邻的字符的分割位置。The segmentation position determining unit is configured to use the value corresponding to the number as the segmentation position of the currently adjacent character in a case where the value corresponding to the number is greater than a preset threshold.
  14. 根据权利要求8所述的装置,其中,所述标记单元包括:The apparatus of claim 8 wherein said marking unit comprises:
    第一标签值标记单元,用于将所述原始图像的相邻字符之间的区域标记为第一标签值;a first tag value marking unit, configured to mark an area between adjacent characters of the original image as a first tag value;
    第二标签值标记单元,用于将除所述相邻字符之间的区域外的其它区域标记为第二标签值;a second tag value marking unit, configured to mark other areas than the area between the adjacent characters as a second tag value;
    构成单元,用于将标记为所述第一标签值的区域和标记为所述第二标签值的区域构成所述标签图像。And a constituting unit configured to form the label image as an area marked as the first label value and an area marked as the second label value.
  15. 一种存储介质,该存储介质包括一组指令,当执行所述指令时,引起至少一个处理器执行包括以下的操作:获取车牌的原始图像,将所述原始图像进行标记得到标签图像;A storage medium comprising a set of instructions that, when executed, cause at least one processor to perform an operation comprising: acquiring an original image of a license plate, marking the original image to obtain a label image;
    根据所述原始图像和所述标签图像构造神经网络;Constructing a neural network according to the original image and the label image;
    基于所述神经网络和softmax回归损失函数，对所述原始图像中的区域进行分类得到分类后的图像；Classifying the regions in the original image based on the neural network and the softmax regression loss function to obtain a classified image;
    将所述分类后的图像与所述标签图像进行对比,判断所述分类后的图像与所述标签图像是否一致;如果不一致,则基于所述分类后的图像进行训练得到神经网络模型。 Comparing the classified image with the label image to determine whether the classified image is consistent with the label image; if not, training based on the classified image to obtain a neural network model.
PCT/CN2017/080128 2016-08-10 2017-04-11 Deep learning-based method and device for segmenting vehicle license plate characters, and storage medium WO2018028230A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610652746.XA CN106295646B (en) 2016-08-10 2016-08-10 A kind of registration number character dividing method and device based on deep learning
CN201610652746.X 2016-08-10

Publications (1)

Publication Number Publication Date
WO2018028230A1 true WO2018028230A1 (en) 2018-02-15

Family

ID=57667884

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/080128 WO2018028230A1 (en) 2016-08-10 2017-04-11 Deep learning-based method and device for segmenting vehicle license plate characters, and storage medium

Country Status (2)

Country Link
CN (1) CN106295646B (en)
WO (1) WO2018028230A1 (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109325492A (en) * 2018-08-17 2019-02-12 平安科技(深圳)有限公司 Character segmentation method, apparatus, computer equipment and storage medium
CN109858327A (en) * 2018-12-13 2019-06-07 安徽清新互联信息科技有限公司 A kind of character segmentation method based on deep learning
CN109948419A (en) * 2018-12-31 2019-06-28 上海眼控科技股份有限公司 A kind of illegal parking automatic auditing method based on deep learning
CN110399880A (en) * 2019-07-31 2019-11-01 深圳市捷顺科技实业股份有限公司 Recognition methods, device and the equipment of a kind of characters on license plate and license plate classification
CN110503716A (en) * 2019-08-12 2019-11-26 中国科学技术大学 A kind of automobile license plate generated data generation method
CN110544256A (en) * 2019-08-08 2019-12-06 北京百度网讯科技有限公司 Deep learning image segmentation method and device based on sparse features
CN111126286A (en) * 2019-12-22 2020-05-08 上海眼控科技股份有限公司 Vehicle dynamic detection method and device, computer equipment and storage medium
CN111126393A (en) * 2019-12-22 2020-05-08 上海眼控科技股份有限公司 Vehicle appearance refitting judgment method and device, computer equipment and storage medium
CN111488883A (en) * 2020-04-14 2020-08-04 上海眼控科技股份有限公司 Vehicle frame number identification method and device, computer equipment and storage medium
CN111507469A (en) * 2019-01-31 2020-08-07 斯特拉德视觉公司 Method and device for optimizing hyper-parameters of automatic labeling device
CN111681205A (en) * 2020-05-08 2020-09-18 上海联影智能医疗科技有限公司 Image analysis method, computer device, and storage medium
CN112651985A (en) * 2020-12-31 2021-04-13 康威通信技术股份有限公司 Method and system for positioning mileage signboard for tunnel inspection
CN113673511A (en) * 2021-07-30 2021-11-19 苏州鼎纳自动化技术有限公司 Character segmentation method based on OCR
CN114882727A (en) * 2022-03-15 2022-08-09 深圳市德驰微视技术有限公司 Parking space detection method based on domain controller, electronic device and storage medium

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106295646B (en) * 2016-08-10 2019-08-23 东方网力科技股份有限公司 A kind of registration number character dividing method and device based on deep learning
US10262236B2 (en) * 2017-05-02 2019-04-16 General Electric Company Neural network training image generation system
CN106971556B (en) * 2017-05-16 2019-08-02 中山大学 The recognition methods again of bayonet vehicle based on dual network structure
CN107239778B (en) * 2017-06-09 2020-01-03 中国科学技术大学 Efficient and accurate license plate recognition method
CN109389116B (en) * 2017-08-14 2022-02-08 阿里巴巴(中国)有限公司 Character detection method and device
CN110348428B (en) 2017-11-01 2023-03-24 腾讯科技(深圳)有限公司 Fundus image classification method and device and computer-readable storage medium
CN108921764B (en) * 2018-03-15 2022-10-25 中山大学 Image steganography method and system based on generation countermeasure network
CN109284686A (en) * 2018-08-23 2019-01-29 国网山西省电力公司计量中心 A kind of label identification method that camera automatic pitching is taken pictures
CN110969176B (en) * 2018-09-29 2023-12-29 杭州海康威视数字技术股份有限公司 License plate sample amplification method and device and computer equipment
CN109859233B (en) * 2018-12-28 2020-12-11 上海联影智能医疗科技有限公司 Image processing method and system, and training method and system of image processing model
CN111325061B (en) * 2018-12-14 2023-05-23 顺丰科技有限公司 Vehicle detection algorithm, device and storage medium based on deep learning
CN109829453B (en) * 2018-12-29 2021-10-12 天津车之家数据信息技术有限公司 Method and device for recognizing characters in card and computing equipment
CN110120047B (en) * 2019-04-04 2023-08-08 平安科技(深圳)有限公司 Image segmentation model training method, image segmentation method, device, equipment and medium
CN110263793A (en) * 2019-06-25 2019-09-20 北京百度网讯科技有限公司 Article tag recognition methods and device
CN110414527A (en) * 2019-07-31 2019-11-05 北京字节跳动网络技术有限公司 Character identifying method, device, storage medium and electronic equipment
CN110942004A (en) * 2019-11-20 2020-03-31 深圳追一科技有限公司 Handwriting recognition method and device based on neural network model and electronic equipment
CN112926610B (en) * 2019-12-06 2024-08-02 顺丰科技有限公司 License plate image screening model construction method and license plate image screening method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104809443A (en) * 2015-05-05 2015-07-29 上海交通大学 Convolutional neural network-based license plate detection method and system
US20150347860A1 (en) * 2014-05-30 2015-12-03 Apple Inc. Systems And Methods For Character Sequence Recognition With No Explicit Segmentation
CN105335743A (en) * 2015-10-28 2016-02-17 重庆邮电大学 Vehicle license plate recognition method
CN106295646A (en) * 2016-08-10 2017-01-04 东方网力科技股份有限公司 A kind of registration number character dividing method based on degree of depth study and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101408933A (en) * 2008-05-21 2009-04-15 浙江师范大学 Method for recognizing license plate character based on wide gridding characteristic extraction and BP neural network
CN105825235B (en) * 2016-03-16 2018-12-25 新智认知数据服务有限公司 A kind of image-recognizing method based on multi-characteristic deep learning


Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109325492B (en) * 2018-08-17 2023-12-19 平安科技(深圳)有限公司 Character cutting method, device, computer equipment and storage medium
CN109325492A (en) * 2018-08-17 2019-02-12 平安科技(深圳)有限公司 Character segmentation method, apparatus, computer equipment and storage medium
CN109858327A (en) * 2018-12-13 2019-06-07 安徽清新互联信息科技有限公司 A kind of character segmentation method based on deep learning
CN109858327B (en) * 2018-12-13 2023-06-09 安徽清新互联信息科技有限公司 Character segmentation method based on deep learning
CN109948419A (en) * 2018-12-31 2019-06-28 上海眼控科技股份有限公司 A kind of illegal parking automatic auditing method based on deep learning
CN111507469B (en) * 2019-01-31 2023-10-13 斯特拉德视觉公司 Method and device for optimizing super parameters of automatic labeling device
CN111507469A (en) * 2019-01-31 2020-08-07 斯特拉德视觉公司 Method and device for optimizing hyper-parameters of automatic labeling device
CN110399880A (en) * 2019-07-31 2019-11-01 深圳市捷顺科技实业股份有限公司 Recognition methods, device and the equipment of a kind of characters on license plate and license plate classification
CN110544256B (en) * 2019-08-08 2022-03-22 北京百度网讯科技有限公司 Deep learning image segmentation method and device based on sparse features
CN110544256A (en) * 2019-08-08 2019-12-06 北京百度网讯科技有限公司 Deep learning image segmentation method and device based on sparse features
CN110503716B (en) * 2019-08-12 2022-09-30 中国科学技术大学 Method for generating motor vehicle license plate synthetic data
CN110503716A (en) * 2019-08-12 2019-11-26 中国科学技术大学 A kind of automobile license plate generated data generation method
CN111126393A (en) * 2019-12-22 2020-05-08 上海眼控科技股份有限公司 Vehicle appearance refitting judgment method and device, computer equipment and storage medium
CN111126286A (en) * 2019-12-22 2020-05-08 上海眼控科技股份有限公司 Vehicle dynamic detection method and device, computer equipment and storage medium
CN111488883A (en) * 2020-04-14 2020-08-04 上海眼控科技股份有限公司 Vehicle frame number identification method and device, computer equipment and storage medium
CN111681205A (en) * 2020-05-08 2020-09-18 上海联影智能医疗科技有限公司 Image analysis method, computer device, and storage medium
CN112651985A (en) * 2020-12-31 2021-04-13 康威通信技术股份有限公司 Method and system for positioning mileage signboard for tunnel inspection
CN113673511A (en) * 2021-07-30 2021-11-19 苏州鼎纳自动化技术有限公司 Character segmentation method based on OCR
CN114882727A (en) * 2022-03-15 2022-08-09 深圳市德驰微视技术有限公司 Parking space detection method based on domain controller, electronic device and storage medium
CN114882727B (en) * 2022-03-15 2023-09-05 深圳市德驰微视技术有限公司 Parking space detection method based on domain controller, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN106295646B (en) 2019-08-23
CN106295646A (en) 2017-01-04

Similar Documents

Publication Publication Date Title
WO2018028230A1 (en) Deep learning-based method and device for segmenting vehicle license plate characters, and storage medium
Yu et al. Vision-based concrete crack detection using a hybrid framework considering noise effect
Wang et al. Asphalt pavement pothole detection and segmentation based on wavelet energy field
US11455805B2 (en) Method and apparatus for detecting parking space usage condition, electronic device, and storage medium
US9014432B2 (en) License plate character segmentation using likelihood maximization
CN110738125B (en) Method, device and storage medium for selecting detection frame by Mask R-CNN
Chen et al. Pavement crack detection and classification based on fusion feature of LBP and PCA with SVM
US9466000B2 (en) Dynamic Bayesian Networks for vehicle classification in video
US10423827B1 (en) Image text recognition
Jia et al. Region-based license plate detection
US8620078B1 (en) Determining a class associated with an image
US9400936B2 (en) Methods and systems for vehicle tag number recognition
WO2019232843A1 (en) Handwritten model training method and apparatus, handwritten image recognition method and apparatus, and device and medium
CN111507989A (en) Training generation method of semantic segmentation model, and vehicle appearance detection method and device
CN107194393B (en) Method and device for detecting temporary license plate
CN112307989B (en) Road surface object identification method, device, computer equipment and storage medium
US20150356372A1 (en) Character recognition method
Lee et al. Available parking slot recognition based on slot context analysis
Vanetti et al. Gas meter reading from real world images using a multi-net system
CN111898491B (en) Identification method and device for reverse driving of vehicle and electronic equipment
Abdellatif et al. A low cost IoT-based Arabic license plate recognition model for smart parking systems
WO2019232850A1 (en) Method and apparatus for recognizing handwritten chinese character image, computer device, and storage medium
CN112766273A (en) License plate recognition method
Maiano et al. A deep-learning–based antifraud system for car-insurance claims
Bao et al. Unpaved road detection based on spatial fuzzy clustering algorithm

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17838362

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17838362

Country of ref document: EP

Kind code of ref document: A1