CN107688784A - Character recognition method and storage medium based on fusion of deep and shallow features - Google Patents

Character recognition method and storage medium based on fusion of deep and shallow features Download PDF

Info

Publication number
CN107688784A
CN107688784A (application CN201710741294.7A)
Authority
CN
China
Prior art keywords
feature
layer
shallow
images
recognized
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710741294.7A
Other languages
Chinese (zh)
Inventor
张冬青
蔡滨海
刘坤朋
郑杭
张木连
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
FUJIAN LIUREN NETWORK SECURITY Co Ltd
Original Assignee
FUJIAN LIUREN NETWORK SECURITY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by FUJIAN LIUREN NETWORK SECURITY Co Ltd filed Critical FUJIAN LIUREN NETWORK SECURITY Co Ltd
Priority to CN201710741294.7A priority Critical patent/CN107688784A/en
Publication of CN107688784A publication Critical patent/CN107688784A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/62Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V20/625License plates
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition

Abstract

The invention provides a character recognition method and storage medium based on the fusion of deep and shallow features. A convolutional neural network model is trained to detect the deep features of an image to be recognized, and these are analysed together with the image's shallow features to recognise and classify the characters in the image. Compared with recognising characters through shallow features alone (the more obvious features, which the human eye can judge), introducing deep-feature recognition effectively increases the recognition rate for characters in the image to be recognized. By creatively combining deep-feature and shallow-feature recognition, the invention provides a robust license-plate character recognition method, improves the robustness of license-plate character recognition in surveillance environments, and effectively increases the accuracy of license-plate recognition.

Description

Character recognition method and storage medium based on fusion of deep and shallow features
Technical field
The present invention relates to the field of computer security technology, and more particularly to a character recognition method and storage medium based on the fusion of deep and shallow features.
Background technology
As the construction of a safe China accelerates, surveillance cameras are deployed ever more widely and their resolution keeps increasing, yet their placement is often haphazard. Compared with image acquisition devices installed at standard checkpoint positions, the cameras scattered throughout a surveillance network covering every street and lane (for example, those mounted above roads) face a number of problems when used for license-plate recognition: large variation in plate angle, and dust accumulating on the camera surface over long-term use, which blurs the captured image and increases noise. All of these make plates harder to recognise and lower the recognition rate.
The content of the invention
For this reason, a technical scheme for character recognition based on the fusion of deep and shallow features needs to be provided, in order to solve the problems of blurred images and low recognition rates that current cameras suffer when recognising license-plate information.
To achieve the above object, the inventors provide a character recognition method based on the fusion of deep and shallow features, the method comprising:
receiving a number of training images, and training a deep-feature-extraction convolutional neural network model on the training images, wherein each training image contains one or more characters and carries corresponding identification information;
extracting the deep features of each training image with the trained deep-feature-extraction convolutional neural network model;
receiving the shallow features of the training images, and fusing each training image's shallow features with its corresponding deep features to obtain the full features of that training image;
training a support vector machine on the full features of each training image and its corresponding identification information, to obtain a robust full-feature classification model;
extracting the deep features and shallow features of an image to be recognized, feeding the extracted deep and shallow features into the full-feature classification model, and outputting the classification result for the image to be recognized.
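The fusion step above is, in essence, the combination of two feature vectors into one "full feature". A minimal sketch follows; the patent does not specify the vector layout, so plain concatenation is assumed:

```python
def fuse_features(deep, shallow):
    # Fuse the CNN-extracted deep feature vector with the hand-crafted
    # shallow feature vector (e.g. HOG) by concatenation; the result is
    # the "full feature" fed to the SVM classifier.
    return list(deep) + list(shallow)

full = fuse_features([0.12, 0.87], [0.3, 0.1, 0.5])
print(len(full))  # 5: the full feature keeps every component of both vectors
```

Because concatenation preserves both vectors unchanged, the classifier is free to weight deep and shallow evidence independently.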
Further, " further feature extraction convolutional neural networks model is trained according to training image " to comprise the following steps:
Convolution operation is carried out to training image, generates several characteristic patterns;
Down-sampling operation is carried out to characteristic pattern;
Convolution operation several times and down-sampling operation are repeated, obtains abstract characteristics;
Full attended operation is carried out to abstract characteristics, extracts further feature corresponding to training image.
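The convolution step that opens this sequence can be sketched in pure Python as a minimal "valid" convolution over one feature map. Like most CNN libraries this actually computes cross-correlation, and real implementations apply many kernels to produce many maps:

```python
def conv2d(img, kernel):
    # Slide the kernel over the image (stride 1, no padding) and emit one
    # feature-map value per position; several kernels yield several maps.
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    return [[sum(img[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(w - kw + 1)]
            for i in range(h - kh + 1)]

# A diagonal kernel responds to diagonal structure in the input.
feature_map = conv2d([[1, 2, 0],
                      [3, 4, 0],
                      [0, 0, 0]], [[1, 0], [0, 1]])
print(feature_map)  # [[5, 2], [3, 4]]
```

Repeating this operation, interleaved with down-sampling, is what the steps above call obtaining abstract features.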
Further, the convolutional neural network model comprises convolutional layers and fully-connected layers, and "performing a fully-connected operation on the abstract features" comprises: if the layer preceding a fully-connected layer is itself a fully-connected layer, the fully-connected operation is a convolution operation with a 1 × 1 kernel; if the layer preceding the fully-connected layer is a convolutional layer, the fully-connected operation is a convolution operation with an h × w kernel, where h is the height of the output of the convolutional layer preceding the fully-connected layer and w is its width.
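The h × w case stated above, where a fully-connected layer over an h × w feature map equals a convolution whose kernel spans the whole map, can be illustrated directly. One output neuron is shown, with weights chosen arbitrarily for the example:

```python
def fc_as_full_conv(feature_map, weights):
    # An h x w kernel over an h x w input has exactly one valid position,
    # so the "convolution" degenerates to the dot product a fully-connected
    # neuron would compute over the flattened map.
    h, w = len(feature_map), len(feature_map[0])
    return sum(feature_map[i][j] * weights[i][j]
               for i in range(h) for j in range(w))

print(fc_as_full_conv([[1, 2], [3, 4]], [[0.5, 0.5], [0.5, 0.5]]))  # 5.0
```

One such kernel per output neuron reproduces the whole fully-connected layer, which is why the two formulations are interchangeable.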
Further, the down-sampling operation comprises performing pooling.
Further, the shallow features of the image to be recognized are obtained as follows:
normalizing the image to be recognized, and converting the normalized image to grayscale;
adjusting the contrast of the input image using gamma correction;
computing the gradient at each pixel of the image to capture contour information;
dividing the image into multiple chunks of size N × N and computing the feature descriptor of each chunk; each N × N chunk contains multiple M × M sub-blocks, and the chunk's descriptor is computed by calculating the feature descriptor of each M × M sub-block within the chunk and concatenating the sub-block descriptors;
concatenating the feature descriptors of all chunks to obtain the shallow features of the image to be recognized.
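The chunk/sub-block scheme in these steps can be sketched as follows. The per-sub-block descriptor here is a deliberately trivial stand-in (the sub-block mean) rather than a real gradient-orientation histogram; only the divide, describe and concatenate structure follows the steps above:

```python
def chunk_descriptor(chunk, m):
    # Split an N x N chunk into M x M sub-blocks, describe each sub-block,
    # and concatenate the sub-block descriptors in scan order.
    n = len(chunk)
    descriptor = []
    for i in range(0, n, m):
        for j in range(0, n, m):
            block = [chunk[i + di][j + dj] for di in range(m) for dj in range(m)]
            descriptor.append(sum(block) / len(block))  # stand-in descriptor
    return descriptor

def shallow_feature(chunks, m):
    # Concatenate the descriptors of every chunk into one shallow feature.
    return [v for c in chunks for v in chunk_descriptor(c, m)]

print(shallow_feature([[[1, 1], [3, 3]], [[2, 2], [2, 2]]], m=1))
# [1.0, 1.0, 3.0, 3.0, 2.0, 2.0, 2.0, 2.0]
```

Swapping the stand-in for an orientation histogram per sub-block would turn this skeleton into an actual HOG extractor.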
The inventors also provide a storage medium for storing a computer program which, when executed, performs the following steps:
receiving a number of training images, and training a deep-feature-extraction convolutional neural network model on the training images, wherein each training image contains one or more characters and carries corresponding identification information;
extracting the deep features of each training image with the trained deep-feature-extraction convolutional neural network model;
receiving the shallow features of the training images, and fusing each training image's shallow features with its corresponding deep features to obtain the full features of that training image;
training a support vector machine on the full features of each training image and its corresponding identification information, to obtain a robust full-feature classification model;
extracting the deep features and shallow features of an image to be recognized, feeding the extracted deep and shallow features into the full-feature classification model, and outputting the classification result for the image to be recognized.
Further, the computer program, when executed, also performs the following steps:
performing a convolution operation on the training image to generate several feature maps;
performing a down-sampling operation on the feature maps;
repeating the convolution and down-sampling operations several times to obtain abstract features;
performing a fully-connected operation on the abstract features to extract the deep features of the training image.
Further, the convolutional neural network model comprises convolutional layers and fully-connected layers, and the computer program, when executed, also performs: if the layer preceding a fully-connected layer is itself a fully-connected layer, the fully-connected operation is a convolution operation with a 1 × 1 kernel; if the layer preceding the fully-connected layer is a convolutional layer, the fully-connected operation is a convolution operation with an h × w kernel, where h is the height of the output of the convolutional layer preceding the fully-connected layer and w is its width.
Further, the computer program, when executed, also performs pooling.
Further, the computer program, when executed, also performs:
normalizing the image to be recognized, and converting the normalized image to grayscale;
adjusting the contrast of the input image using gamma correction;
computing the gradient at each pixel of the image to capture contour information;
dividing the image into multiple chunks of size N × N and computing the feature descriptor of each chunk; each N × N chunk contains multiple M × M sub-blocks, and the chunk's descriptor is computed by calculating the feature descriptor of each M × M sub-block within the chunk and concatenating the sub-block descriptors;
concatenating the feature descriptors of all chunks to obtain the shallow features of the image to be recognized.
The invention has the following features: by training a convolutional neural network model to detect the deep features of the image to be recognized, and analysing them together with its shallow features, recognition and classification of the characters in the image are realised. Compared with recognising characters through shallow features alone (the more obvious features, which the human eye can judge), introducing deep-feature recognition effectively increases the recognition rate for characters in the image to be recognized. By creatively combining deep-feature and shallow-feature recognition, the invention provides a robust license-plate character recognition method, improves the robustness of license-plate character recognition in surveillance environments, and effectively increases the accuracy of license-plate recognition.
Brief description of the drawings
Fig. 1 is the character identifying method based on further feature and shallow-layer Fusion Features that an embodiment of the present invention is related to Flow chart;
Fig. 2 is the character identifying method based on further feature and shallow-layer Fusion Features that another embodiment of the present invention is related to Flow chart;
Fig. 3 is the character identifying method based on further feature and shallow-layer Fusion Features that another embodiment of the present invention is related to Flow chart;
Fig. 4 is the character identifying method based on further feature and shallow-layer Fusion Features that another embodiment of the present invention is related to Application scenarios schematic diagram.
Detailed description of the embodiments
To describe in detail the technical content, structural features, objects and effects of the technical scheme, a detailed explanation is given below with reference to specific embodiments and the accompanying drawings.
Referring to Fig. 1, a flow chart of the character recognition method based on the fusion of deep and shallow features according to one embodiment of the present invention, the method comprises:
First, step S101 receives a number of training images and trains a deep-feature-extraction convolutional neural network model on them. Each training image contains one or more characters and carries corresponding identification information. The characters include, but are not limited to, letters, Chinese characters and digits. Identification information is the quantity used to distinguish images of different types. It may be determined from an abstract concept of the image content: for example, when the image carries license-plate information and contains characters beginning with "Fujian A", the identification information may be set to "Fuzhou plate". It may also be the content contained in the image itself: for the same plate beginning with "Fujian A", the identification information may be set to "Fujian A".
Next, step S102 extracts the deep features of each training image with the trained deep-feature-extraction convolutional neural network model. Deep features are, compared with shallow features, characteristic details of the image at a deeper level that the human eye does not easily perceive. A neural network model is typically formed by stacking multiple layers, including convolutional layers, fully-connected layers and pooling layers. Deep-feature extraction mainly performs two operations: convolution and sampling. Deep learning holds that human vision is hierarchical: the bottom level extracts edge features, the middle level recognises shapes or objects, and the high level analyses motion and behaviour. Going from bottom to top is a process of abstraction, and feature representation is likewise an abstraction: the higher the level, the more it expresses semantics or intent, as a combination of low-level features. The operation of a convolutional layer is somewhat similar to a sliding window: a convolution kernel acts on different regions to generate a corresponding feature map. Because convolution has this property, a given normalized character input image produces several different feature maps through the convolution operation. After the convolution operation ends, its result is down-sampled.
Next, step S103 receives the shallow features of the training images and fuses each training image's shallow features with its corresponding deep features, obtaining the full features of that training image. Shallow features are dominant features — those on the image that the human eye can perceive. Taking a license-plate image as an example, the clearly visible letters and digits on the image can be classed as shallow features. Shallow-feature extraction needs no training: the features best suited to the current project are chosen according to the experimenter's experience. Either global features of the image, such as HOG or LBP, or local features of the image, such as Haar, SIFT or SURF, or a combination of local features may be extracted. Shallow features effectively supplement the deep features, so that the resulting full features better reflect the characteristic information contained in the image. In this embodiment, the shallow features are HOG features.
Next, step S104 trains a support vector machine on the full features of each training image and its corresponding identification information, obtaining a robust full-feature classification model. The support vector machine (SVM) was first proposed by Corinna Cortes and Vapnik in 1995. It shows many unique advantages in solving small-sample, non-linear and high-dimensional pattern recognition problems, and can be extended to other machine-learning problems such as function fitting. In machine learning, an SVM (also called a support vector network) is a supervised learning model with associated learning algorithms that analyse data and recognise patterns, used for classification and regression analysis. In a practical application, given a set of training samples each marked as belonging to one of two classes, an SVM training algorithm builds a model that assigns new examples to one class or the other, making it a non-probabilistic binary linear classifier. An SVM model represents the examples as points in space, mapped so that the examples of the different classes are divided by a gap that is as wide as possible. New examples are then mapped into the same space and predicted to belong to a class based on which side of the gap they fall. In addition to performing linear classification, support vector machines can use the so-called kernel trick, implicitly mapping their inputs into a high-dimensional feature space to perform non-linear classification efficiently.
As shown in Fig. 4, once the support vector machine is trained, automatic classification of images to be recognized can be realised. Again taking license-plate recognition as an example, different images can be classified by recognising the characters on them: for instance, when a large number of training images are recognised as containing the characters "Fujian" and "A" at the beginning, those images can be grouped into a "Fuzhou vehicle" class. When the support vector machine later receives an image to be recognized, if the features extracted from that image include "Fujian" and "A" at the beginning, the image can likewise be classified as a "Fuzhou vehicle".
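The classification behaviour just described can be sketched with an off-the-shelf SVM. The feature vectors and labels below are invented toy data, not from the patent; in the patent they would be the fused full features and the identification information:

```python
from sklearn.svm import SVC  # any SVM implementation would serve

# Toy full features: the first two components stand in for "begins with
# Fujian A" evidence, the third for unrelated appearance variation.
X = [[1, 0, 0.2], [1, 0, 0.4], [0, 1, 0.7], [0, 1, 0.9]]
y = ["Fuzhou vehicle", "Fuzhou vehicle", "other", "other"]

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([[1, 0, 0.3]])[0])  # Fuzhou vehicle
```

A linear kernel suffices for this separable toy set; the kernel trick mentioned above would come into play when the full features are not linearly separable.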
Next, step S105 extracts the deep and shallow features of the image to be recognized, feeds the extracted deep and shallow features into the full-feature classification model, and outputs the classification result for the image. Shallow-feature extraction relies too heavily on human experience and subjective judgment; determining the classification with shallow features alone suffers from low accuracy, large errors and similar problems. Features learned automatically by machines — convolutional neural networks (CNNs), sparse autoencoders (AutoEncoder) and the like — can learn the profound features of an image (the deep features) automatically and reduce the influence of feature selection on the classifier, but deep-feature extraction has poor interpretability and its feature selection depends entirely on the choice of model. The present application combines the advantages of deep and shallow features, effectively increasing the accuracy of image feature extraction, recognition and classification.
As shown in Fig. 2 in certain embodiments, " further feature extraction convolutional neural networks mould is trained according to training image Type " comprises the following steps:Initially enter step S201 and convolution operation is carried out to training image, generate several characteristic patterns;Then Down-sampling operation is carried out to characteristic pattern into step S202;Down-sampling reduces net by reducing the resolution ratio of convolution characteristic image Sensitivity of the network for displacement and distortion;Then convolution operation several times and down-sampling behaviour are repeated into step S203 Make, obtain abstract characteristics;Then enter step S204 and full attended operation is carried out to abstract characteristics, extract corresponding to training image Further feature.
Further, the convolutional neural network model comprises convolutional layers and fully-connected layers, and "performing a fully-connected operation on the abstract features" comprises: if the layer preceding a fully-connected layer is itself a fully-connected layer, the fully-connected operation is a convolution operation with a 1 × 1 kernel; if the layer preceding the fully-connected layer is a convolutional layer, the fully-connected operation is a convolution operation with an h × w kernel, where h is the height of the output of the convolutional layer preceding the fully-connected layer and w is its width.
Further, the down-sampling operation comprises performing pooling. Pooling methods include max pooling (MaxPooling), mean pooling (MeanPooling), Gaussian pooling, trainable pooling and so on. Max pooling takes every N pixels of the image as a unit and extracts the pixel with the largest value among the N pixels as the down-sampled pixel. Mean pooling takes every N pixels of the image as a unit, computes the average value of the N pixels, and uses a pixel with that average value as the down-sampled pixel. Trainable pooling means training a function f in advance: when the N pixels are fed into the function, one output pixel is generated. Gaussian pooling follows the method of Gaussian blur, taking for each pixel the average value of its surrounding pixels. After several rounds of convolution and pooling, the deep representation of the image can be extracted; taking a license-plate image as an example, an abstract representation of the characters on the image is obtained.
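The max- and mean-pooling variants described above differ only in the reduction applied to each window, which a small sketch makes concrete (2 × 2 non-overlapping windows; Gaussian and trainable pooling would substitute other reductions):

```python
def pool(img, n, reduce_fn):
    # Generic n x n non-overlapping pooling over a 2-D feature map.
    return [[reduce_fn([img[i + di][j + dj]
                        for di in range(n) for dj in range(n)])
             for j in range(0, len(img[0]) - n + 1, n)]
            for i in range(0, len(img) - n + 1, n)]

fmap = [[1, 2, 5, 6],
        [3, 4, 7, 8],
        [0, 0, 1, 1],
        [0, 4, 1, 1]]
print(pool(fmap, 2, max))                        # [[4, 8], [4, 1]]
print(pool(fmap, 2, lambda v: sum(v) / len(v)))  # [[2.5, 6.5], [1.0, 1.0]]
```

Either way the map shrinks by a factor of n per axis, which is the resolution reduction that lowers sensitivity to displacement and distortion.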
As shown in Fig. 3, in some embodiments the shallow features are Histogram of Oriented Gradients (HOG) features. The HOG feature is a feature descriptor used for object detection in computer vision and image processing. It divides local regions of the image into cells (the sub-blocks mentioned below) and blocks (the chunks mentioned below), and computes and accumulates histograms of gradient orientations over them to constitute the feature. The theoretical basis of HOG extraction is that the density distribution of gradients and edge directions can describe the appearance and shape of a local target well; HOG features are therefore features that describe an image well. The shallow features of the image to be recognized are obtained as follows:
First, step S301 normalizes the image to be recognized and converts the normalized image to grayscale.
Then step S302 adjusts the contrast of the input image using gamma correction. Treating the image as a three-dimensional surface formed by abscissa, ordinate and gray value, gamma correction adjusts the contrast of the input image, reducing the influence of illumination changes and local shadows on subsequent processing and suppressing the interference of noise.
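Gamma correction as used in this step is a per-pixel power law. A minimal sketch on a grayscale image with intensities scaled to [0, 1] follows; the exponent 0.5 is an arbitrary example value, not taken from the patent:

```python
def gamma_correct(gray, gamma):
    # out = in ** gamma applied per pixel; gamma < 1 lifts dark regions
    # (compressing shadows), gamma > 1 darkens bright regions, which
    # evens out illumination before the gradients are computed.
    return [[p ** gamma for p in row] for row in gray]

print(gamma_correct([[0.25, 1.0], [0.0, 0.0625]], 0.5))
# [[0.5, 1.0], [0.0, 0.25]]
```

For 8-bit images the same curve is usually pre-computed once as a 256-entry lookup table rather than evaluated per pixel.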
Then step S303 computes the gradient at each pixel of the image to be recognized, capturing contour information. The contour information refers to the edge contours of the features on the image; taking a license-plate image as an example, the features on the image are the plate number (a character string composed of Chinese characters, letters and digits), and the contour information is the edge lines of each character.
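The per-pixel gradient in this step can be sketched with central differences; the magnitude picks out character edges, and the orientation is what the HOG histograms later bin. Border pixels are skipped here for brevity:

```python
import math

def pixel_gradient(img, i, j):
    # Central differences in x and y, then conversion to polar form
    # (magnitude, orientation in degrees).
    gx = img[i][j + 1] - img[i][j - 1]
    gy = img[i + 1][j] - img[i - 1][j]
    return math.hypot(gx, gy), math.degrees(math.atan2(gy, gx))

# A vertical bright edge: strong horizontal gradient, 0-degree orientation.
img = [[0, 0, 9],
       [0, 0, 9],
       [0, 0, 9]]
print(pixel_gradient(img, 1, 1))  # (9.0, 0.0)
```

Production HOG implementations typically fold the sign away, binning orientations over 0–180 degrees.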
Then step S304 divides the image to be recognized into multiple chunks of size N × N and computes the feature descriptor of each chunk. Each N × N chunk contains multiple M × M sub-blocks, and the chunk's descriptor is computed as follows: calculate the feature descriptor of each M × M sub-block within the chunk, then concatenate the descriptors of all sub-blocks in the chunk to obtain the descriptor of that N × N chunk.
Then step S305 concatenates the feature descriptors of all chunks; the concatenated descriptors form the shallow features of the image to be recognized. By this method, proceeding from small to large, the shallow features of the image to be recognized can be extracted quickly.
The inventors also provide a storage medium for storing a computer program. A storage medium is an electronic component with a data storage function, and includes, but is not limited to: RAM, ROM, magnetic disks, magnetic tape, optical disks, flash memory, USB drives, portable hard disks, memory cards, memory sticks, web server storage, network cloud storage and so on. The computer program, when executed, performs the following steps:
receiving a number of training images, and training a deep-feature-extraction convolutional neural network model on the training images, wherein each training image contains one or more characters and carries corresponding identification information;
extracting the deep features of each training image with the trained deep-feature-extraction convolutional neural network model;
receiving the shallow features of the training images, and fusing each training image's shallow features with its corresponding deep features to obtain the full features of that training image;
training a support vector machine on the full features of each training image and its corresponding identification information, to obtain a robust full-feature classification model;
extracting the deep features and shallow features of an image to be recognized, feeding the extracted deep and shallow features into the full-feature classification model, and outputting the classification result for the image to be recognized.
In some embodiments, the computer program, when executed, also performs the following steps:
performing a convolution operation on the training image to generate several feature maps;
performing a down-sampling operation on the feature maps;
repeating the convolution and down-sampling operations several times to obtain abstract features;
performing a fully-connected operation on the abstract features to extract the deep features of the training image.
In some embodiments, the convolutional neural network model comprises convolutional layers and fully-connected layers, and the computer program, when executed, also performs: if the layer preceding a fully-connected layer is itself a fully-connected layer, the fully-connected operation is a convolution operation with a 1 × 1 kernel; if the layer preceding the fully-connected layer is a convolutional layer, the fully-connected operation is a convolution operation with an h × w kernel, where h is the height of the output of the convolutional layer preceding the fully-connected layer and w is its width.
In some embodiments, the computer program, when executed, also performs pooling. Pooling methods include max pooling (MaxPooling), mean pooling (MeanPooling), Gaussian pooling, trainable pooling and so on. Max pooling takes every N pixels of the image as a unit and extracts the pixel with the largest value among the N pixels as the down-sampled pixel. Mean pooling takes every N pixels of the image as a unit, computes the average value of the N pixels, and uses a pixel with that average value as the down-sampled pixel. Trainable pooling means training a function f in advance: when the N pixels are fed into the function, one output pixel is generated. Gaussian pooling follows the method of Gaussian blur, taking for each pixel the average value of its surrounding pixels. After several rounds of convolution and pooling, the deep representation of the image can be extracted; taking a license-plate image as an example, an abstract representation of the characters on the image is obtained.
In certain embodiments, the computer program, when executed, further performs the following steps:
normalizing the image to be recognized, and converting the normalized image to grayscale;
adjusting the contrast of the input image using gamma correction;
computing the gradient at each pixel of the image to capture contour information;
dividing the image into multiple blocks in units of N × N and computing a feature descriptor for each block; each N × N block contains multiple M × M sub-blocks, and the descriptor of a block is computed by first computing the descriptor of each M × M sub-block within the block and then concatenating the descriptors of all the sub-blocks, yielding the descriptor of that N × N block;
concatenating the descriptors of all the blocks to obtain the shallow feature corresponding to the image to be recognized.
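The steps above closely mirror the classic HOG (histogram of oriented gradients) pipeline. A minimal sketch follows; the cell size, the 9-bin orientation histogram, and the gamma value of 0.5 are illustrative assumptions, since the patent leaves N, M, and the descriptor unspecified:

```python
import numpy as np

def shallow_feature(img, cell=8, bins=9):
    """HOG-style shallow feature: normalisation, gamma correction,
    per-pixel gradients, then per-cell orientation histograms
    concatenated into one vector."""
    img = img / (img.max() + 1e-8)      # normalise to [0, 1]
    img = np.power(img, 0.5)            # gamma correction (gamma = 0.5 assumed)
    gy, gx = np.gradient(img)           # per-pixel gradients -> contour info
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180
    h, w = (img.shape[0] // cell) * cell, (img.shape[1] // cell) * cell
    descs = []
    for i in range(0, h, cell):         # one descriptor per cell (sub-block)
        for j in range(0, w, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            descs.append(hist / (np.linalg.norm(hist) + 1e-8))
    return np.concatenate(descs)        # descriptors chained into one vector
```

A production implementation would more likely use a library routine such as `skimage.feature.hog`, which also handles the block-level normalisation the patent describes.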
The invention provides a character recognition method and storage medium based on the fusion of deep and shallow features. A convolutional neural network model is trained to detect the deep features of the image to be recognized, and these are analysed together with the shallow features to recognize and classify the characters in the image. Compared with recognizing characters using shallow features alone (the more obvious features, which can be judged by the human eye), introducing deep-feature recognition effectively increases the recognition rate of characters in the image to be recognized. By creatively combining deep-feature and shallow-feature recognition, the invention provides a robust license-plate character recognition method that improves the robustness of license-plate character recognition in surveillance environments and effectively increases the accuracy of license-plate recognition.
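The fusion step itself can be as simple as concatenating the two vectors into the "full feature". The sketch below does exactly that; the per-part L2 normalisation is an assumption added here so that neither feature dominates the classifier, since the patent does not specify the fusion operator:

```python
import numpy as np

def fuse(deep, shallow):
    """Fuse deep and shallow features into one 'full feature' vector by
    L2-normalising each part and concatenating them."""
    d = deep / (np.linalg.norm(deep) + 1e-8)
    s = shallow / (np.linalg.norm(shallow) + 1e-8)
    return np.concatenate([d, s])

deep, shallow = np.ones(16), np.ones(144)   # toy feature vectors
full = fuse(deep, shallow)
print(full.shape)  # (160,)
```

The fused vectors, paired with their identification labels, would then be fed to an SVM trainer (for example `sklearn.svm.SVC`) to obtain the full-feature classification model described above.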
It should be noted that, herein, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relation or order between those entities or operations. Moreover, the terms "comprising", "including", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or terminal device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a ..." or "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or terminal device that includes that element. In addition, herein, "greater than", "less than", "exceeding", and the like are understood to exclude the stated number, while "or more", "or less", "within", and the like are understood to include it.
Those skilled in the art should understand that the above embodiments may be provided as a method, an apparatus, or a computer program product, and may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. All or part of the steps of the methods in the above embodiments may be completed by a program instructing the relevant hardware; the program may be stored in a storage medium readable by a computer device and used to execute all or part of the steps described in the above embodiments. The computer device includes, but is not limited to: a personal computer, a server, a general-purpose computer, a special-purpose computer, a network device, an embedded device, a programmable device, a smart mobile terminal, a smart home device, a wearable smart device, a vehicle-mounted smart device, and the like. The storage medium includes, but is not limited to: RAM, ROM, magnetic disk, magnetic tape, optical disk, flash memory, USB flash drive, removable hard disk, memory card, memory stick, web server storage, network cloud storage, and the like.
The above embodiments are described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to the embodiments. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a computer device to produce a machine, so that the instructions executed by the processor of the computer device produce means for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-device-readable memory capable of directing a computer device to operate in a particular manner, so that the instructions stored in that memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be loaded onto a computer device, so that a series of operational steps are performed on the device to produce computer-implemented processing, whereby the instructions executed on the device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Although the above embodiments have been described, once those skilled in the art learn of the basic inventive concept, they may make other changes and modifications to these embodiments. Therefore, the foregoing are merely embodiments of the invention and do not thereby limit its scope of patent protection; every equivalent structure or equivalent process transformation made using the contents of the description and drawings of the invention, or direct or indirect use in other related technical fields, is likewise included within the scope of patent protection of the invention.

Claims (10)

1. A character recognition method based on the fusion of deep and shallow features, characterized in that the method comprises:
receiving a number of training images and training a deep-feature-extraction convolutional neural network model according to the training images, each training image containing one or more characters and having corresponding identification information;
extracting the deep feature of each training image with the trained deep-feature-extraction convolutional neural network model;
receiving the shallow feature of each training image and fusing the shallow feature with the corresponding deep feature to obtain the full feature corresponding to that training image;
training a support vector machine according to the full feature of each training image and its corresponding identification information to obtain a robust full-feature classification model;
extracting the deep and shallow features of an image to be recognized, inputting the extracted deep and shallow features into the full-feature classification model, and outputting the classification result corresponding to the image to be recognized.
2. The character recognition method based on the fusion of deep and shallow features according to claim 1, characterized in that "training a deep-feature-extraction convolutional neural network model according to the training images" comprises the following steps:
performing a convolution operation on a training image to generate several feature maps;
performing a down-sampling operation on the feature maps;
repeating the convolution and down-sampling operations several times to obtain abstract features;
performing a fully connected operation on the abstract features to extract the deep feature corresponding to the training image.
3. The character recognition method based on the fusion of deep and shallow features according to claim 2, characterized in that the convolutional neural network model includes convolutional layers and fully connected layers, and "performing a fully connected operation on the abstract features" includes: if the layer preceding the fully connected layer is another fully connected layer, the fully connected operation is a convolution with a 1 × 1 kernel; if the preceding layer is a convolutional layer, the fully connected operation is a convolution with an h × w kernel, where h is the height and w is the width of the output of the convolutional layer preceding the fully connected layer.
4. The character recognition method based on the fusion of deep and shallow features according to claim 2, characterized in that the down-sampling operation includes performing pooling.
5. The character recognition method based on the fusion of deep and shallow features according to claim 1, characterized in that the shallow feature of the image to be recognized is obtained in the following manner:
normalizing the image to be recognized, and converting the normalized image to grayscale;
adjusting the contrast of the input image using gamma correction;
computing the gradient at each pixel of the image to capture contour information;
dividing the image into multiple blocks in units of N × N and computing a feature descriptor for each block; each N × N block contains multiple M × M sub-blocks, and the descriptor of a block is computed by first computing the descriptor of each M × M sub-block within the block and then concatenating the descriptors of all the sub-blocks, yielding the descriptor of that N × N block;
concatenating the descriptors of all the blocks to obtain the shallow feature corresponding to the image to be recognized.
6. A storage medium, characterized in that the storage medium stores a computer program which, when executed, performs the following steps:
receiving a number of training images and training a deep-feature-extraction convolutional neural network model according to the training images, each training image containing one or more characters and having corresponding identification information;
extracting the deep feature of each training image with the trained deep-feature-extraction convolutional neural network model;
receiving the shallow feature of each training image and fusing the shallow feature with the corresponding deep feature to obtain the full feature corresponding to that training image;
training a support vector machine according to the full feature of each training image and its corresponding identification information to obtain a robust full-feature classification model;
extracting the deep and shallow features of an image to be recognized, inputting the extracted deep and shallow features into the full-feature classification model, and outputting the classification result corresponding to the image to be recognized.
7. The storage medium according to claim 6, characterized in that the computer program, when executed, further performs the following steps:
performing a convolution operation on a training image to generate several feature maps;
performing a down-sampling operation on the feature maps;
repeating the convolution and down-sampling operations several times to obtain abstract features;
performing a fully connected operation on the abstract features to extract the deep feature corresponding to the training image.
8. The storage medium according to claim 7, characterized in that the convolutional neural network model includes convolutional layers and fully connected layers, and the computer program, when executed, further performs: if the layer preceding the fully connected layer is another fully connected layer, the fully connected operation is a convolution with a 1 × 1 kernel; if the preceding layer is a convolutional layer, the fully connected operation is a convolution with an h × w kernel, where h is the height and w is the width of the output of the convolutional layer preceding the fully connected layer.
9. The storage medium according to claim 7, characterized in that the computer program, when executed, further performs pooling.
10. The storage medium according to claim 6, characterized in that the computer program, when executed, further performs:
normalizing the image to be recognized, and converting the normalized image to grayscale;
adjusting the contrast of the input image using gamma correction;
computing the gradient at each pixel of the image to capture contour information;
dividing the image into multiple blocks in units of N × N and computing a feature descriptor for each block; each N × N block contains multiple M × M sub-blocks, and the descriptor of a block is computed by first computing the descriptor of each M × M sub-block within the block and then concatenating the descriptors of all the sub-blocks, yielding the descriptor of that N × N block;
concatenating the descriptors of all the blocks to obtain the shallow feature corresponding to the image to be recognized.
CN201710741294.7A 2017-08-23 2017-08-23 A kind of character identifying method and storage medium based on further feature and shallow-layer Fusion Features Pending CN107688784A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710741294.7A CN107688784A (en) 2017-08-23 2017-08-23 A kind of character identifying method and storage medium based on further feature and shallow-layer Fusion Features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710741294.7A CN107688784A (en) 2017-08-23 2017-08-23 A kind of character identifying method and storage medium based on further feature and shallow-layer Fusion Features

Publications (1)

Publication Number Publication Date
CN107688784A true CN107688784A (en) 2018-02-13

Family

ID=61155123

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710741294.7A Pending CN107688784A (en) 2017-08-23 2017-08-23 A kind of character identifying method and storage medium based on further feature and shallow-layer Fusion Features

Country Status (1)

Country Link
CN (1) CN107688784A (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104112113A (en) * 2013-04-19 2014-10-22 无锡南理工科技发展有限公司 Improved characteristic convolutional neural network image identification method
CN105512661A (en) * 2015-11-25 2016-04-20 中国人民解放军信息工程大学 Multi-mode-characteristic-fusion-based remote-sensing image classification method
CN106778584A (en) * 2016-12-08 2017-05-31 南京邮电大学 A kind of face age estimation method based on further feature Yu shallow-layer Fusion Features
CN106709568A (en) * 2016-12-16 2017-05-24 北京工业大学 RGB-D image object detection and semantic segmentation method based on deep convolution network
CN106886778A (en) * 2017-04-25 2017-06-23 福州大学 A kind of car plate segmentation of the characters and their identification method under monitoring scene

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KE Xiao et al.: "Automatic Image Annotation Fusing Deep Features and Semantic Neighborhoods", Pattern Recognition and Artificial Intelligence (《模式识别与人工智能》) *

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108764051A (en) * 2018-04-28 2018-11-06 Oppo广东移动通信有限公司 Image processing method, device and mobile terminal
CN108875588A (en) * 2018-05-25 2018-11-23 武汉大学 Across camera pedestrian detection tracking based on deep learning
CN108898156A (en) * 2018-05-28 2018-11-27 江苏大学 A kind of green green pepper recognition methods based on high spectrum image
CN108960412A (en) * 2018-06-29 2018-12-07 北京京东尚科信息技术有限公司 Image-recognizing method, device and computer readable storage medium
CN109117898A (en) * 2018-08-16 2019-01-01 新智数字科技有限公司 A kind of hand-written character recognition method and system
CN109271998A (en) * 2018-08-31 2019-01-25 摩佰尔(天津)大数据科技有限公司 Character identifying method, device, equipment and storage medium
CN111783756A (en) * 2019-04-03 2020-10-16 北京市商汤科技开发有限公司 Text recognition method and device, electronic equipment and storage medium
CN111783756B (en) * 2019-04-03 2024-04-16 北京市商汤科技开发有限公司 Text recognition method and device, electronic equipment and storage medium
CN110533119A (en) * 2019-09-04 2019-12-03 北京迈格威科技有限公司 The training method of index identification method and its model, device and electronic system
CN110598701A (en) * 2019-09-17 2019-12-20 中控智慧科技股份有限公司 License plate anti-counterfeiting method and device and electronic equipment
CN110826567A (en) * 2019-11-06 2020-02-21 北京字节跳动网络技术有限公司 Optical character recognition method, device, equipment and storage medium
CN111274993A (en) * 2020-02-12 2020-06-12 深圳数联天下智能科技有限公司 Eyebrow recognition method and device, computing equipment and computer-readable storage medium
CN111274993B (en) * 2020-02-12 2023-08-04 深圳数联天下智能科技有限公司 Eyebrow recognition method, device, computing equipment and computer readable storage medium
CN111401139A (en) * 2020-02-25 2020-07-10 云南昆钢电子信息科技有限公司 Method for obtaining position of underground mine equipment based on character image intelligent identification
CN111401139B (en) * 2020-02-25 2024-03-29 云南昆钢电子信息科技有限公司 Method for obtaining mine underground equipment position based on character image intelligent recognition
CN111414908A (en) * 2020-03-16 2020-07-14 湖南快乐阳光互动娱乐传媒有限公司 Method and device for recognizing caption characters in video
CN111414908B (en) * 2020-03-16 2023-08-29 湖南快乐阳光互动娱乐传媒有限公司 Method and device for recognizing caption characters in video
WO2021237517A1 (en) * 2020-05-27 2021-12-02 京东方科技集团股份有限公司 Handwriting recognition method and apparatus, and electronic device and storage medium
CN111666932A (en) * 2020-05-27 2020-09-15 平安科技(深圳)有限公司 Document auditing method and device, computer equipment and storage medium
CN111666932B (en) * 2020-05-27 2023-07-14 平安科技(深圳)有限公司 Document auditing method, device, computer equipment and storage medium
CN111639636A (en) * 2020-05-29 2020-09-08 北京奇艺世纪科技有限公司 Character recognition method and device
CN112200201A (en) * 2020-10-13 2021-01-08 上海商汤智能科技有限公司 Target detection method and device, electronic equipment and storage medium
CN112258487A (en) * 2020-10-29 2021-01-22 德鲁动力科技(海南)有限公司 Image detection system and method
CN112508684A (en) * 2020-12-04 2021-03-16 中信银行股份有限公司 Joint convolutional neural network-based collection risk rating method and system
CN112580628A (en) * 2020-12-22 2021-03-30 浙江智慧视频安防创新中心有限公司 License plate character recognition method and system based on attention mechanism
CN113591864A (en) * 2021-07-28 2021-11-02 北京百度网讯科技有限公司 Training method, device and system for text recognition model framework
CN113343953B (en) * 2021-08-05 2021-12-21 南京信息工程大学 FGR-AM method and system for remote sensing scene recognition
CN113343953A (en) * 2021-08-05 2021-09-03 南京信息工程大学 FGR-AM method and system for remote sensing scene recognition
CN113837287A (en) * 2021-09-26 2021-12-24 平安科技(深圳)有限公司 Certificate abnormal information identification method, device, equipment and medium
CN113837287B (en) * 2021-09-26 2023-08-29 平安科技(深圳)有限公司 Certificate abnormal information identification method, device, equipment and medium

Similar Documents

Publication Publication Date Title
CN107688784A (en) A kind of character identifying method and storage medium based on further feature and shallow-layer Fusion Features
Li et al. Scale-aware fast R-CNN for pedestrian detection
CN106960202B (en) Smiling face identification method based on visible light and infrared image fusion
CN105574550B (en) A kind of vehicle identification method and device
CN105956560B (en) A kind of model recognizing method based on the multiple dimensioned depth convolution feature of pondization
CN104616316B (en) Personage's Activity recognition method based on threshold matrix and Fusion Features vision word
CN107945153A (en) A kind of road surface crack detection method based on deep learning
CN108062543A (en) A kind of face recognition method and device
CN109389129A (en) A kind of image processing method, electronic equipment and storage medium
CN110765833A (en) Crowd density estimation method based on deep learning
CN109711416A (en) Target identification method, device, computer equipment and storage medium
CN109657582A (en) Recognition methods, device, computer equipment and the storage medium of face mood
CN105956570B (en) Smiling face's recognition methods based on lip feature and deep learning
CN111813997A (en) Intrusion analysis method, device, equipment and storage medium
CN105844221A (en) Human face expression identification method based on Vadaboost screening characteristic block
CN107203775A (en) A kind of method of image classification, device and equipment
Yang et al. Anomaly detection in moving crowds through spatiotemporal autoencoding and additional attention
CN106203539A (en) The method and apparatus identifying container number
Masita et al. Pedestrian detection using R-CNN object detector
CN104050460B (en) The pedestrian detection method of multiple features fusion
Sun et al. Brushstroke based sparse hybrid convolutional neural networks for author classification of Chinese ink-wash paintings
CN109086794B (en) Driving behavior pattern recognition method based on T-LDA topic model
CN104679967B (en) A kind of method for judging psychological test reliability
CN106844785A (en) A kind of CBIR method based on conspicuousness segmentation
CN106548195A (en) A kind of object detection method based on modified model HOG ULBP feature operators

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180213
