CN110163193A - Image processing method, device, computer readable storage medium and computer equipment - Google Patents


Info

Publication number
CN110163193A
Authority
CN
China
Prior art keywords
image
certificate
corner
processed
corner location
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910228327.7A
Other languages
Chinese (zh)
Other versions
CN110163193B (en)
Inventor
姜媚 (Jiang Mei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910228327.7A priority Critical patent/CN110163193B/en
Publication of CN110163193A publication Critical patent/CN110163193A/en
Application granted granted Critical
Publication of CN110163193B publication Critical patent/CN110163193B/en
Legal status: Active
Anticipated expiration: legal status tracked

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G06V10/245 Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

This application relates to an image processing method and apparatus, a computer-readable storage medium, and a computer device. The method includes: obtaining an image to be processed; inputting the image to be processed into an image processing model to extract certificate corner features; processing the extracted certificate corner features with the image processing model to generate a corner-position prediction feature map corresponding to the image to be processed, where each pixel in the corner-position prediction feature map has a pixel value representing the probability of belonging to a certificate corner and corresponds to a pixel in the image to be processed; determining corner positions in the image to be processed according to the corner-position prediction feature map; and, in the image to be processed, locating the certificate image region based on the corner positions. The solution provided by this application improves the accuracy of certificate region delineation.

Description

Image processing method, device, computer readable storage medium and computer equipment
Technical field
This application relates to the field of computer technology, and in particular to an image processing method and apparatus, a computer-readable storage medium, and a computer device.
Background technique
With the development of society, more and more industries, such as communications, travel, and lodging, now require certificate information to be audited. In most current scenarios, the certificate region must first be located in the image containing the certificate before the certificate can be recognized and audited.
However, traditional edge-based certificate detection algorithms are only suitable for images with fairly simple backgrounds. In scenes with complex backgrounds or blurred edges, many regions are falsely detected, so the accuracy of certificate region delineation is low.
Summary of the invention
In view of this, it is necessary to provide an image processing method and apparatus, a computer-readable storage medium, and a computer device that address the technical problem of the low accuracy of the certificate regions delineated by traditional certificate detection.
An image processing method, comprising:
obtaining an image to be processed;
inputting the image to be processed into an image processing model to extract certificate corner features;
processing the extracted certificate corner features with the image processing model to generate a corner-position prediction feature map corresponding to the image to be processed, where each pixel in the corner-position prediction feature map has a pixel value representing the probability of belonging to a certificate corner and corresponds to a pixel in the image to be processed;
determining corner positions in the image to be processed according to the corner-position prediction feature map; and
in the image to be processed, locating the certificate image region based on the corner positions.
An image processing apparatus, comprising:
an obtaining module, configured to obtain an image to be processed;
an extraction module, configured to input the image to be processed into an image processing model to extract certificate corner features;
a generation module, configured to process the extracted certificate corner features with the image processing model to generate a corner-position prediction feature map corresponding to the image to be processed, where each pixel in the corner-position prediction feature map has a pixel value representing the probability of belonging to a certificate corner and corresponds to a pixel in the image to be processed;
a determining module, configured to determine corner positions in the image to be processed according to the corner-position prediction feature map; and
a locating module, configured to locate, in the image to be processed, the certificate image region based on the corner positions.
A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the above image processing method.
A computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the above image processing method.
With the above image processing method and apparatus, computer-readable storage medium, and computer device, after the image to be processed is obtained, it can be input into the image processing model to extract certificate corner features, and the model then processes the extracted certificate corner features to generate a corner-position prediction feature map corresponding to the image to be processed. Because each pixel in the resulting corner prediction feature map has a pixel value representing the probability of belonging to a certificate corner and corresponds to a pixel in the image to be processed, whether each pixel is a certificate corner can be determined from these pixel values. The corner positions in the image to be processed can thereby be determined, and the certificate image region can then be located in the image to be processed, improving the accuracy with which the certificate region is located in the image.
Detailed description of the invention
Fig. 1 is a diagram of the application environment of the image processing method in one embodiment;
Fig. 2 is a schematic diagram of the certificate corners included in an image in one embodiment;
Fig. 3 is a schematic flowchart of the image processing method in one embodiment;
Fig. 4 is a schematic flowchart of processing the image to be processed with the image processing model in one embodiment;
Fig. 5 shows the correspondence between pixels of the corner-position prediction feature map and the image to be processed in one embodiment;
Fig. 6 shows the position prediction feature map corresponding to each certificate corner in the image to be processed in one embodiment;
Fig. 7 is a schematic diagram of locating the certificate image region based on the corner positions in the image to be processed in one embodiment;
Fig. 8 is a schematic diagram of the data flow of the densely connected network in one embodiment;
Fig. 9 is a schematic flowchart of processing the image to be processed with cascaded image processing models in one embodiment;
Fig. 10 is a structural block diagram of the image processing apparatus in one embodiment;
Fig. 11 is a structural block diagram of the image processing apparatus in another embodiment;
Fig. 12 is a structural block diagram of the computer device in one embodiment.
Specific embodiment
In order to make the objectives, technical solutions, and advantages of this application clearer, the application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain this application and are not intended to limit it.
Fig. 1 is a diagram of the application environment of the image processing method in one embodiment. Referring to Fig. 1, the image processing method is applied to an image processing system. The image processing system includes a terminal 110 and a server 120, connected through a network. The terminal 110 may specifically be a desktop terminal or a mobile terminal, and the mobile terminal may specifically be at least one of a mobile phone, a tablet computer, a notebook computer, and the like. The server 120 may be implemented as an independent server or as a server cluster composed of multiple servers. The terminal 110 and the server 120 may each be used alone to execute the image processing method provided in the embodiments of this application, or may cooperate to execute it.
It should be noted that the image processing model involved in the embodiments of this application is a machine learning model that has acquired corner prediction capability through sample learning. Corner prediction refers to predicting the positions of a certificate's corners in the image. There may be one corner or more than one. For example, Fig. 2 is a schematic diagram of the certificate corners included in an image in one embodiment. When the certificate included in the image is an identity card, performing corner prediction on the image predicts the respective positions of the identity card's four corners, that is, the four corners 201, 202, 203, and 204 indicated in the figure.
The image processing model may be a neural network model, such as a CNN (Convolutional Neural Network) model. The network structure of the CNN model may be a DenseNet (Densely Connected Network) structure, a ResNet (Residual Neural Network) structure, a ShuffleNet (channel-shuffle network) structure, or the like. Of course, the image processing model may also use other types of models; the embodiments of this application are not limited in this respect.
In the embodiments of this application, a computer device (the terminal 110 or the server 120 shown in Fig. 1) obtains an image to be processed; inputs the image to be processed into the image processing model to extract certificate corner features; processes the extracted certificate corner features with the image processing model to generate a corner-position prediction feature map corresponding to the image to be processed, where each pixel in the corner-position prediction feature map has a pixel value representing the probability of belonging to a certificate corner and corresponds to a pixel in the image to be processed; determines corner positions in the image to be processed according to the corner-position prediction feature map; and, in the image to be processed, locates the certificate image region based on the corner positions. The image processing model may be trained on the terminal 110 or on the server 120, and the server 120 may also deliver the trained image processing model to the terminal 110 for use.
As shown in Fig. 3, in one embodiment an image processing method is provided. This embodiment is mainly illustrated by applying the method to the terminal 110 (or the server 120) in Fig. 1 above. Referring to Fig. 3, the image processing method specifically includes the following steps:
S302: Obtain an image to be processed.
The image to be processed is an image on which certificate image region location is to be performed. Certificate image region location refers to delineating, in the image, the image region where the certificate is located. The certificate may be, for example, an identity card or a driver's license.
Specifically, the computer device may obtain an image generated locally and use it as the image to be processed. The computer device may also crawl an image from the network and use it as the image to be processed, or obtain an image transmitted by another computer device and use that image as the image to be processed.
In a specific embodiment, in a scenario requiring identity verification, the user may place the certificate within the field of view of a camera built into or connected to the terminal, and a photo including the certificate is captured by the camera, yielding the image to be processed. For example, in an application's real-name authentication scenario, the user places a personal identity card within the field of view of the terminal's built-in camera; a photo including the identity card is captured by the camera, and the terminal uploads the captured photo to the server corresponding to the application for real-name authentication. The server obtains the image to be processed and locates the certificate image region in it for identity recognition.
In one embodiment, the image to be processed may be an image file with a visual form, such as an image file in JPG, JPEG, or PNG format. The image to be processed may also be image data without a visual form, for example a set of numerical values containing the pixel value corresponding to each pixel.
It should be noted that the image to be processed in the embodiments of this application is not limited to images that include a certificate image region. When the image to be processed includes a certificate image region, the embodiments provided in this application can locate that certificate image region in the image. When the image to be processed does not include a certificate image region, processing the image with the embodiments provided in this application yields the processing result that the image does not include a certificate image region.
In one embodiment, the image to be processed may be a two-dimensional image or a three-dimensional image.
S304: Input the image to be processed into the image processing model to extract certificate corner features.
A corner is where two edges of an object meet. An identity card, for example, has four edges, and these four edges join to form four corners. A corner point is the position of a corner. Certificate corner features are features intrinsic to the corner regions of a certificate image. A corner region here may be just the pixel position of a corner of the certificate image; for example, pixel position (20, 30) is a corner region. A corner region may also be a range of pixel positions extending outward from the pixel position of a corner of the certificate image as a reference point; for example, a circular region with a radius of 20 pixels centered on the pixel position of a corner of the certificate image.
It can be understood that the image processing model includes neural networks with feature extraction capability. When training the image processing model, these neural networks can be trained with specific samples to learn to extract the certificate corner features of certificate images. In subsequent use, the image processing model can then extract certificate corner features from the input image to be processed, so that the certificate corner positions are predicted from the extracted certificate corner features.
Specifically, the computer device may input the feature maps of the image to be processed into the image processing model for processing. The feature maps may specifically be color-channel feature maps. The feature extraction layers included in the image processing model (neural networks with feature extraction capability) can then extract certificate corner features from the input color-channel feature maps. When the image to be processed is a color image, its color-channel feature maps may be the three RGB channel feature maps; when the image to be processed is a grayscale image, its color-channel feature map may be the gray-channel feature map.
For example, Fig. 4 is a schematic flowchart of processing the image to be processed with the image processing model in one embodiment. Referring to Fig. 4, assume the image to be processed is a color image of size H1*W1. The image to be processed then comprises three color-channel feature maps: the R-channel feature map, the G-channel feature map, and the B-channel feature map. The pixel value of a pixel in each color-channel feature map is that pixel's channel value in that color channel. The image processing model takes these three color-channel feature maps as input; that is, the input feature maps have size 3*H1*W1. It can be understood that one color here reflects one feature, and one color channel is one feature channel. The number of feature channels is the dimensionality of the feature maps; the number of feature channels of the model input, i.e. the feature dimensionality, is 3.
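As a concrete illustration of the 3*H1*W1 input layout described above, the following minimal NumPy sketch (a hypothetical helper, not part of the patent) splits an H1 x W1 color image into three channel-first color-channel feature maps:

```python
import numpy as np

def to_channel_feature_maps(image):
    """Split an H x W x 3 RGB image into three H x W color-channel
    feature maps, giving the 3 x H1 x W1 model input layout."""
    assert image.ndim == 3 and image.shape[2] == 3
    # Move the channel axis to the front: (H, W, 3) -> (3, H, W).
    return np.transpose(image, (2, 0, 1))

# A toy 4 x 4 color image.
img = np.arange(4 * 4 * 3).reshape(4, 4, 3)
feature_maps = to_channel_feature_maps(img)
print(feature_maps.shape)  # (3, 4, 4): R, G, and B channel feature maps
```

Each of the three output planes holds one channel's value per pixel, matching the statement that a pixel's value in a color-channel feature map is its channel value in that color channel.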
S306: Process the extracted certificate corner features with the image processing model to generate a corner-position prediction feature map corresponding to the image to be processed, where each pixel in the corner-position prediction feature map has a pixel value representing the probability of belonging to a certificate corner and corresponds to a pixel in the image to be processed.
The corner-position prediction feature map is an image that predicts, at pixel level, whether each pixel is a certificate corner. The pixel value of a pixel in the corner-position prediction feature map reflects the probability that the corresponding pixel belongs to a certificate corner: the larger the pixel value, the more likely the corresponding pixel is a certificate corner. Of course, in other embodiments a pixel in the corner-position prediction feature map may instead have a pixel value representing the probability of not belonging to a certificate corner, corresponding to a pixel in the image to be processed. That is, the pixel value of a pixel in the corner-position prediction feature map reflects the probability that the corresponding pixel does not belong to a certificate corner, and the larger the pixel value, the more likely the corresponding pixel is not a certificate corner. In short, any feature map that distinguishes certificate corners from non-corners through its pixel values will serve.
The correspondence between pixels in the corner-position prediction feature map and pixels in the image to be processed may be one-to-one or one-to-many. That is, one pixel in the corner-position prediction feature map may correspond to one pixel in the image to be processed, or to multiple pixels in the image to be processed. When the correspondence is one-to-one, it may specifically be a one-to-one correspondence by pixel position. When the correspondence is one-to-many, it may specifically be a correspondence by the pixels' relative positions in their respective images. Intuitively, when the image processing model derives the corner-position prediction feature map from the color-channel feature maps of the image to be processed, it reduces the image resolution in order to reduce the network model size and improve prediction efficiency, so that the feature map has fewer pixels per feature channel. For example, the image resolution of the image to be processed may be 8 times that of the corner-position prediction feature map.
When the correspondence between pixels in the corner-position prediction feature map and pixels in the image to be processed is one-to-one, this can be understood as the image processing model making a pixel-by-pixel prediction on the image to be processed: from the pixel values in the corner-position prediction feature map, whether each pixel in the image to be processed is a certificate corner can be obtained one-to-one.
For example, Fig. 5 shows the correspondence between pixels of the corner-position prediction feature map and the image to be processed in one embodiment. In the upper part of Fig. 5, the image to be processed has size 8*8 and the corner-position prediction feature map also has size 8*8; the correspondence between their pixels is one-to-one, by pixel position. In the lower part of Fig. 5, the image to be processed has size 8*8 and the corner-position prediction feature map has size 4*4; when the correspondence between pixels of the corner-position prediction feature map and pixels of the image to be processed is one-to-many, pixels correspond by their relative positions in their respective images. It can be understood that this embodiment can process both two-dimensional and three-dimensional images, but for ease of viewing the drawings use two-dimensional images for illustration.
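The one-to-many correspondence by relative position can be made concrete with a small sketch. The helper below is illustrative only; the stride value and the block-center convention are assumptions for the example, not something the patent specifies:

```python
def heatmap_to_image_coords(row, col, stride):
    """Map a pixel of a downsampled corner-position prediction feature map
    to the center of the stride x stride block of input-image pixels it
    corresponds to (correspondence by relative position)."""
    return row * stride + stride // 2, col * stride + stride // 2

# Lower part of Fig. 5: an 8*8 image and a 4*4 feature map, so stride = 2.
# Feature-map pixel (1, 3) corresponds to image pixels in rows 2-3, cols 6-7.
print(heatmap_to_image_coords(1, 3, 2))  # (3, 7)
```

With a resolution ratio of 8, as in the example above, the same helper would be called with stride 8, mapping each feature-map pixel to an 8 x 8 block of image pixels.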
In one embodiment, the corner-position prediction feature map may be a single frame, in which case each pixel in that frame has a pixel value representing the probability of belonging to any certificate corner. The corner-position prediction feature map may also be a group of images, in which case each pixel of each frame in the group has a pixel value representing the probability of belonging to one specific certificate corner.
In one embodiment, the certificate corner features include certificate corner content features. Processing the extracted certificate corner features with the image processing model to generate the corner-position prediction feature map corresponding to the image to be processed includes: processing the extracted certificate corner content features with the image processing model to generate a position prediction feature map for each certificate corner included in the image to be processed, where the position prediction feature maps corresponding to the certificate corners are arranged in a preset certificate corner order, and each pixel in the position prediction feature map corresponding to a certificate corner has a pixel value representing the probability of belonging to that corner and corresponds to a pixel in the image to be processed.
It can be understood that, owing to its high distinctiveness, a certificate usually has a unified layout format, a fixed body structure, and fixed content characteristics in the neighborhood of each corner. Based on this, the embodiments of this application convert the certificate image region location problem into the position prediction problem of each certificate corner: once the position and order of each corner of the certificate are determined, the image region where the certificate is located can be found. The order of the corners may be obtained by taking one corner as the starting corner and sorting the remaining corners clockwise or counter-clockwise. Continuing with Fig. 2, in this embodiment the corner at the upper-left of the identity card may be taken as the starting corner, and sorting the remaining corners clockwise yields the ordered corners 201, 202, 203, and 204.
Specifically, certificate corner content features are features reflecting the content contained in the certificate corner regions. The computer device can train the image processing model with specific samples to learn to extract certificate corner content features. When the image processing model is used, it extracts the certificate corner content features from the image to be processed and processes them further to obtain a position prediction feature map for each certificate corner included in the image to be processed. The position prediction feature maps corresponding to the certificate corners are arranged in the preset certificate corner order, and each pixel in the position prediction feature map corresponding to a certificate corner has a pixel value representing the probability of belonging to that corner and corresponds to a pixel in the image to be processed.
For example, assume the certificate is an identity card, which has 4 corners, and assume the corner at the upper-left of the certificate is taken as the starting corner, with the corners sorted clockwise. Processing the extracted certificate corner features with the image processing model then yields, in order, the position prediction feature maps for the upper-left, upper-right, lower-right, and lower-left corners of the certificate, shown in turn in diagrams (1), (2), (3), and (4) of Fig. 6. The pixel value of a pixel in each map represents the probability that the pixel belongs to the corresponding certificate corner; for example, the pixel values in the position prediction feature map of Fig. 6 (1) represent the probability that each pixel belongs to the upper-left corner of the identity card.
It can be understood that, since each pixel in a corner's position prediction feature map has a pixel value representing the probability of belonging to that corner and corresponds to a pixel in the image to be processed, the map conveys information in much the same way as a heatmap; a position prediction feature map may therefore informally be called a position heatmap.
In a specific embodiment, the pixel values in a corner's position prediction feature map lie in the range (0, 1). The pixel with the largest pixel value is the predicted corner, and that pixel's position is one predicted corner position.
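Taking the maximum-valued pixel of a single corner's position prediction feature map can be sketched as follows (an illustrative NumPy fragment under the (0, 1) pixel-value assumption above, not the patent's implementation):

```python
import numpy as np

def predict_corner(heatmap):
    """Return (row, col, confidence) for the most likely corner in a
    single-corner position prediction feature map with values in (0, 1)."""
    idx = np.argmax(heatmap)                       # flat index of the peak
    row, col = np.unravel_index(idx, heatmap.shape)
    return int(row), int(col), float(heatmap[row, col])

heatmap = np.array([
    [0.01, 0.02, 0.01],
    [0.03, 0.90, 0.05],
    [0.02, 0.04, 0.02],
])
print(predict_corner(heatmap))  # (1, 1, 0.9)
```

The returned confidence is simply the peak pixel value, i.e. the predicted probability that this pixel belongs to the corner.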
In this embodiment, predicting each corner separately and obtaining a position prediction feature map per corner avoids interference between corners and further improves the accuracy of corner position prediction.
In one embodiment, when obtaining the position prediction feature map for each corner, the image processing model may also simultaneously obtain a background-channel feature map of the image to be processed. The background-channel feature map is used, when training the image processing model, to strengthen the constraints between the certificate corners.
Continuing with Fig. 4, assume the certificate is an identity card. The image processing model processes the three color-channel feature maps (3*H1*W1) of the image to be processed and obtains feature maps of size 5*H2*W2, i.e. the corner-position prediction feature maps. The feature maps of the first four channels (4*H2*W2) are, in turn, the position prediction feature maps for the upper-left, upper-right, lower-right, and lower-left corners of the identity card; the point with the largest value on each map is the predicted corner position. The feature map of the last channel is the background-channel feature map.
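Under the 5*H2*W2 output layout just described, reading one predicted position per corner channel might look like the following sketch (illustrative only; the patent does not prescribe an implementation):

```python
import numpy as np

def corners_from_output(output):
    """Take a model output of shape (5, H2, W2): four corner heatmaps in
    the preset order upper-left, upper-right, lower-right, lower-left,
    plus one background channel. Return the peak (row, col) per corner."""
    assert output.shape[0] == 5
    corners = []
    for channel in output[:4]:  # the 5th channel is background, skip it
        row, col = np.unravel_index(np.argmax(channel), channel.shape)
        corners.append((int(row), int(col)))
    return corners

# Toy output: peaks planted at the four corners of a 4 x 4 map.
out = np.zeros((5, 4, 4))
for ch, (r, c) in enumerate([(0, 0), (0, 3), (3, 3), (3, 0)]):
    out[ch, r, c] = 0.9
print(corners_from_output(out))  # [(0, 0), (0, 3), (3, 3), (3, 0)]
```

Because the channels are arranged in the preset corner order, the returned list is already ordered clockwise from the upper-left corner.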
S308: determine the corner locations in the image to be processed according to the corner location prediction feature map.
Specifically, the computer device may compare the pixel values of the pixels in the corner location prediction feature map and select as corners the pixels whose values satisfy a corner selection condition, thereby obtaining the corner locations. The corner selection condition may be the top-ranked preset number of pixels when sorted by pixel value, where the preset number equals the number of corners; this condition suits the scenario where all corners are predicted from a single feature map. Alternatively, the corner selection condition may be the pixel with the maximum pixel value; this condition suits the scenario where one corner is predicted per feature map.
In one embodiment, a corner location may be the position of a single pixel or the positions of multiple pixels. For example, if pixel Q has the maximum pixel value in the feature map, that point may be determined as the corner; the corner location may then be the position of that pixel alone, or the positions of multiple pixels obtained by extending outward from that pixel as a reference point.
S310: locate the certificate image region in the image to be processed based on the corner locations.
Specifically, the computer device may connect the determined corner locations in sequence according to the arrangement order of the corresponding corners, enclosing a closed polygon. The image region enclosed by the closed polygon is the certificate image region.
Fig. 7 is a schematic diagram of locating the certificate image region in the image to be processed based on the corner locations in one embodiment. Referring to Fig. 7, the computer device has determined four corners in the image to be processed, arranged in the order 701, 702, 703 and 704. The computer device may then connect position 701 to position 702, position 702 to position 703, and position 703 to position 704, obtaining the closed quadrilateral 710, which is the certificate image region.
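The connect-in-order step of Fig. 7 can be sketched as follows: a hypothetical shoelace-formula helper computes the area enclosed by the closed quadrilateral formed from four ordered corners. The corner coordinates are invented for illustration.

```python
def shoelace_area(corners):
    """Area of the closed polygon formed by connecting `corners` in order.

    `corners` is a list of (x, y) points in arrangement order (e.g.
    top-left, top-right, bottom-right, bottom-left); the last point is
    implicitly connected back to the first to close the polygon.
    """
    area = 0.0
    n = len(corners)
    for k in range(n):
        x1, y1 = corners[k]
        x2, y2 = corners[(k + 1) % n]  # wrap around to close the polygon
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Four hypothetical corner locations of a certificate region.
quad = [(10, 10), (110, 12), (108, 70), (9, 68)]
print(shoelace_area(quad))
```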
With the above image processing method, after an image to be processed is obtained, it can be input into the image processing model for certificate corner feature extraction; the extracted certificate corner features are then processed by the image processing model to generate a corner location prediction feature map corresponding to the image to be processed. Since each pixel in the resulting prediction feature map has a pixel value indicating the probability of belonging to a certificate corner, and corresponds to a pixel in the image to be processed, whether each pixel is a certificate corner can be determined from the pixel values in the prediction feature map. The corner locations in the image to be processed can thus be determined and the certificate image region located, improving the accuracy of locating the certificate region in an image.
In one embodiment, S304 includes: inputting the image to be processed into the image processing model, and extracting certificate corner features from the image layer by layer through the multi-layer neural network of a densely connected network in the image processing model; where the output of the densely connected network merges the outputs of the neural network layers comprised in the densely connected network.
A densely connected network (Densely Connected Network) is a network structure that achieves feature reuse by adding the outputs of preceding layers to the input of each layer. Specifically, a densely connected network comprises a multi-layer neural network in which the input of each layer includes not only the output of the adjacent preceding layer but also the outputs of earlier layers and/or the input of the first layer. The output of the densely connected network is the output of its last layer; that is, the output of the densely connected network merges the outputs of the multiple layers it comprises. A densely connected network can therefore better fuse the feature information extracted at multiple layers, and such rich feature information is more conducive to predicting certificate corner locations. It also allows the output feature map of each layer to remain low-dimensional, reducing the number of network parameters and improving forward speed while enhancing the network's representation ability.
In a specific embodiment, the input of each layer in the densely connected network includes not only the output of the adjacent preceding layer but also the outputs of all preceding layers and the input of the first layer; that is, the output of the densely connected network merges the outputs of all its layers, fusing the feature information extracted at every layer.
In one embodiment, inputting the image to be processed into the image processing model and extracting certificate corner features layer by layer through the multi-layer neural network of the densely connected network comprises: inputting the image to be processed into the image processing model; taking each layer of the densely connected network in turn as the current layer; splicing the outputs of the layers preceding the current layer with the input of the first layer of the densely connected network to obtain the combined input of the current layer; processing the combined input with the current layer to obtain its output, until the output of the last layer of the densely connected network is obtained; and taking the output of the last layer as the output of the densely connected network.
It can be understood that the image processing model comprises multiple neural network layers arranged in sequence. After the computer device inputs the image to be processed into the image processing model, each layer in turn processes the output of the preceding layer and passes its result to the next layer for further processing. The image processing model may include one or more densely connected networks; a densely connected network may include one or more neural network layers; and a neural network layer may include one or more network layers, such as convolutional layers, normalization layers or pooling layers.
Specifically, when the data flowing through the image processing model reaches one of its densely connected networks (denoted D1), the input of the first layer of D1 (denoted S1) is the data transferred in (denoted x1). S1 processes x1 to obtain its output (denoted y1). The second layer of D1 (denoted S2) then takes x1 and y1 together as input and processes them to obtain its output (denoted y2). The third layer (denoted S3) takes x1, y1 and y2 together as input, and so on, until the last layer of D1 (denoted Sn) takes x1 and the outputs of the preceding n-1 layers together as input and produces the output of D1 (denoted yn). The specific data flow is shown in Fig. 8. The input of the first layer of the densely connected network and the outputs of the other layers jointly serve as the input of a given layer by splicing the inputs and outputs along the feature channel dimension.
It should be noted that, assuming the size of the image to be processed is H*W, when it is input into the image processing model the input is a color-channel feature map of size 3*H*W (the feature channels are the color channels, so the number of feature channels is 3). As this feature map passes through the network layers of the image processing model, the extracted image features keep changing; that is, the number of feature channels N of the feature maps N*H*W output by the network layers keeps changing. The image resolution (H*W) of the feature maps may also change.
To illustrate, assume the first layer has 32 input feature channels and outputs 32 feature channels. The input of the second layer is then the splice of the first layer's input and output, i.e. 32+32=64 channels. Assuming the second layer also outputs 32 channels, the input of the third layer is the splice of the first layer's input, the first layer's output and the second layer's output, i.e. 32+32+32=96 channels, and so on. Since the input of each subsequent layer accumulates the outputs of all preceding layers along the feature channel dimension, the number of output channels of each layer can be set small, reducing network parameters and improving the network's computation speed.
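The channel arithmetic above can be captured in a few lines. This hypothetical helper merely tracks how many feature channels each layer of a dense block receives, under the stated assumption that every layer outputs 32 ("growth") channels.

```python
def dense_block_channels(in_ch=32, growth=32, num_layers=3):
    """Input channel count of each layer in a dense block where every layer
    outputs `growth` channels and receives the channel-wise splice of the
    block input and all preceding layer outputs."""
    inputs = []
    total = in_ch
    for _ in range(num_layers):
        inputs.append(total)   # channels spliced into this layer's input
        total += growth        # this layer contributes `growth` more channels
    return inputs

print(dense_block_channels())  # -> [32, 64, 96], matching the 32/64/96 example
```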
In one embodiment, there is more than one densely connected network, and each densely connected network contains more than one convolutional layer. The image processing method further includes: in the first preset number of densely connected networks in network order, for the first preset number of convolutional layers in layer order, feeding the output of the convolutional layer into a batch normalization layer and an instance normalization layer in parallel to obtain a batch-normalized output and an instance-normalized output; and splicing the batch-normalized output and the instance-normalized output as the input of the next layer adjacent to the parallel batch normalization and instance normalization layers.
A convolutional layer is a network layer that contains multiple convolution kernels and performs convolution operations on the input data. A normalization layer is a network layer that constrains the data to be processed within a certain range. Normalization makes subsequent data processing more convenient and accelerates convergence when the model runs.
In one embodiment, certificate corner features include certificate corner content features and certificate corner appearance features. Certificate corner content features describe the content contained in the region around a certificate corner; certificate corner appearance features describe the shape and shape distribution of a certificate corner. The normalization layers include a batch normalization (Batch Normalization, BN) layer and an instance normalization (Instance Normalization, IN) layer. A batch normalization layer normalizes a batch of samples (inputs) and more readily extracts image appearance features; an instance normalization layer normalizes a single sample (input) and more readily extracts image content features.
It can be understood that, under normal circumstances, a convolutional layer is usually followed by a normalization layer, and that the shallow features of a CNN tend to capture image appearance information while the high-level features tend to capture image content information. Therefore, by connecting a batch normalization layer and an instance normalization layer in parallel after the first preset number of convolutional layers in the first preset number of densely connected networks in the image processing model, the model both retains the shallow image appearance features and does not affect the capture of image content features at the high levels. Here, shallow refers to network layers near the front of the model and high-level refers to network layers near the back: the earlier the order, the shallower the level; the later the order, the higher the level.
Specifically, in the first preset number of densely connected networks in network order, for the first preset number of convolutional layers in layer order, the computer device may feed the output of the convolutional layer into a batch normalization layer and an instance normalization layer in parallel, obtaining a batch-normalized output and an instance-normalized output; these are spliced along the feature channel dimension as the input of the next layer adjacent to the parallel batch normalization and instance normalization layers. For the later convolutional layers in those networks, and for the convolutional layers in the densely connected networks later in network order, the computer device may feed the output of the convolutional layer directly into a batch normalization layer to obtain a batch-normalized output as the input of the next layer adjacent to the batch normalization layer. For example, assuming the image processing model includes four densely connected networks, the first preset number of densely connected networks may be the first two, and the later densely connected networks are the remaining two; within the first preset number of densely connected networks, the first preset number of convolutional layers in layer order may be the first convolutional layer.
For example, one of the densely connected networks earlier in network order includes more than one convolutional layer. A batch normalization layer and an instance normalization layer are connected in parallel after the first convolutional layer, and both are then jointly connected to the second convolutional layer; after the second convolutional layer only a batch normalization layer is connected, which is followed by the third convolutional layer; subsequent convolutional layers are likewise followed only by batch normalization layers.
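The parallel BN/IN splice described above might look like the following numpy sketch. It uses plain per-axis normalization without the learned scale/shift parameters of real BN/IN layers, which is a simplifying assumption; the function names are hypothetical.

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Normalize each channel using statistics over the whole batch (N, H, W).
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def instance_norm(x, eps=1e-5):
    # Normalize each channel of each sample independently (H, W statistics).
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def parallel_bn_in(x):
    """Feed a convolutional layer's output through BN and IN in parallel and
    splice the two results along the feature-channel axis."""
    return np.concatenate([batch_norm(x), instance_norm(x)], axis=1)

x = np.random.randn(4, 32, 16, 16)  # N, C, H, W
print(parallel_bn_in(x).shape)      # channel count doubles: (4, 64, 16, 16)
```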
In a specific embodiment, the image processing model includes four densely connected networks. The first consists of an individual convolutional layer followed by a batch normalization layer and an instance normalization layer connected in parallel. The second to fourth densely connected networks are Denseblock structures: the first Denseblock structure includes 2 Bottleneck units, the second includes 4 Bottleneck units, and the third includes 4 Bottleneck units. Continuing with Fig. 8, a Bottleneck unit network structure is shown, comprising two convolutional layers: the first contains 128 1*1 convolution kernels and the second contains 32 3*3 convolution kernels. After the input and output of a Bottleneck unit are spliced along the feature channel dimension, they jointly serve as the input of the next Bottleneck unit. It can be understood that a convolutional layer is usually followed by a normalization layer; the normalization layers are not shown in the Bottleneck unit structure. Specifically, however, a batch normalization layer and an instance normalization layer are connected in parallel after the first convolutional layer of the Bottleneck unit; after the outputs of these two normalization layers are spliced along the feature channel dimension, they jointly serve as the input of the second convolutional layer, and a batch normalization layer is connected after the second convolutional layer. Under normal circumstances, connecting a batch normalization layer and an instance normalization layer in parallel after the first convolutional layer of the first two Denseblock structures of the image processing network meets the demand; however, the present application does not limit the number of convolutional layers followed by parallel batch and instance normalization layers, as long as the shallow appearance information is retained and the high-level capture of content information is not affected.
In this embodiment, instance normalization layers are added after the first few convolutional layers of the image processing model, so that the model both retains the shallow appearance information and does not affect the high-level capture of content information. This improves the feature representation ability of the image processing model, improves the accuracy and effectiveness of its feature extraction, and further improves the accuracy of corner prediction.
In the above embodiments, densely connected networks are used in the image processing model. Since a densely connected network can better fuse the feature information extracted at all of its layers, the accuracy and effectiveness of the model's feature extraction can be improved; moreover, the number of output channels of each layer can be kept small, reducing the number of network parameters and improving forward speed while enhancing the network's representation ability.
In one embodiment, S308 includes: predicting a corner location from the corner location prediction feature map; selecting a reference point location within a preset neighborhood of the predicted corner location; and, when the pixel value difference between the predicted corner location and the reference point location is less than a preset difference, shifting the predicted corner location toward the reference point location to obtain the corrected corner location.
It can be understood that, when the computer device processes the image to be processed through the image processing model to obtain the corner location prediction feature map and then predicts the corner location, errors may arise. In this embodiment, correcting the predicted corner location reduces the prediction error.
Here, the predicted corner location is the corner location determined from the pixel values of the pixels in the corner location prediction feature map. The reference point location is a reference position, selected from the preset neighborhood of the predicted corner location, used to judge how the predicted corner deviates. The computer device can determine whether the predicted corner location needs to be corrected based on the magnitude of the pixel value difference between the predicted corner location and the reference point location.
Specifically, after determining the predicted corner location and the reference point location, the computer device calculates the pixel value difference between the two locations and compares it with the preset difference. When the calculated difference reaches the preset difference, the prediction error is considered to be within an acceptable range and the predicted corner location need not be corrected. When the calculated difference is less than the preset difference, the prediction error is considered to exceed the acceptable range and the predicted corner location needs to be corrected. Further, when the computer device judges that the predicted corner location should be corrected, it shifts the predicted corner location a preset distance toward the reference point location to obtain the corrected corner location.
Specifically, the predicted corner location may be the position of the pixel with the maximum pixel value in the corner location prediction feature map, and the reference point location may be the position of the pixel with the second-largest pixel value among the four neighbors of the predicted corner location, i.e. the pixel positions above, below, to the left of and to the right of the predicted corner location. The preset offset distance is an empirical value obtained through extensive experiments.
In a specific embodiment, the formulas for correcting the predicted corner location to obtain the corrected corner location are as follows:
(i, j) = argmax(F)
(i', j') = argmax_Ω(F)
(i'', j'') = (i, j) offset by (Δx, Δy) toward (i', j'), if F(i, j) − F(i', j') < thr; otherwise (i'', j'') = (i, j)
x = start + j'' × stride
y = start + i'' × stride
start = stride/2 − 0.5    (1)
Here, F is the position prediction feature map (H*W) output by the image processing model for a given corner; (i, j) is the pixel position of the maximum pixel value on that map, and F(i, j) is the maximum pixel value; (i', j') is the pixel position of the second-largest pixel value among the four neighbors of the maximum, and F(i', j') is that second-largest pixel value; thr is the preset difference. stride is the image resolution ratio between the input image to be processed and the feature map containing the maximum pixel value, and start is the corresponding offset value. (Δx, Δy) is the preset offset distance, and (i'', j'') is the pixel position of the maximum pixel value after correction. (x, y) is the pixel position in the input image to be processed, i.e. the corner location located in the image to be processed. thr, the preset offset distances Δx and Δy, and the relationship between start and stride are empirical values obtained through extensive experiments, for example thr = 0.23, Δx = Δy = 0.5 and start = stride/2 − 0.5.
It can be understood that, since the four neighbors of the maximum pixel value are the four pixels above, below, to the left of and to the right of a pixel, the offset is applied along only one of Δx and Δy. That is:
(i ", j ")=(i+ Δ x, j), (i- Δ x, j), (i, j+ Δ y), (i, j- Δ y) } (2)
It can be understood that, when the computer device processes the image to be processed through the image processing model to obtain the corner location prediction feature map, if the image resolution is reduced during processing, the prediction error increases. In such a scenario, correcting the predicted corner location is all the more necessary.
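Under the stated constants (thr = 0.23, Δx = Δy = 0.5, start = stride/2 − 0.5), formulas (1) and (2) could be implemented roughly as below. The function name and the toy heatmap are assumptions for illustration, not the original implementation.

```python
import numpy as np

def correct_corner(F, stride, thr=0.23, delta=0.5):
    """Sub-pixel correction of one predicted corner per formulas (1)-(2).

    F is one H*W position heatmap; stride is the resolution ratio between
    the input image and the heatmap; thr and delta (= dx = dy) are the
    empirical constants mentioned in the text.
    """
    i, j = np.unravel_index(np.argmax(F), F.shape)
    h, w = F.shape
    # Second-largest value among the four neighbours of the maximum.
    neighbours = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    neighbours = [(r, c) for r, c in neighbours if 0 <= r < h and 0 <= c < w]
    i2, j2 = max(neighbours, key=lambda rc: F[rc])
    fi, fj = float(i), float(j)
    if F[i, j] - F[i2, j2] < thr:
        # Shift half a pixel toward the reference point along one axis only.
        fi += delta * np.sign(i2 - i)
        fj += delta * np.sign(j2 - j)
    start = stride / 2 - 0.5
    return start + fj * stride, start + fi * stride  # (x, y) in input image

F = np.zeros((8, 8))
F[3, 4] = 0.9
F[3, 5] = 0.8   # close runner-up -> shift right by half a heatmap pixel
print(correct_corner(F, stride=4))  # -> (19.5, 13.5)
```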
In the above embodiment, a corner location is predicted from the corner location prediction feature map and then corrected, reducing the prediction error and further improving the accuracy of corner prediction.
In one embodiment, processing the extracted certificate corner features through the image processing model to generate a corner location prediction feature map corresponding to the image to be processed comprises: processing the extracted certificate corner features through the image processing model and generating, through parallel output branches, the certificate image type and the corner location prediction feature map corresponding to the image to be processed. The image processing method further includes: cropping the certificate image from the image to be processed according to the certificate image region; and performing certificate recognition by combining the certificate image type and the certificate image.
It can be understood that, when different classes of processing results can be obtained from the certificate corner features extracted by the image processing model, multiple parallel output branches can be provided for the model, each outputting a different type of processing result. In this embodiment, the image processing model is provided with a parallel output branch for predicting the certificate image type from the certificate corner features, and an output branch for predicting the corner locations from the certificate corner features.
Specifically, the image processing model may be trained with image samples of multiple certificate image types, yielding a multi-class output branch. The output of the multi-class branch may be a probability vector whose element values indicate the probability that the certificate contained in the input image belongs to each certificate image type.
To illustrate, assume that when training the image processing model the sample images include images containing the front of an identity card, images containing the back of an identity card, and images containing neither the front nor the back; the certificate image types are then three: identity card front, identity card back and non-target image. After training, the certificate image types the model can recognize are identity card front, identity card back and non-target image, and the output of the multi-class branch is a probability vector of size 1*3, as shown in Fig. 4, for example (0.1, 0.2, 0.7).
Further, after obtaining the certificate image type of the certificate contained in the image to be processed and locating the certificate image region, the computer device may crop the certificate image from the image to be processed according to the certificate image region, apply an affine transformation to the certificate image, and then perform subsequent character recognition in combination with the certificate image type, thereby carrying out certificate recognition.
In one embodiment, the image processing model may also have more than one branch outputting corner location prediction feature maps, i.e. one corner location prediction branch per certificate image type. Note that in certificate image type prediction the prediction results include the non-target image type, whereas the branches outputting corner location prediction feature maps do not include a branch for non-target images. That is, if the vector output by the certificate image type branch has N+1 elements, the number of branches outputting corner location prediction feature maps is N.
For example, continuing with Fig. 4, assume that after training the image processing model can recognize the certificate image types identity card front, identity card back and non-target image. The image processing model then includes three output branches: one outputs the certificate image type prediction result, one outputs the corner location prediction feature map for the identity card front, and one outputs the corner location prediction feature map for the identity card back. If the input image to be processed contains the front of an identity card but not the back, then in the corner location prediction feature map output for the identity card back the pixel values of the first four channels are 0 and the pixel value of the last, background channel is 1. If the input image contains both the front and the back of an identity card, then since both sides are present, one output branch outputs the corner location prediction feature map for the front, predicting the front corners, and the other outputs the corner location prediction feature map for the back, predicting the back corners.
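The branch-count relationship above (N+1 classification outputs versus N heatmap branches, each with four corner channels plus one background channel) can be sketched as a hypothetical shape helper; the function name and dimensions are invented for illustration.

```python
def branch_shapes(num_types, h, w):
    """Output shapes of a hypothetical multi-branch model: one (N+1)-way
    classification vector (N certificate types plus non-target), and one
    5-channel heatmap (4 corners + background) per certificate type."""
    cls = (num_types + 1,)
    heatmaps = [(5, h, w) for _ in range(num_types)]
    return cls, heatmaps

# Two certificate types: identity card front and back (the Fig. 4 example).
cls, hms = branch_shapes(2, 64, 64)
print(cls, hms)  # -> (3,) [(5, 64, 64), (5, 64, 64)]
```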
In the above embodiment, the image processing model may further include an output branch for predicting the certificate image type, so that after the certificate image region is located the certificate image can be recognized in combination with the certificate image type, improving the efficiency of certificate recognition.
In one embodiment, the image processing method further includes: collecting target images containing certificate images as training samples of the image processing model; generating the corner location feature map of each training sample as the corresponding training label; inputting the training samples into the image processing model to obtain training outputs; and training the image processing model by constructing a loss function from the training outputs and the training labels.
It should be noted that the image processing model involved in the embodiments of the present application is obtained through supervised training. By designing the model's training inputs (the training samples) and the corresponding training labels, supervised training enables the image processing model to learn the capabilities expected of it.
Specifically, the computer device collects target images containing certificate images as training samples of the image processing model. The certificate images may be certificate images of various certificate types, and may include certificate front images and certificate back images. Each target image may contain one certificate image or more than one; multiple certificate images contained in a target image may belong to the same certificate type or to different certificate types.
For example, a target image may contain only an identity card front image; it may contain both an identity card front image and an identity card back image; or it may contain both an identity card front image and a driver's license front image, and so on.
In one embodiment, generating the corner location feature map of each training sample as the corresponding training label comprises: for each certificate corner sample in each training sample, generating the position feature map of that corner sample according to a preset distribution centered on the position of the corner sample in the training sample to which it belongs; and generating the corner location feature map of each training sample from the position feature maps of its certificate corner samples, as the corresponding training label of that training sample.
Specifically, after collecting the training samples, the computer device may, for each training sample, determine the certificate corner samples it contains, then generate the position feature map of each certificate corner sample according to a preset distribution centered on the position of that corner sample in the training sample; and then generate the corner location feature map of the training sample from the position feature maps of its certificate corner samples, as the corresponding training label. The preset distribution may be a Gaussian distribution.
The computer device may generate a single position feature map (that is, the corner location feature map) for all the certificate corner samples included in one training sample. In that case, each corner location region in the corner location feature map follows a distribution that radiates outward from the corresponding corner point. Alternatively, the computer device may generate one position feature map per certificate corner sample included in a training sample, and then concatenate these position feature maps to obtain the corner location feature map. In that case, each position feature map contains only one corner point, with a distribution radiating outward from that corner point.
As an example, suppose the certificate is an identity card. For the four corner points of the identity card (top-left, top-right, bottom-right, and bottom-left), the computer device may generate a corresponding corner position feature map that follows a Gaussian distribution centered on the position of each corner point with standard deviation sigma. The calculation formula is as follows:
g(x, y) = exp(-((x - cx)^2 + (y - cy)^2) / (2 * sigma^2))
Here, (x, y) is the position of any point in the position feature map, g(x, y) is the value at point (x, y), and (cx, cy) is the position of the certificate corner point. A position feature map generated in this way is a Gaussian distribution centered on the corner position and radiating outward.
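The Gaussian position feature map described above can be sketched as follows; the map size and sigma value here are illustrative assumptions, and plain nested lists stand in for the actual tensor representation:

```python
import math

def gaussian_position_map(height, width, cx, cy, sigma):
    """Build a position feature map with a Gaussian peak at corner (cx, cy)."""
    heatmap = []
    for y in range(height):
        row = []
        for x in range(width):
            # g(x, y) = exp(-((x - cx)^2 + (y - cy)^2) / (2 * sigma^2))
            d2 = (x - cx) ** 2 + (y - cy) ** 2
            row.append(math.exp(-d2 / (2.0 * sigma ** 2)))
        heatmap.append(row)
    return heatmap

# Example: a 9x9 map with the corner at (4, 4) and sigma = 1.5.
hm = gaussian_position_map(9, 9, 4, 4, 1.5)
```

The value is exactly 1 at the corner position and decays radially, matching the "radiating outward from the corner" distribution described above.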
This embodiment provides a way of constructing training labels, and designs the training label itself as an image, so that what the image processing model learns is a mapping from image space to image space. The learning does not involve a qualitative leap across feature spaces (for example, from image space to a numerical space). Such a model is easier to train and more readily learns the core features, so that the model performs better, with stronger robustness and generalization.
In one embodiment, generating the corner location feature map of each training sample from the position feature maps of the certificate corner samples in that sample, as the corresponding training label, includes: generating a background channel feature map of each training sample from the position feature maps of the certificate corner samples in that sample; and concatenating the position feature maps of the certificate corner samples in each training sample in a preset certificate corner order, then further concatenating them with the background channel feature map, to obtain the corner location feature map of each training sample as the corresponding training label.
Specifically, since the computer device generates a separate position feature map for each corner point, the constraint relationship between the corner points belonging to the same certificate is not yet embodied. The computer device may therefore generate a background channel feature map of each training sample from the position feature maps of the certificate corner samples in that sample, and use the background channel feature map to strengthen the constraint between the corner points belonging to one certificate.
In the background channel feature map, the pixel values are opposite to those in the position feature maps of the certificate corner samples: the pixel value is smallest at the positions of the certificate corner samples and gradually increases with distance from them. Specifically, the pixel value of a pixel in the background channel feature map may be the difference between N and the maximum pixel value at the corresponding pixel location across the position feature maps of the certificate corner samples, where N is the maximum possible pixel value in a position feature map. For example, if the pixel values of the position feature maps of the certificate corner samples at pixel location (X, Y) are p1, p2, p3, and p4, then the pixel value of that pixel location in the background channel feature map is N - max(p1, p2, p3, p4).
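The background channel computation above can be sketched as follows; N = 1.0 is an assumption that matches Gaussian position maps whose peak value is 1:

```python
def background_channel(corner_maps, n_max=1.0):
    """Background value at each pixel: N minus the maximum over the corner
    position feature maps at that pixel (N = maximum possible map value)."""
    height = len(corner_maps[0])
    width = len(corner_maps[0][0])
    return [[n_max - max(m[y][x] for m in corner_maps) for x in range(width)]
            for y in range(height)]

# Four toy 2x2 corner position maps (values p1..p4 at each pixel location).
maps = [
    [[0.1, 0.0], [0.0, 0.9]],
    [[0.3, 0.0], [0.0, 0.2]],
    [[0.0, 0.5], [0.0, 0.0]],
    [[0.0, 0.0], [0.4, 0.0]],
]
bg = background_channel(maps)
```

The background value is lowest where some corner map is strongest, which is the "opposite" relationship the text describes.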
In this embodiment, adding the background channel feature map strengthens the constraint between the corner points, making the design of the training labels more reasonable, so that an image processing model with high prediction accuracy can be trained.
Further, after obtaining the training samples and the training label corresponding to each training sample, the computer device performs supervised training of the image processing model using the training samples.
Specifically, the computer device may configure the network structure of the output branch of the image processing model so that the output branch can output a corner location predicted feature map based on the input training sample, construct a loss function based on the output corner location predicted feature map and the training label, and train the image processing model with the goal of minimizing the loss function.
In a specific embodiment, the loss function may be defined using the MSE (Mean Squared Error) loss, as shown in the following formula:
Lreg = (1/n) * Σ_i (g_i - ĝ_i)^2
Here, n is the number of pixels included in the corner location feature map, g_i is the pixel value of a pixel in the corner location predicted feature map, and ĝ_i is the pixel value of the corresponding pixel in the corner location feature map. It can be understood that the corner location feature map and the corner location predicted feature map are the desired output and the actual output, respectively, and have the same size.
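A minimal sketch of the MSE loss over flattened feature maps, using plain lists for illustration:

```python
def mse_loss(pred, target):
    """Mean squared error between a predicted corner location feature map
    and its ground-truth label, both flattened to lists of pixel values."""
    assert len(pred) == len(target)
    n = len(pred)
    return sum((g - g_hat) ** 2 for g, g_hat in zip(pred, target)) / n

# Three-pixel toy example: prediction vs. label.
loss = mse_loss([0.0, 0.5, 1.0], [0.0, 1.0, 1.0])
```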
As an example, suppose the certificate is an identity card and the certificate image types recognizable by the image processing model are: identity card front, identity card back, and non-target image. If the image resolution of the feature map is H*W, then the number of feature channels of the corner location feature map is 5, and n = 2*5*H*W.
In other embodiments, the loss function may also be defined using the SmoothL1 loss or the Focal loss.
In the above embodiments, what the image processing model learns is a mapping from image space to image space, which does not involve a qualitative leap across feature spaces (for example, from image space to a numerical space). Such a model is easier to train and more readily learns the core features, so that the model performs better, with stronger robustness and generalization.
In one embodiment, generating the corner location feature map of each training sample as the corresponding training label includes: generating the corner location feature map of each training sample as the corresponding first training label, and using the certificate image type to which the certificate image in each training sample belongs as the corresponding second training label. Inputting the training samples into the image processing model to obtain the training output includes: inputting the training samples into the image processing model, and obtaining, through parallel output branches, a predicted certificate image type and a corner sample position predicted feature map corresponding to each training sample. Constructing a loss function from the training output and the training labels to train the image processing model includes: constructing a first loss function from the corner sample position predicted feature map and the first training label, constructing a second loss function from the predicted certificate image type and the second training label, and training the image processing model by combining the first loss function and the second loss function.
It can be understood that the image processing model may include more than one output branch, in which case one training sample may have more than one training label. For each output branch, the computer device may construct a corresponding pair of training sample and training label to train that output branch.
Specifically, the computer device may generate the corner location feature map of each training sample as the corresponding first training label, and train the corner location prediction branch using the training samples and the corresponding first training labels; it may use the certificate image type to which the certificate image in each training sample belongs as the corresponding second training label, and train the certificate image type prediction branch using the training samples and the corresponding second training labels. Different output branches correspond to different loss functions.
In a specific embodiment, the loss function of the branch that predicts the certificate image type may be defined using the softmax loss, as shown in the following formulas:
a_j = exp(z_j) / Σ_{k=1..M} exp(z_k)
Lcls = -log(a_i)
Here, a_j is the softmax probability that the image input to the image processing model belongs to the j-th certificate image type, a_i is the softmax probability of the certificate image type to which the input image actually belongs, M is the number of certificate image types (including the non-target image type), and z_j is the feature value entering the loss layer.
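A minimal sketch of the softmax probabilities and the classification loss; the three-class feature values below are illustrative:

```python
import math

def softmax(z):
    """Softmax probabilities from the feature values entering the loss layer."""
    m = max(z)                       # subtract max for numerical stability
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def softmax_loss(z, true_class):
    """Cross-entropy: negative log softmax probability of the true class."""
    return -math.log(softmax(z)[true_class])

# Three certificate image types; class 0 is the actual type.
probs = softmax([2.0, 1.0, 0.0])
loss = softmax_loss([2.0, 1.0, 0.0], 0)
```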
Then, the overall training loss function of the image processing model is shown in the following formula:
L=Lcls+λLreg (6)
Since the output branch that predicts the certificate image type is relatively simple and is mainly used to assist the corner location output branch in corner prediction, the value of λ may specifically be 10.
In the above embodiment, the image processing model is trained to learn two classes of capabilities, so that the output branch predicting the certificate image type assists the output branch predicting the corner locations in corner prediction. In this way, a more accurate corner prediction result can be obtained when the image processing model is used.
In one embodiment, the image processing method further includes: determining the actual corner sample positions in the training sample; determining the predicted corner sample positions in the corner sample position predicted feature map; and constructing a third loss function from the actual corner sample positions and the predicted corner sample positions. Training the image processing model by combining the first loss function and the second loss function then includes: training the image processing model by combining the first loss function, the second loss function, and the third loss function.
It can be understood that, since the prediction of the image processing model inevitably contains error, a loss term for the prediction error can be added to the loss function when training the image processing model, so that the model is trained to reduce the prediction error as much as possible.
In a specific embodiment, the loss for the prediction error may be defined using the SmoothL1 loss, as shown in the following formula:
Loffset=| xp-xg|+|yp-yg| (7)
Here, x_p is the x-coordinate of the predicted corner point, x_g is the x-coordinate of the actual corner point, that is, the x-coordinate of the corner point in the training label; y_p is the y-coordinate of the predicted corner point, and y_g is the y-coordinate of the actual corner point, that is, the y-coordinate of the corner point in the training label.
Then, the overall training loss function of the image processing model is shown in the following formula:
L=Lcls+λLreg+βLoffset (8)
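A sketch of combining the three loss terms as in formula (8); the text suggests λ = 10, while the β value and the example term values below are illustrative assumptions:

```python
def total_loss(l_cls, l_reg, l_offset, lam=10.0, beta=1.0):
    """Overall training loss: classification loss plus lambda times the
    corner heatmap regression (MSE) loss plus beta times the corner
    position (offset) loss. lambda = 10 follows the text; beta is assumed."""
    return l_cls + lam * l_reg + beta * l_offset

# Illustrative per-branch loss values.
L = total_loss(0.2, 0.05, 0.1)
```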
In this embodiment, a prediction-error term is added to the loss function used to train the image processing model, so that the model is trained to reduce the prediction error as much as possible, and a more accurate corner prediction result can be obtained when the image processing model is used.
In a specific embodiment, the embodiments provided in this application are suitable for certificates with a fixed layout format, such as identity cards, vehicle licenses, or driver's licenses. First, because the neighborhood regions of the corner points of a certificate with a fixed layout format have fixed features, the image processing model can, through sample learning, learn the features of these corner neighborhoods and thereby predict the certificate corner points, so as to locate the certificate image region in the image to be processed. Second, during training the image processing model uses the certificate corner location feature map as the training label, so what it learns is a mapping from image space (the feature space of the image to be processed) to image space (the feature space of the certificate corner location feature map). Compared with mappings from image space to a numerical space and the like, this learning process is easier, and the core features can be learned relatively easily without requiring a large number of training samples. Third, the model structure of the image processing model uses dense connection networks, in which the input of each neural network layer is the concatenation, along the feature channels, of the outputs of all preceding layers. This enables the image processing model to better fuse the features extracted by all the layers, while allowing the number of output feature channels of each layer to remain small, thereby reducing the network parameter count and improving forward speed while improving the model's representational ability. Finally, in the first preset number of dense connection networks in network order, the first preset number of convolutional layers in each network are followed by a parallel batch normalization layer and instance normalization layer, so that the batch normalization layer captures the extracted image appearance features while the instance normalization layer captures the extracted image content features. The shallow appearance information can thus be better preserved without affecting the high-level content information, which enhances the generalization performance and robustness of the image processing model. In summary, the embodiments provided in this application can locate, in real time through a lightweight and fast image processing model, the region of an arbitrarily placed certificate image in the image to be processed, after which the certificate image can be extracted for subsequent recognition operations.
For example, in an automatic certificate review scenario, in the images including certificates that users or merchants upload or capture, the certificate is usually placed arbitrarily against some background. Using the embodiments provided in this application, the certificate image region in the image can be located quickly and accurately, and the certificate image can then be cropped according to the located region. This effectively shortens the subsequent processing flow and reduces the interference of background information, thereby improving the precision of certificate recognition.
In a further embodiment, consider a scenario in which the image to be processed has a high resolution while the certificate image region occupies a small proportion of it. Since a high-resolution image must be downsampled to a smaller size before being input to the image processing model, many image details are lost and the precision of certificate corner prediction drops sharply. In this case, cascaded image processing models can be used: a first image processing model predicts the certificate corner locations in the original image to be processed; because the certificate image region occupies a small proportion of the original image, the predicted certificate corner locations may deviate considerably. The certificate image is then cropped according to the prediction of the first image processing model, and the cropped certificate image is input to a second image processing model that predicts the certificate corner locations within it, thereby correcting the localization result. The specific flow is shown in Fig. 9. The first and second image processing models may have the same model structure but different model parameters, since their training samples differ: the training samples of the first image processing model may be complex images with arbitrary backgrounds, while the training samples of the second image processing model are images with smaller background areas cropped from the training samples of the first image processing model.
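The two-stage cascade can be sketched as follows; the model interface (a callable returning four (x, y) corners), the margin, and the stub models are all assumptions for illustration:

```python
def cascade_locate(image, coarse_model, refine_model, margin=10):
    """Cascade sketch: `coarse_model` predicts corners on the full image
    (as a nested list of rows); a margin is cropped around that coarse
    prediction, `refine_model` predicts corners within the crop, and the
    refined corners are mapped back to original image coordinates."""
    corners = coarse_model(image)
    xs = [x for x, _ in corners]
    ys = [y for _, y in corners]
    # Crop a margin around the coarse prediction, clamped to the image.
    x0 = max(min(xs) - margin, 0)
    y0 = max(min(ys) - margin, 0)
    x1 = min(max(xs) + margin, len(image[0]))
    y1 = min(max(ys) + margin, len(image))
    crop = [row[x0:x1] for row in image[y0:y1]]
    refined = refine_model(crop)
    # Map refined corners back into original image coordinates.
    return [(x + x0, y + y0) for x, y in refined]

# Toy 20x20 image and stub models standing in for the two trained networks.
img = [[0] * 20 for _ in range(20)]
coarse = lambda im: [(5, 5), (15, 5), (15, 15), (5, 15)]
refine = lambda im: [(6, 6), (14, 6), (14, 14), (6, 14)]
result = cascade_locate(img, coarse, refine, margin=2)
```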
In general, the embodiments provided in this application can quickly and efficiently locate certificates of any angle and any size in an image, reducing the complexity of the subsequent text recognition module and improving the overall precision of certificate text recognition. On its own test set, the corner location positioning error of the embodiments provided in this application reaches an average positioning error on the order of 10^-4 to 10^-5, a significant localization effect. Here, the positioning error is the error under normalized pixel locations.
The specific test results are shown in Table 1 below:
Model | Positioning error | Classification accuracy (%)
Image processing model (identity card positioning) | 0.0000665 | 99.86
Image processing model (single-layer driving license positioning) | 0.000284 | 100
Cascaded image processing models (driving license positioning) | 0.000276 | 100
It should be understood that, although the steps in the flowcharts of the above embodiments are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in the above embodiments may include multiple sub-steps or stages, which are not necessarily executed at the same moment but may be executed at different times; the execution order of these sub-steps or stages is not necessarily sequential, and they may be executed in turn or alternately with at least part of the sub-steps or stages of other steps.
As shown in Fig. 10, in one embodiment, an image processing apparatus 1000 is provided. Referring to Fig. 10, the image processing apparatus 1000 includes: an acquisition module 1001, an extraction module 1002, a generation module 1003, a determination module 1004, and a localization module 1005.
The acquisition module 1001 is configured to obtain an image to be processed.
The extraction module 1002 is configured to input the image to be processed into an image processing model for certificate corner feature extraction.
The generation module 1003 is configured to process the extracted certificate corner features through the image processing model and generate a corner location predicted feature map corresponding to the image to be processed; the pixels in the corner location predicted feature map have pixel values indicating the probability of belonging to a certificate corner point, and correspond to the pixels in the image to be processed.
The determination module 1004 is configured to determine the corner locations in the image to be processed according to the corner location predicted feature map.
The localization module 1005 is configured to locate the certificate image region in the image to be processed based on the corner locations.
In one embodiment, the extraction module 1002 is further configured to input the image to be processed into the image processing model and perform certificate corner feature extraction on the image to be processed layer by layer through the multilayer neural networks of the dense connection networks in the image processing model; the output of a dense connection network fuses the outputs of the neural network layers included in that dense connection network.
In one embodiment, the extraction module 1002 is further configured to: input the image to be processed into the image processing model; take each neural network layer of a dense connection network in turn as the current layer; concatenate the outputs of the layers preceding the current layer in the dense connection network with the input of the first layer of the dense connection network, to obtain the comprehensive input of the current layer; process the comprehensive input through the current layer to obtain its output, until the output of the last layer of the dense connection network is obtained; and take the output of the last layer as the output of the dense connection network.
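The layer-by-layer concatenation described above can be sketched as follows; feature maps are simplified to flat lists and convolutional layers to callables, which are assumptions for illustration:

```python
def dense_block(x, layers):
    """Dense connection sketch: each layer receives the concatenation of
    the block input and every earlier layer's output; the output of the
    last layer is taken as the block's output."""
    features = [x]
    out = None
    for layer in layers:
        concat = [v for f in features for v in f]  # channel-wise concat
        out = layer(concat)
        features.append(out)
    return out

# Toy layer: maps any feature list to [sum of values, number of values].
layer = lambda f: [sum(f), float(len(f))]
y = dense_block([1.0, 2.0], [layer, layer, layer])
```

Note how each successive layer sees a longer concatenated input (2, then 4, then 6 values here), which is the series connection on feature channels described in the text.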
In one embodiment, there is more than one dense connection network, and each dense connection network includes more than one convolutional layer. The extraction module 1002 is further configured to, for the first preset number of dense connection networks in network order and the first preset number of convolutional layers in each of those networks: input the output of the convolutional layer into a parallel batch normalization layer and instance normalization layer, respectively, to obtain a batch normalized output and an instance normalized output; and concatenate the batch normalized output and the instance normalized output as the input of the next layer adjacent to the parallel batch normalization layer and instance normalization layer.
In one embodiment, the generation module 1003 is further configured to process the extracted certificate corner features through the image processing model and generate a position predicted feature map for each certificate corner point included in the image to be processed. The position predicted feature maps of the certificate corner points are arranged in a preset certificate corner order; the pixels in each certificate corner point's position predicted feature map have pixel values indicating the probability of belonging to the corresponding certificate corner point, and correspond to the pixels in the image to be processed.
In one embodiment, the determination module 1004 is further configured to: locate a predicted corner location in the corner location predicted feature map; choose a reference point location within a preset neighborhood of the predicted corner location; and, when the difference between the pixel values of the predicted corner location and the reference point location is less than a preset difference, shift the predicted corner location toward the reference point location to obtain the target corner location.
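The corner refinement step above can be sketched as follows; taking the strongest 8-neighbor as the reference point, and the threshold and shift step values, are assumptions for illustration:

```python
def refine_corner(heatmap, corner, diff_threshold=0.1, step=0.5):
    """Refinement sketch: pick the strongest neighbor of the predicted
    corner as the reference point; if the heatmap values are close
    (difference below `diff_threshold`), shift the corner toward the
    reference point by `step` of the offset."""
    x, y = corner
    h, w = len(heatmap), len(heatmap[0])
    neighbors = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                 if (dx, dy) != (0, 0)
                 and 0 <= x + dx < w and 0 <= y + dy < h]
    rx, ry = max(neighbors, key=lambda p: heatmap[p[1]][p[0]])
    if abs(heatmap[y][x] - heatmap[ry][rx]) < diff_threshold:
        return (x + step * (rx - x), y + step * (ry - y))
    return (float(x), float(y))

# 3x3 toy heatmap: peak at (1, 1), strong neighbor at (2, 1).
hm = [[0.0, 0.2, 0.0],
      [0.1, 0.9, 0.85],
      [0.0, 0.1, 0.0]]
target = refine_corner(hm, (1, 1))
```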
In one embodiment, the generation module 1003 is further configured to process the extracted certificate corner features through the image processing model and generate, through parallel output branches, a certificate image type and a corner location predicted feature map corresponding to the image to be processed. The image processing apparatus 1000 further includes a recognition module 1006 configured to crop the certificate image from the image to be processed according to the certificate image region, and to perform certificate recognition by combining the certificate image type and the certificate image.
As shown in Fig. 11, in one embodiment, the image processing apparatus 1000 further includes: a recognition module 1006 and a training module 1007.
The training module 1007 is configured to: collect target images including certificate images as training samples of the image processing model; generate the corner location feature map of each training sample as the corresponding training label; input the training samples into the image processing model to obtain the training output; and construct a loss function from the training output and the training labels to train the image processing model.
In one embodiment, the training module 1007 is further configured to: for each certificate corner sample in each training sample, generate a position feature map of the certificate corner sample according to a preset distribution mode, centered on the position of the certificate corner sample in the training sample to which it belongs; and generate the corner location feature map of each training sample from the position feature maps of the certificate corner samples in that sample, as the corresponding training label.
In one embodiment, the training module 1007 is further configured to: generate a background channel feature map of each training sample from the position feature maps of the certificate corner samples in that sample; and concatenate the position feature maps of the certificate corner samples in each training sample in a preset certificate corner order, then further concatenate them with the background channel feature map, to obtain the corner location feature map of each training sample as the corresponding training label.
In one embodiment, the training module 1007 is further configured to: generate the corner location feature map of each training sample as the corresponding first training label; use the certificate image type to which the certificate image in each training sample belongs as the corresponding second training label; input the training samples into the image processing model and obtain, through parallel output branches, a predicted certificate image type and a corner sample position predicted feature map corresponding to each training sample; construct a first loss function from the corner sample position predicted feature map and the first training label, and a second loss function from the predicted certificate image type and the second training label; and train the image processing model by combining the first loss function and the second loss function.
In one embodiment, the training module 1007 is further configured to: determine the actual corner sample positions in the training sample; determine the predicted corner sample positions in the corner sample position predicted feature map; construct a third loss function from the actual corner sample positions and the predicted corner sample positions; and train the image processing model by combining the first loss function, the second loss function, and the third loss function.
After obtaining the image to be processed, the above image processing apparatus 1000 can input it into the image processing model for certificate corner feature extraction, and then process the extracted certificate corner features through the image processing model to generate a corner location predicted feature map corresponding to the image to be processed. Since the pixels in the resulting corner predicted feature map have pixel values indicating the probability of belonging to a certificate corner point, and correspond to the pixels in the image to be processed, whether each pixel is a certificate corner point can be determined from the pixel values in the corner predicted feature map. The corner locations in the image to be processed can thus be determined, and the certificate image region can be located in the image to be processed, improving the accuracy of locating the certificate region in an image.
Fig. 12 shows an internal structure diagram of a computer device in one embodiment. The computer device may specifically be the terminal 110 or the server 120 in Fig. 1. As shown in Fig. 12, the computer device includes a processor, a memory, and a network interface connected via a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program which, when executed by the processor, causes the processor to implement the image processing method. The internal memory may also store a computer program which, when executed by the processor, causes the processor to perform the image processing method. Those skilled in the art can understand that the structure shown in Fig. 12 is only a block diagram of part of the structure relevant to the solution of this application, and does not constitute a limitation on the computer device to which the solution of this application is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, the image processing apparatus 1000 provided in this application may be implemented in the form of a computer program, and the computer program may run on the computer device shown in Fig. 12. The memory of the computer device may store the program modules constituting the image processing apparatus, for example, the acquisition module 1001, the extraction module 1002, the generation module 1003, the determination module 1004, and the localization module 1005 shown in Fig. 10. The computer program constituted by these program modules causes the processor to perform the steps of the image processing method of the embodiments of this application described in this specification.
For example, the computer device shown in Fig. 12 may perform, through the acquisition module 1001 in the image processing apparatus 1000 shown in Fig. 10, the step of obtaining an image to be processed; perform, through the extraction module 1002, the step of inputting the image to be processed into the image processing model for certificate corner feature extraction; perform, through the generation module 1003, the step of processing the extracted certificate corner features through the image processing model to generate a corner location predicted feature map corresponding to the image to be processed, wherein the pixels in the corner location predicted feature map have pixel values indicating the probability of belonging to a certificate corner point and correspond to the pixels in the image to be processed; perform, through the determination module 1004, the step of determining the corner locations in the image to be processed according to the corner location predicted feature map; and perform, through the localization module 1005, the step of locating the certificate image region in the image to be processed based on the corner locations.
In one embodiment, a computer device is provided, including a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the above image processing method. Here, the steps of the image processing method may be the steps in the image processing methods of the above embodiments.
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, causes the processor to perform the steps of the above image processing method. Here, the steps of the image processing method may be the steps in the image processing methods of the above embodiments.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by instructing relevant hardware through a computer program; the program can be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, any combination of these technical features that involves no contradiction shall be considered to be within the scope of this specification.
The above embodiments express only several implementations of this application, and their descriptions are relatively specific and detailed, but they shall not therefore be construed as limiting the scope of the patent application. It should be noted that a person of ordinary skill in the art may make various modifications and improvements without departing from the concept of this application, and these all fall within the protection scope of this application. Therefore, the protection scope of this patent application shall be subject to the appended claims.

Claims (15)

1. An image processing method, comprising:
obtaining an image to be processed;
inputting the image to be processed into an image processing model to perform certificate corner feature extraction;
processing the extracted certificate corner features through the image processing model to generate a corner location prediction feature map corresponding to the image to be processed, wherein each pixel in the corner location prediction feature map has a pixel value indicating the probability of belonging to a certificate corner and corresponds to a pixel in the image to be processed;
determining corner locations in the image to be processed according to the corner location prediction feature map; and
locating a certificate image region in the image to be processed based on the corner locations.
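The decoding step of claim 1 (heatmap in, corner coordinates out) can be sketched in a few lines. A minimal NumPy illustration, under the assumption that the model outputs one probability channel per corner and that each channel's argmax is taken as that corner's location; the channel order shown (top-left, top-right, bottom-right, bottom-left) is hypothetical:

```python
import numpy as np

def corners_from_heatmaps(heatmaps):
    # heatmaps: array of shape (num_corners, H, W); each channel is the
    # predicted probability map for one certificate corner.
    corners = []
    for hm in heatmaps:
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        corners.append((int(x), int(y)))
    return corners

# Toy prediction: one bright pixel per corner channel.
heat = np.zeros((4, 8, 8))
for ch, (y, x) in enumerate([(1, 1), (1, 6), (6, 6), (6, 1)]):
    heat[ch, y, x] = 0.9
print(corners_from_heatmaps(heat))  # [(1, 1), (6, 1), (6, 6), (1, 6)]
```

In practice the argmax would be followed by the sub-pixel refinement described in claim 6.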
2. The method according to claim 1, wherein inputting the image to be processed into the image processing model to perform certificate corner feature extraction comprises:
inputting the image to be processed into the image processing model, and performing certificate corner feature extraction on the image to be processed layer by layer through a multi-layer neural network of a densely connected network in the image processing model, wherein the output of the densely connected network fuses the outputs of the neural network layers comprised in the densely connected network.
3. The method according to claim 2, wherein performing certificate corner feature extraction on the image to be processed layer by layer through the multi-layer neural network of the densely connected network in the image processing model comprises:
inputting the image to be processed into the image processing model;
sequentially taking each neural network layer of the densely connected network as a current layer;
concatenating the outputs of the layers preceding the current layer in the densely connected network with the input of the first layer of the densely connected network, to obtain a composite input of the current layer;
processing the composite input through the current layer to obtain the output of the current layer, until the output of the last layer of the densely connected network is obtained; and
taking the output of the last layer as the output of the densely connected network.
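The layer-wise concatenation wiring described in claim 3 can be sketched as follows. The `make_layer` stand-in replaces a real convolutional layer with a trivial channel-collapsing function; everything here is a toy illustration of the dense-connection data flow, not the patent's actual network:

```python
import numpy as np

def make_layer(w):
    # Stand-in for a conv layer: collapse channels with weight w.
    return lambda t: t.sum(axis=0, keepdims=True) * w

def dense_block(x, layers):
    # Claim 3's wiring: each layer's "composite input" is the channel-wise
    # concatenation of the block's input and all preceding layers' outputs;
    # the block's output is the last layer's output.
    features = [x]
    out = x
    for layer in layers:
        out = layer(np.concatenate(features, axis=0))
        features.append(out)
    return out

x = np.ones((1, 2, 2))
out = dense_block(x, [make_layer(0.5), make_layer(0.25)])
print(out.shape, out[0, 0, 0])  # (1, 2, 2) 0.375
```

The second layer sees 2 channels (block input plus the first layer's output), which is the fusion that claim 2 refers to.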
4. The method according to claim 2, wherein the number of densely connected networks is more than one, each densely connected network comprises more than one convolutional layer, and the method further comprises:
in a preset number of densely connected networks that come first in network order, for the preset number of leading convolutional layers, feeding the output of each such convolutional layer into a batch normalization layer and an instance normalization layer in parallel, to obtain a batch-normalized output and an instance-normalized output; and
concatenating the batch-normalized output and the instance-normalized output as the input of the next layer adjacent to the parallel batch normalization layer and instance normalization layer.
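A small NumPy sketch of the parallel batch/instance normalization in claim 4 (a design reminiscent of IBN-style networks). The learnable affine scale/shift parameters of real normalization layers are omitted for brevity, and the half/half channel split is an illustrative choice:

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Normalize each channel over the batch and spatial dimensions.
    m = x.mean(axis=(0, 2, 3), keepdims=True)
    v = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - m) / np.sqrt(v + eps)

def instance_norm(x, eps=1e-5):
    # Normalize each (sample, channel) plane independently.
    m = x.mean(axis=(2, 3), keepdims=True)
    v = x.var(axis=(2, 3), keepdims=True)
    return (x - m) / np.sqrt(v + eps)

def parallel_bn_in(x):
    # Claim 4: run BN and IN on the conv output in parallel and
    # concatenate the two results along the channel axis.
    return np.concatenate([batch_norm(x), instance_norm(x)], axis=1)

x = np.random.randn(2, 3, 4, 4)
y = parallel_bn_in(x)
print(y.shape)  # (2, 6, 4, 4)
```

Instance normalization discards per-image style statistics (useful when certificate photos vary in lighting), while batch normalization keeps population statistics; concatenating both lets the next layer use either.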
5. The method according to claim 1, wherein the certificate corner features comprise certificate corner content features, and processing the extracted certificate corner features through the image processing model to generate the corner location prediction feature map corresponding to the image to be processed comprises:
processing the extracted certificate corner content features through the image processing model to generate a respective location prediction feature map for each certificate corner included in the image to be processed,
wherein the location prediction feature maps corresponding to the certificate corners are arranged in a preset certificate corner order, and each pixel in the location prediction feature map corresponding to a certificate corner has a pixel value indicating the probability of belonging to that certificate corner and corresponds to a pixel in the image to be processed.
6. The method according to claim 1, wherein determining the corner locations in the image to be processed according to the corner location prediction feature map comprises:
locating a predicted corner location in the corner location prediction feature map;
selecting a reference point location within a preset neighborhood of the predicted corner location; and
when the difference between the values at the predicted corner location and at the reference point location is less than a preset difference, shifting the predicted corner location toward the reference point location to obtain the corner location.
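Claim 6's refinement resembles the sub-pixel offset trick common in keypoint heatmap decoders. A hedged sketch, where the reference point is taken to be the strongest 4-neighbour of the peak, and the threshold and quarter-pixel step are illustrative assumptions, not values from the patent:

```python
import numpy as np

def refine_corner(hm, max_diff=0.1, step=0.25):
    # Take the argmax as the predicted corner, its strongest 4-neighbour
    # as the reference point; if the two probabilities differ by less
    # than max_diff, shift the corner a fraction of a pixel toward it.
    y, x = np.unravel_index(np.argmax(hm), hm.shape)
    best = None
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < hm.shape[0] and 0 <= nx < hm.shape[1]:
            if best is None or hm[ny, nx] > hm[best]:
                best = (ny, nx)
    fx, fy = float(x), float(y)
    if best is not None and hm[y, x] - hm[best] < max_diff:
        fy += step * (best[0] - y)
        fx += step * (best[1] - x)
    return (fx, fy)

hm = np.zeros((5, 5))
hm[2, 2], hm[2, 3] = 1.0, 0.95
print(refine_corner(hm))  # (2.25, 2.0)
```

The intuition: when a neighbour is nearly as strong as the peak, the true corner likely lies between the two pixels, so nudging toward the neighbour reduces quantization error.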
7. The method according to claim 1, wherein processing the extracted certificate corner features through the image processing model to generate the corner location prediction feature map corresponding to the image to be processed comprises:
processing the extracted certificate corner features through the image processing model, and generating, through parallel output branches, a certificate image type corresponding to the image to be processed and the corner location prediction feature map, respectively;
and wherein the method further comprises:
cropping a certificate image from the image to be processed according to the certificate image region; and
performing certificate recognition by combining the certificate image type and the certificate image.
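Once the four corner locations are known, the certificate region is typically cropped with a perspective transform. The patent does not prescribe a specific method; the following is a self-contained NumPy sketch of the standard four-point homography solve (OpenCV's `cv2.getPerspectiveTransform` plus `cv2.warpPerspective` would do the same job):

```python
import numpy as np

def homography(src, dst):
    # Direct linear transform: solve for the 3x3 perspective matrix H
    # (8 unknowns, with h33 fixed to 1) that maps the 4 detected corner
    # points src onto the 4 target rectangle corners dst.
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# Detected corners of a tilted card, mapped to an upright 100x60 rectangle.
src = [(10, 20), (110, 25), (105, 80), (12, 78)]
dst = [(0, 0), (100, 0), (100, 60), (0, 60)]
H = homography(src, dst)
p = H @ np.array([110, 25, 1.0])
print(p[:2] / p[2])  # ≈ [100, 0]
```

Applying H to every pixel (or its inverse with bilinear sampling) yields the rectified certificate image that the recognition step then consumes.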
8. The method according to any one of claims 1 to 7, further comprising:
collecting target images that include certificate images as training samples for the image processing model;
generating a corner location feature map for each training sample as its corresponding training label;
inputting the training samples into the image processing model to obtain training outputs; and
constructing a loss function from the training outputs and the training labels to train the image processing model.
9. The method according to claim 8, wherein generating the corner location feature map for each training sample as its corresponding training label comprises:
for each certificate corner sample in each training sample, generating a location feature map for the certificate corner sample according to a preset distribution, centered at the location of the certificate corner sample in the training sample to which it belongs; and
generating the corner location feature map of each training sample from the location feature maps of the certificate corner samples in that training sample, as the corresponding training label of that training sample.
10. The method according to claim 9, wherein generating the corner location feature map of each training sample from the location feature maps of the certificate corner samples in that training sample, as the corresponding training label, comprises:
generating a background channel feature map for each training sample according to the location feature maps of the certificate corner samples in that training sample; and
concatenating the location feature maps of the certificate corner samples in each training sample in a preset certificate corner order, and then further concatenating the background channel feature map, to obtain the corner location feature map of each training sample as the corresponding training label of that training sample.
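Claims 9 and 10 describe training labels built from a preset distribution around each corner plus a background channel. A sketch using Gaussian bumps as the preset distribution, with the background channel defined, as an assumption, as 1 minus the per-pixel foreground maximum (the patent does not specify either choice):

```python
import numpy as np

def corner_label_maps(corners, shape, sigma=1.5):
    # One Gaussian bump per annotated corner (claim 9's preset
    # distribution), stacked in a fixed corner order, followed by a
    # background channel (claim 10).
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    fg = np.stack([
        np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
        for cx, cy in corners
    ])
    bg = 1.0 - fg.max(axis=0)
    return np.concatenate([fg, bg[None]], axis=0)

labels = corner_label_maps([(1, 1), (6, 1), (6, 6), (1, 6)], (8, 8))
print(labels.shape)  # (5, 8, 8): four corner channels plus background
```

Soft Gaussian targets make the regression tolerant to small annotation errors and give the network a smooth gradient around each corner, compared with one-hot corner masks.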
11. The method according to claim 8, wherein generating the corner location feature map of each training sample as the corresponding training label comprises:
generating the corner location feature map of each training sample as a corresponding first training label; and
taking the certificate image type of the certificate image in each training sample as a corresponding second training label;
wherein inputting the training samples into the image processing model to obtain training outputs comprises:
inputting the training samples into the image processing model, and obtaining, through parallel output branches, a predicted certificate image type and a corner sample location prediction feature map corresponding to each training sample, respectively;
and wherein constructing the loss function from the training outputs and the training labels to train the image processing model comprises:
constructing a first loss function from the corner sample location prediction feature maps and the first training labels, and constructing a second loss function from the predicted certificate image types and the second training labels; and
training the image processing model by combining the first loss function and the second loss function.
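The two-branch objective of claim 11 can be sketched as a weighted sum of a heatmap regression loss and a type classification loss. The specific losses (MSE and softmax cross-entropy) and the weights are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def heatmap_loss(pred, target):
    # First loss: mean squared error between predicted and label heatmaps.
    return float(np.mean((pred - target) ** 2))

def type_loss(logits, label):
    # Second loss: softmax cross-entropy for the certificate-type branch,
    # computed in a numerically stable way.
    z = logits - logits.max()
    return float(np.log(np.exp(z).sum()) - z[label])

def total_loss(hm_pred, hm_true, logits, label, w1=1.0, w2=0.5):
    # Weighted combination of the two branch losses (claim 11).
    return w1 * heatmap_loss(hm_pred, hm_true) + w2 * type_loss(logits, label)

hm = np.zeros((5, 8, 8))
loss = total_loss(hm, hm, np.array([10.0, 0.0]), 0)
print(loss)  # near zero when both branches predict well
```

Claim 12 extends this with a third term comparing predicted and actual corner coordinates, which would simply be added to the weighted sum in the same way.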
12. The method according to claim 10, further comprising:
determining actual corner sample locations in the training samples;
determining predicted corner sample locations in the corner sample location prediction feature map; and
constructing a third loss function from the actual corner sample locations and the predicted corner sample locations;
wherein training the image processing model by combining the first loss function and the second loss function comprises:
training the image processing model by combining the first loss function, the second loss function, and the third loss function.
13. An image processing apparatus, comprising:
an obtaining module, configured to obtain an image to be processed;
an extraction module, configured to input the image to be processed into an image processing model to perform certificate corner feature extraction;
a generation module, configured to process the extracted certificate corner features through the image processing model to generate a corner location prediction feature map corresponding to the image to be processed, wherein each pixel in the corner location prediction feature map has a pixel value indicating the probability of belonging to a certificate corner and corresponds to a pixel in the image to be processed;
a determining module, configured to determine corner locations in the image to be processed according to the corner location prediction feature map; and
a locating module, configured to locate a certificate image region in the image to be processed based on the corner locations.
14. A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps of the method according to any one of claims 1 to 12.
15. A computer device, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the method according to any one of claims 1 to 12.
CN201910228327.7A 2019-03-25 2019-03-25 Image processing method, image processing device, computer-readable storage medium and computer equipment Active CN110163193B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910228327.7A CN110163193B (en) 2019-03-25 2019-03-25 Image processing method, image processing device, computer-readable storage medium and computer equipment


Publications (2)

Publication Number Publication Date
CN110163193A true CN110163193A (en) 2019-08-23
CN110163193B CN110163193B (en) 2021-08-06

Family

ID=67638999

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910228327.7A Active CN110163193B (en) 2019-03-25 2019-03-25 Image processing method, image processing device, computer-readable storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN110163193B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110659726A (en) * 2019-09-24 2020-01-07 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and storage medium
CN110738164A (en) * 2019-10-12 2020-01-31 北京猎户星空科技有限公司 Part abnormity detection method, model training method and device
CN110738602A (en) * 2019-09-12 2020-01-31 北京三快在线科技有限公司 Image processing method and device, electronic equipment and readable storage medium
CN110765795A (en) * 2019-09-24 2020-02-07 北京迈格威科技有限公司 Two-dimensional code identification method and device and electronic equipment
CN110929732A (en) * 2019-11-27 2020-03-27 中国建设银行股份有限公司 Certificate image intercepting method, storage medium and certificate image intercepting device
CN111080338A (en) * 2019-11-11 2020-04-28 中国建设银行股份有限公司 User data processing method and device, electronic equipment and storage medium
CN111611947A (en) * 2020-05-25 2020-09-01 济南博观智能科技有限公司 License plate detection method, device, equipment and medium
WO2020220516A1 (en) * 2019-04-30 2020-11-05 深圳市商汤科技有限公司 Image generation network training and image processing methods, apparatus, electronic device and medium
CN112651395A (en) * 2021-01-11 2021-04-13 上海优扬新媒信息技术有限公司 Image processing method and device
CN113177885A (en) * 2021-03-30 2021-07-27 新东方教育科技集团有限公司 Method, device, storage medium and electronic equipment for correcting image
CN113269197A (en) * 2021-04-25 2021-08-17 南京三百云信息科技有限公司 Certificate image vertex coordinate regression system and identification method based on semantic segmentation
US11410344B2 (en) 2019-02-02 2022-08-09 Shenzhen Sensetime Technology Co., Ltd. Method for image generation, electronic device, and storage medium
CN116994002A (en) * 2023-09-25 2023-11-03 杭州安脉盛智能技术有限公司 Image feature extraction method, device, equipment and storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107506763B (en) * 2017-09-05 2020-12-01 武汉大学 Multi-scale license plate accurate positioning method based on convolutional neural network
CN108564120B (en) * 2018-04-04 2022-06-14 中山大学 Feature point extraction method based on deep neural network
CN109035184A (en) * 2018-06-08 2018-12-18 西北工业大学 Dense connection method based on deformable convolution units
CN108960115B (en) * 2018-06-27 2021-11-09 电子科技大学 Multidirectional text detection method based on angular points
CN109118473B (en) * 2018-07-03 2022-04-12 深圳大学 Angular point detection method based on neural network, storage medium and image processing system
CN109034050B (en) * 2018-07-23 2022-05-03 顺丰科技有限公司 Deep learning-based identification card image text recognition method and device
CN109101963A (en) * 2018-08-10 2018-12-28 深圳市碧海扬帆科技有限公司 Certificate image automatic correction method, image processing apparatus and readable storage medium
CN109241894B (en) * 2018-08-28 2022-04-08 南京安链数据科技有限公司 Bill content identification system and method based on form positioning and deep learning
CN109376589B (en) * 2018-09-07 2022-01-14 中国海洋大学 ROV deformation small target identification method based on convolution kernel screening SSD network
CN109344727B (en) * 2018-09-07 2020-11-27 苏州创旅天下信息技术有限公司 Identity card text information detection method and device, readable storage medium and terminal
CN109446900A (en) * 2018-09-21 2019-03-08 平安科技(深圳)有限公司 Certificate authenticity verification method, apparatus, computer equipment and storage medium


Also Published As

Publication number Publication date
CN110163193B (en) 2021-08-06

Similar Documents

Publication Publication Date Title
CN110163193A (en) Image processing method, device, computer readable storage medium and computer equipment
CN108960211B (en) Multi-target human body posture detection method and system
CN105719188B (en) The anti-method cheated of settlement of insurance claim and server are realized based on plurality of pictures uniformity
CN102800148B (en) RMB sequence number identification method
CN110309876A (en) Object detection method, device, computer readable storage medium and computer equipment
CN111680746B (en) Vehicle damage detection model training, vehicle damage detection method, device, equipment and medium
CN110163197A (en) Object detection method, device, computer readable storage medium and computer equipment
CN107944450A (en) A kind of licence plate recognition method and device
CN106446873A (en) Face detection method and device
CN110502986A (en) Identify character positions method, apparatus, computer equipment and storage medium in image
CN109740548A (en) A kind of reimbursement bill images dividing method and system
CN109190513A (en) In conjunction with the vehicle of saliency detection and neural network again recognition methods and system
JP2022025008A (en) License plate recognition method based on text line recognition
CN110348331A (en) Face identification method and electronic equipment
CN109360190A (en) Building based on image superpixel fusion damages detection method and device
CN110084743B (en) Image splicing and positioning method based on multi-flight-zone initial flight path constraint
CN107610097A (en) Instrument localization method, device and terminal device
CN103743750B (en) A kind of generation method of distribution diagram of surface damage of heavy calibre optical element
CN109074663A (en) Object volume measuring method, related device and computer readable storage medium
CN110135268A (en) Face comparison method, device, computer equipment and storage medium
CN113971764A (en) Remote sensing image small target detection method based on improved YOLOv3
CN107862314A (en) A kind of coding recognition methods and identification device
CN116343095A (en) Vehicle track extraction method based on video stitching and related equipment
CN115860139A (en) Deep learning-based multi-scale ship target detection method
CN114529488A (en) Image fusion method, device and equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant