CN106372651A - Picture quality detection method and device - Google Patents

Picture quality detection method and device

Info

Publication number
CN106372651A
Authority
CN
China
Prior art keywords
photo
training
resolution
convolutional neural networks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610704799.1A
Other languages
Chinese (zh)
Other versions
CN106372651B (en)
Inventor
王健宗
马进
刘铭
郭卉
梁浩
李佳琳
肖京
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201610704799.1A priority Critical patent/CN106372651B/en
Publication of CN106372651A publication Critical patent/CN106372651A/en
Priority to PCT/CN2017/091306 priority patent/WO2018036276A1/en
Application granted granted Critical
Publication of CN106372651B publication Critical patent/CN106372651B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks

Abstract

The invention relates to a picture quality detection method and device. The method comprises the following steps: after a car insurance claims server receives a claim photo uploaded by a user terminal, performing sharpness recognition on the received claim photo using a pre-trained deep convolutional neural network model to determine the sharpness grade of the claim photo; and, if the sharpness grade of the claim photo is lower than a preset sharpness grade, sending a first reminder message to the user terminal to prompt the user to upload the claim photo again. Because every uploaded claim photo is screened for sharpness by the pre-trained deep convolutional neural network model, each photo the user uploads can be analyzed accurately to obtain information about the accident scene, which helps improve the working efficiency of the self-service claims system and the user experience.

Description

Picture quality detection method and device
Technical field
The present invention relates to the field of image processing, and more particularly to a picture quality detection method and device.
Background art
At present, in an intelligent self-service car insurance claims system, the quality of the uploaded pictures is closely related to the accuracy of vehicle image recognition. Specifically, if the user uploads a clear vehicle image, the claims system can analyze the accident scene very accurately; conversely, if the uploaded vehicle image is blurry, the claims system cannot derive the accident-scene information from the image and cannot work normally. Therefore, how to accurately determine whether the sharpness of a user-uploaded vehicle image meets the requirements of the analysis has become an urgent problem to be solved.
Summary of the invention
The technical problem to be solved by the present invention is to provide a picture quality detection method and device.
The technical scheme of the present invention is a picture quality detection method comprising:
S1: after a car insurance claims server receives a claim photo uploaded by a user terminal, performing sharpness recognition on the received claim photo using a pre-trained deep convolutional neural network model to determine the sharpness grade of the claim photo;
S2: if the sharpness grade of the claim photo is lower than a preset sharpness grade, sending a first reminder message to the user terminal to prompt the user to upload the claim photo again.
Preferably, before step S1, the method further comprises:
S01: classifying a predetermined number of claim photos by predetermined sharpness grades, extracting a preset proportion of the claim photos under each class as training photos, and using the remaining claim photos under each class as validation photos;
S02: performing feature extraction on each training photo under each class to obtain first pixel vectors to be input into the deep convolutional neural network model, and inputting the first pixel vector corresponding to each training photo under each class into the model, so as to train and generate a deep convolutional neural network model for recognition;
S03: performing feature extraction on each validation photo under each class to obtain second pixel vectors to be input into the trained model, and inputting the second pixel vector corresponding to each validation photo under each class into the trained model, so as to verify the accuracy of the trained deep convolutional neural network model;
S04: if the accuracy of the trained deep convolutional neural network model is greater than or equal to a preset threshold, ending training.
Preferably, after step S03, the method further comprises: S05, if the accuracy of the trained deep convolutional neural network model is less than the preset threshold, sending a second reminder message to the user terminal to prompt the user to increase the number of claim-photo samples.
Preferably, the step of performing feature extraction on each training photo or validation photo under each class comprises:
for each training or validation photo under each class, traversing the photo with different convolution kernels from its first pixel block to its last pixel block and performing convolution operations, so as to extract the different feature maps corresponding to each photo;
pooling and rasterizing the extracted feature maps of each training photo or validation photo, processing the extracted feature maps of each training photo into first pixel vectors of consistent dimension, and processing the extracted feature maps of each validation photo into second pixel vectors of consistent dimension.
Preferably, in step S02, the parameters of the deep convolutional neural network model are estimated using the back-propagation (BP) method.
The technical scheme of the present invention further provides a picture quality detection device, the picture quality detection device comprising:
a recognition module, configured to perform sharpness recognition on a received claim photo using a pre-trained deep convolutional neural network model after a claim photo uploaded by a user terminal is received, so as to determine the sharpness grade of the claim photo;
a reminder module, configured to send a first reminder message to the user terminal, prompting the user to upload the claim photo again, if the sharpness grade of the claim photo is lower than a preset sharpness grade.
Preferably, the picture quality detection device further comprises:
a classification module, configured to classify a predetermined number of claim photos by predetermined sharpness grades, extract a preset proportion of the claim photos under each class as training photos, and use the remaining claim photos under each class as validation photos;
a training module, configured to perform feature extraction on each training photo under each class to obtain first pixel vectors to be input into the deep convolutional neural network model, and to input the first pixel vector corresponding to each training photo under each class into the model, so as to train and generate a deep convolutional neural network model for recognition;
a validation module, configured to perform feature extraction on each validation photo under each class to obtain second pixel vectors to be input into the trained model, and to input the second pixel vector corresponding to each validation photo under each class into the trained model, so as to verify the accuracy of the trained deep convolutional neural network model;
an ending module, configured to end training if the accuracy of the trained deep convolutional neural network model is greater than or equal to a preset threshold.
Preferably, the picture quality detection device further comprises:
a loop module, configured to send a second reminder message to the user terminal, prompting the user to increase the number of claim-photo samples, if the accuracy of the trained deep convolutional neural network model is less than the preset threshold.
Preferably, the training module is specifically configured to: for each training photo under each class, traverse the photo with different convolution kernels from its first pixel block to its last pixel block and perform convolution operations, so as to extract the different feature maps corresponding to each training photo; pool and rasterize the extracted feature maps of each training photo; and process the extracted feature maps of each training photo into first pixel vectors of consistent dimension.
The validation module is specifically configured to: for each validation photo under each class, traverse the photo with different convolution kernels from its first pixel block to its last pixel block and perform convolution operations, so as to extract the different feature maps corresponding to each validation photo; pool and rasterize the extracted feature maps of each validation photo; and process the extracted feature maps of each validation photo into second pixel vectors of consistent dimension.
Preferably, the training module is specifically configured to estimate the parameters of the deep convolutional neural network model using the back-propagation (BP) method.
The beneficial effects of the invention are as follows: compared with the prior art, when the picture quality detection method and device of the present invention process car insurance claim photos, a claim photo is uploaded from the user terminal to the car insurance claims server, its sharpness is analyzed by the pre-trained deep convolutional neural network model, and whether its sharpness grade meets the actual need is determined; if the sharpness grade of the claim photo does not meet the actual need, a reminder message is sent to the user terminal to prompt the user to upload the claim photo again. By performing sharpness recognition on claim photos with the pre-trained deep convolutional neural network model, the invention ensures that every photo the user uploads can be analyzed accurately to obtain the accident-scene information, which helps improve the working efficiency of the self-service claims system and the user experience.
Brief description of the drawings
Fig. 1 is a flow diagram of a first embodiment of the picture quality detection method of the present invention;
Fig. 2 is a flow diagram of a second embodiment of the picture quality detection method of the present invention;
Fig. 3 is a flow diagram of a third embodiment of the picture quality detection method of the present invention;
Fig. 4 is a schematic diagram of the convolution performed during feature extraction on each training photo or validation photo under each class in Fig. 2;
Fig. 5 is a structural diagram of an embodiment of the picture quality detection device of the present invention.
Specific embodiments
The principles and features of the present invention are described below with reference to the accompanying drawings. The examples serve only to explain the invention and are not intended to limit its scope.
As shown in Fig. 1, which is a flow diagram of an embodiment of the picture quality detection method of the present invention, the picture quality detection method comprises the following steps.
Step S1: after the car insurance claims server receives a claim photo uploaded by a user terminal, performing sharpness recognition on the received claim photo using a pre-trained deep convolutional neural network model to determine the sharpness grade of the claim photo.
In this embodiment, the car insurance claims server receives the claim photo, which requires sharpness recognition, uploaded by the user terminal. The user terminal may be a smart terminal such as a mobile phone, a tablet computer, a personal digital assistant (PDA), a wearable device (for example, a smart watch or smart glasses), or any other suitable electronic device.
In this embodiment, a deep convolutional neural network model is generated by training in advance, and the uploaded claim photo undergoes sharpness recognition with this pre-trained model. Specifically, the pre-trained deep convolutional neural network model is a multi-layer neural network comprising feature extraction layers (C layers) and feature mapping layers (S layers). Each layer is composed of multiple two-dimensional planes, and each plane is composed of multiple independent neurons. Each feature extraction layer (C layer) is followed by a computation layer (S layer) used for local averaging and secondary extraction; this distinctive structure of two-stage feature extraction gives the network a high tolerance to distortion in the input samples during recognition. When a claim photo is input into the pre-trained deep convolutional neural network model for recognition, in a C layer the input of each neuron is connected to a local receptive field (i.e., a small part of the photo) of the previous layer and extracts the features of that local receptive field; once a feature is extracted, its positional relationship to the other features is also determined. An S layer is composed of multiple feature maps; each feature map is a plane, and all neurons in the plane share equal weights. The feature mapping structure may adopt a sigmoid function with a small influence-function kernel as the activation function of the convolutional network, so that the feature maps are shift-invariant. After the claim photo passes through the pre-trained deep convolutional neural network model for sharpness recognition, it is output according to its sharpness grade. The sharpness grades may be divided into a high-sharpness grade, a medium-sharpness grade, a low-sharpness grade, and so on; of course, the sharpness grades may also be distinguished in other ways.
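The alternating C-layer/S-layer structure described above can be sketched in plain NumPy. This is a minimal illustration under assumed shapes — a 3×3 kernel, 2×2 average pooling, a sigmoid activation, and four feature maps; the patent fixes none of these sizes, so every dimension here is hypothetical.

```python
import numpy as np

def sigmoid(x):
    # Sigmoid activation, as suggested for the feature-mapping layers.
    return 1.0 / (1.0 + np.exp(-x))

def conv2d(img, kernel):
    # C layer: each output neuron sees a local receptive field of the input.
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for y in range(oh):
        for x in range(ow):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * kernel)
    return out

def avg_pool(fmap, size=2):
    # S layer: local averaging over non-overlapping windows.
    oh, ow = fmap.shape[0] // size, fmap.shape[1] // size
    out = np.empty((oh, ow))
    for y in range(oh):
        for x in range(ow):
            out[y, x] = fmap[y * size:(y + 1) * size,
                             x * size:(x + 1) * size].mean()
    return out

def forward(photo, kernels):
    # One C layer (several kernels -> several feature maps), one S layer,
    # then rasterization into a single pixel vector.
    maps = [avg_pool(sigmoid(conv2d(photo, k))) for k in kernels]
    return np.concatenate([m.ravel() for m in maps])

rng = np.random.default_rng(0)
photo = rng.random((10, 10))           # stand-in for a grayscale claim photo
kernels = [rng.random((3, 3)) for _ in range(4)]
vec = forward(photo, kernels)
# 10x10 conv 3x3 -> 8x8, pooled 2x2 -> 4x4, four maps -> 4*16 = 64 values
print(vec.shape)                       # (64,)
```

A full model would stack further C/S pairs and end in a classifier over the sharpness grades; this sketch only shows one pass of the extraction-then-averaging pattern the description names.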
Step S2: if the sharpness grade of the claim photo is lower than the preset sharpness grade, sending a first reminder message to the user terminal to prompt the user to upload the claim photo again.
In this embodiment, if the sharpness grade of the claim photo recognized by the deep convolutional neural network model is lower than the preset sharpness grade — for example, if the uploaded claim photo is of the low-sharpness grade — then the claim photo uploaded by the user terminal does not meet the requirements and cannot be used to analyze the accident scene accurately; in that case the first reminder message is sent to the user terminal to prompt the user to upload the claim photo again. Of course, if a higher sharpness requirement is imposed on the claim photos, the user terminal needs to upload photos of a higher sharpness grade, for example of the high-sharpness grade; in that case, if the photo uploaded by the user terminal is of the medium- or low-sharpness grade, it does not meet the requirements, and the first reminder message must be sent to the user terminal to prompt the user to upload the claim photo again.
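The grade comparison in step S2 reduces to ordering the grades and checking the recognized grade against the server's requirement. A minimal sketch, with hypothetical grade names and message text (the patent fixes neither):

```python
# Sharpness grades ordered from low to high; the names are illustrative.
GRADES = ["low", "medium", "high"]

def check_claim_photo(recognized_grade, required_grade):
    """Return None if the photo is acceptable, otherwise the first
    reminder message asking the user to re-upload the claim photo."""
    if GRADES.index(recognized_grade) < GRADES.index(required_grade):
        return "Photo too blurry: please upload the claim photo again."
    return None

# With a medium-grade requirement, only low-sharpness photos are rejected...
print(check_claim_photo("low", "medium"))    # reminder message
# ...but a stricter requirement also rejects medium-sharpness photos.
print(check_claim_photo("medium", "high"))   # reminder message
print(check_claim_photo("high", "high"))     # None
```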
Compared with the prior art, in this embodiment a claim photo is uploaded from the user terminal to the car insurance claims server, its sharpness is analyzed by the pre-trained deep convolutional neural network model, and whether its sharpness grade meets the actual need is determined; if not, a reminder message is sent to the user terminal to prompt the user to upload the claim photo again. By performing sharpness recognition on claim photos with the pre-trained deep convolutional neural network model, this embodiment ensures that every photo the user uploads can be analyzed accurately to obtain the accident-scene information, which helps improve the working efficiency of the self-service claims system and the user experience.
In a preferred embodiment, as shown in Fig. 2, on the basis of the embodiment of Fig. 1 the method further comprises, before step S1:
S01: classifying a predetermined number of claim photos by predetermined sharpness grades, extracting a preset proportion of the claim photos under each class as training photos, and using the remaining claim photos under each class as validation photos;
S02: performing feature extraction on each training photo under each class to obtain first pixel vectors to be input into the deep convolutional neural network model, and inputting the first pixel vector corresponding to each training photo under each class into the model, so as to train and generate a deep convolutional neural network model for recognition;
S03: performing feature extraction on each validation photo under each class to obtain second pixel vectors to be input into the trained model, and inputting the second pixel vector corresponding to each validation photo under each class into the trained model, so as to verify the accuracy of the trained deep convolutional neural network model;
S04: if the accuracy of the trained deep convolutional neural network model is greater than or equal to a preset threshold, ending training.
In this embodiment, when training the deep convolutional neural network model, a predetermined number of claim photos may first be classified by predetermined sharpness grades. For example, 500,000 claim photos may be classified by sharpness grade according to their resolution: high-resolution claim photos are assigned the high-sharpness grade, medium-resolution photos the medium-sharpness grade, and low-resolution photos the low-sharpness grade. After classification, a preset proportion of the claim photos under each class is extracted as training photos — for example, 70% of the predetermined number of claim photos — and the remaining claim photos under each class serve as validation photos, for example the remaining 30%.
Then, feature extraction is performed on each training photo under each class to extract the different feature maps; the feature maps are convolved to finally obtain the first pixel vectors to be input into the deep convolutional neural network model, and these first pixel vectors are used to train and generate the deep convolutional neural network model for recognition. Likewise, feature extraction is performed on each validation photo under each class to extract its feature maps, which are convolved to finally obtain the second pixel vectors to be input into the model; these second pixel vectors are used to verify the accuracy of the trained deep convolutional neural network model. If the verified accuracy of the trained model is greater than or equal to the preset value — for example, greater than or equal to 0.95 — the trained deep convolutional neural network model has reached the expected sharpness-recognition performance, training ends, and the trained model can subsequently be used for sharpness recognition of claim photos.
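The per-class split of step S01 can be sketched as follows. The 70/30 proportions and the three grade names come from the examples in the text; the photo filenames and the helper name are illustrative.

```python
import random

def split_by_class(photos_by_grade, train_ratio=0.7, seed=42):
    """Split the claim photos of each sharpness grade into training and
    validation sets, keeping train_ratio of each class for training."""
    rng = random.Random(seed)
    train, valid = {}, {}
    for grade, photos in photos_by_grade.items():
        shuffled = photos[:]
        rng.shuffle(shuffled)
        cut = round(len(shuffled) * train_ratio)
        train[grade] = shuffled[:cut]
        valid[grade] = shuffled[cut:]
    return train, valid

# 100 placeholder photos per sharpness grade.
photos = {g: [f"{g}_{i:03d}.jpg" for i in range(100)]
          for g in ("high", "medium", "low")}
train, valid = split_by_class(photos)
print(len(train["high"]), len(valid["high"]))   # 70 30
```

Splitting per class (rather than over the pooled photos) keeps the grade proportions identical in the training and validation sets, which is what the accuracy check in steps S03-S04 implicitly relies on.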
In a preferred embodiment, as shown in Fig. 3, on the basis of the embodiment of Fig. 2 the method further comprises, after step S03:
S05: if the accuracy of the trained deep convolutional neural network model is less than the preset threshold, sending a second reminder message to the user terminal to prompt the user to increase the number of claim-photo samples, and returning to step S01 to loop.
In this embodiment, if the accuracy of the trained deep convolutional neural network model is less than the preset threshold — for example, less than 0.95 — the trained model has not reached the expected sharpness-recognition performance. A second reminder message is sent to the user terminal to prompt the user to increase the number of claim-photo samples, and training of the deep convolutional neural network model continues on the enlarged sample set. Specifically, after the additional claim photos uploaded by the user terminal are received, the method returns to step S01 to classify the additional claim photos by the predetermined sharpness grades, looping until the accuracy of the trained deep convolutional neural network model is greater than or equal to the preset threshold.
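The retrain-until-accurate loop of step S05 can be sketched as below. The threshold 0.95 is the example value from the text; `train_and_validate` and `request_more_samples` are hypothetical stand-ins for the training pipeline of steps S01-S03 and for the second reminder message plus the user's re-upload.

```python
def train_until_accurate(samples, train_and_validate, request_more_samples,
                         threshold=0.95, max_rounds=10):
    """Repeat S01-S04: retrain on the growing sample set until the
    validated accuracy reaches the preset threshold."""
    for _ in range(max_rounds):
        model, accuracy = train_and_validate(samples)
        if accuracy >= threshold:
            return model, accuracy               # S04: training ends
        # S05: second reminder message; the user uploads more claim photos.
        samples = samples + request_more_samples()
    raise RuntimeError("accuracy never reached the threshold")

# Toy stand-ins: accuracy grows with the sample count.
fake_train = lambda s: ("model", min(len(s) / 1000, 1.0))
fake_more = lambda: ["photo"] * 200
model, acc = train_until_accurate(["photo"] * 500, fake_train, fake_more)
print(acc)   # 1.0
```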
In a preferred embodiment, on the basis of the embodiment of Fig. 2, the step in step S02 of performing feature extraction on each training photo under each class comprises:
for each training photo under each class, traversing the photo with different convolution kernels from its first pixel block to its last pixel block and performing convolution operations, so as to extract the different feature maps corresponding to each training photo;
pooling and rasterizing the extracted feature maps of each training photo, and processing the extracted feature maps of each training photo into first pixel vectors of consistent dimension.
The step in step S03 of performing feature extraction on each validation photo under each class comprises:
for each validation photo under each class, traversing the photo with different convolution kernels from its first pixel block to its last pixel block and performing convolution operations, so as to extract the different feature maps corresponding to each validation photo;
pooling and rasterizing the extracted feature maps of each validation photo, and processing the extracted feature maps of each validation photo into second pixel vectors of consistent dimension.
In this embodiment, for each training or validation photo under each class, different convolution kernels traverse the photo from its first pixel block to its last pixel block and perform convolution operations. As shown in Fig. 4, for each input pixel block [(0,0), (1,0), (2,0), (0,1), (1,1), (2,1), (0,2), (1,2), (2,2)], a convolution operation is performed with the kernel (i, h, g, f, e, d, c, b, a); the convolution starts from the first pixel block and traverses to the last pixel block, each pixel block yielding one output value, and the set of all output values forms the different feature maps corresponding to the training or validation photo. Then the feature maps of each training photo are pooled and rasterized, and the extracted feature maps of each training photo are processed into first pixel vectors of consistent dimension; likewise, the feature maps of each validation photo are pooled and rasterized, and the extracted feature maps of each validation photo are processed into second pixel vectors of consistent dimension.
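The traversal in Fig. 4 is an ordinary "valid" convolution: a 3×3 kernel slides over every 3×3 pixel block of the photo, producing one feature-map value per block. The sketch below uses numeric values in place of the symbolic entries of the figure, and it flips the kernel before multiplying, which is one plausible reading of the reversed (i, ..., a) ordering shown against the (0,0)...(2,2) block; this interpretation is an assumption.

```python
import numpy as np

def convolve_photo(photo, kernel):
    """Traverse every kernel-sized pixel block of the photo, from the first
    block to the last, computing one feature-map value per block. The kernel
    is flipped, as in a true (non-correlation) convolution."""
    k = np.flipud(np.fliplr(kernel))
    kh, kw = k.shape
    oh, ow = photo.shape[0] - kh + 1, photo.shape[1] - kw + 1
    fmap = np.empty((oh, ow))
    for y in range(oh):
        for x in range(ow):
            fmap[y, x] = np.sum(photo[y:y + kh, x:x + kw] * k)
    return fmap

photo = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 "photo"
kernel = np.zeros((3, 3)); kernel[2, 2] = 1.0      # isolates one kernel entry
fmap = convolve_photo(photo, kernel)
print(fmap.shape)   # (3, 3): one value per 3x3 pixel block
print(fmap[0, 0])   # 0.0 -- the flipped kernel selects the block's (0,0) entry
```

Running several such kernels over the same photo yields the "different feature maps" the text describes; pooling and rasterizing them then produces the fixed-dimension pixel vectors.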
In a preferred embodiment, on the basis of the embodiment of Fig. 2, in step S02 the parameters of the deep convolutional neural network model are estimated using the back-propagation (BP) method. During training, convolution, pooling, and rasterization operations are performed in turn on each training photo, and the residual between the actual value and the estimated value for the claim photo is obtained; using the residual produced at each pass, the parameters are updated backwards through the rasterization, pooling, and convolution operations, and the forward and backward passes are repeated until the global error converges. The parameters of the deep convolutional neural network model are those obtained when the global error converges. In the first training pass, the parameters of the deep convolutional neural network model take default values, which will not be described here.
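The forward/backward alternation described above — compute the estimate, form the residual against the actual value, push a correction back through the parameters, and repeat until the global error converges — can be illustrated on a single sigmoid unit. The full network in the patent back-propagates through rasterization, pooling, and convolution in the same fashion; the learning rate, data, and stopping tolerance below are all illustrative.

```python
import numpy as np

def train_bp(X, y, lr=1.0, tol=1e-7, max_epochs=5000):
    """Back-propagation on one sigmoid unit: forward pass, residual
    (actual - estimated), backward parameter update, repeated until
    the global squared error converges."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])        # default initial parameters
    b = 0.0
    prev_err = np.inf
    for _ in range(max_epochs):
        est = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # forward pass
        residual = y - est                          # actual vs estimated
        err = float(np.sum(residual ** 2))          # global error
        if abs(prev_err - err) < tol:               # convergence check
            break
        grad = residual * est * (1.0 - est)         # back through the sigmoid
        w += lr * (X.T @ grad)                      # parameter updates
        b += lr * float(np.sum(grad))
        prev_err = err
    return w, b, err

X = np.array([[-2.0], [-1.0], [1.0], [2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])    # toy "blurry vs sharp" labels
w, b, err = train_bp(X, y)
print(err < 0.1)   # True: the global error has converged to a small value
```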
As shown in Fig. 5, which is a structural diagram of an embodiment of the picture quality detection device of the present invention, the picture quality detection device comprises:
a recognition module 101, configured to perform sharpness recognition on a received claim photo using a pre-trained deep convolutional neural network model after a claim photo uploaded by a user terminal is received, so as to determine the sharpness grade of the claim photo.
In this embodiment, the picture quality detection device is integrated in the car insurance claims server. The user terminal may be a smart terminal such as a mobile phone, a tablet computer, a personal digital assistant (PDA), a wearable device (for example, a smart watch or smart glasses), or any other suitable electronic device.
In this embodiment, a deep convolutional neural network model is generated by training in advance, and the uploaded claim photo undergoes sharpness recognition with this pre-trained model. Specifically, the pre-trained deep convolutional neural network model is a multi-layer neural network comprising feature extraction layers (C layers) and feature mapping layers (S layers). Each layer is composed of multiple two-dimensional planes, and each plane is composed of multiple independent neurons. Each feature extraction layer (C layer) is followed by a computation layer (S layer) used for local averaging and secondary extraction; this distinctive structure of two-stage feature extraction gives the network a high tolerance to distortion in the input samples during recognition. When a claim photo is input into the pre-trained deep convolutional neural network model for recognition, in a C layer the input of each neuron is connected to a local receptive field (i.e., a small part of the photo) of the previous layer and extracts the features of that local receptive field; once a feature is extracted, its positional relationship to the other features is also determined. An S layer is composed of multiple feature maps; each feature map is a plane, and all neurons in the plane share equal weights. The feature mapping structure may adopt a sigmoid function with a small influence-function kernel as the activation function of the convolutional network, so that the feature maps are shift-invariant. After the claim photo passes through the pre-trained deep convolutional neural network model for sharpness recognition, it is output according to its sharpness grade. The sharpness grades may be divided into a high-sharpness grade, a medium-sharpness grade, a low-sharpness grade, and so on; of course, the sharpness grades may also be distinguished in other ways.
Prompting module 102, configured to send first prompt information to the user terminal if the sharpness grade of the claim photo is lower than a preset sharpness grade, so as to remind the user to upload a claim photo again.
In the present embodiment, if the sharpness grade of the claim photo recognized by the deep convolutional neural network model is lower than the preset sharpness grade, for example if the uploaded claim photo is of the low-sharpness grade, then the claim photo uploaded by the user terminal does not meet the requirement and cannot be used to analyze the vehicle insurance accident scene accurately; in that case the first prompt information is sent to the user terminal to remind the user to upload a claim photo again. Of course, if a higher sharpness is required of the claim photo, the user terminal needs to upload a claim photo of a higher sharpness grade, for example a claim photo of the high-sharpness grade; if the claim photo uploaded by the user terminal is of the medium-sharpness grade or the low-sharpness grade, it does not meet the requirement, and the first prompt information is likewise sent to the user terminal to remind the user to upload a claim photo again.
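The grade comparison described above can be sketched as follows; the grade names, their ordering, and the preset requirement are illustrative assumptions rather than values fixed by this embodiment.

```python
# Ordered sharpness grades (assumed labels, lowest to highest).
GRADES = {"low": 0, "medium": 1, "high": 2}

def needs_reupload(photo_grade: str, required_grade: str) -> bool:
    """Return True when the recognized grade is below the preset grade,
    i.e. when first prompt information should be sent to the terminal."""
    return GRADES[photo_grade] < GRADES[required_grade]

# With a "medium" requirement only low-grade photos are rejected;
# with a "high" requirement both low- and medium-grade photos are rejected.
print(needs_reupload("low", "medium"))   # -> True: send prompt
print(needs_reupload("medium", "high"))  # -> True: send prompt
print(needs_reupload("high", "medium"))  # -> False: accept photo
```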
In a preferred embodiment, on the basis of the embodiment of Fig. 5 above, the picture quality detection device further includes:
Classification module, configured to classify a predetermined number of claim photos by predetermined sharpness grades, extract a preset proportion of the claim photos under each class as training photos, and take the remaining claim photos under each class as verification photos;
Training module, configured to perform feature extraction on each training photo under each class to obtain first pixel vectors to be input into the deep convolutional neural network model, and to input the first pixel vector corresponding to each training photo under each class into the deep convolutional neural network model, so as to train and generate the deep convolutional neural network model used for recognition;
Verification module, configured to perform feature extraction on each verification photo under each class to obtain second pixel vectors to be input into the deep convolutional neural network model generated by training, and to input the second pixel vector corresponding to each verification photo under each class into the deep convolutional neural network model generated by training, so as to verify the accuracy rate of the deep convolutional neural network model generated by training;
Ending module, configured to end the training if the accuracy rate of the deep convolutional neural network model generated by training is greater than or equal to a preset threshold.
In the present embodiment, when the deep convolutional neural network model is trained and generated, a predetermined number of claim photos may be classified in advance by predetermined sharpness grades; for example, 500,000 claim photos are classified by sharpness grade. The classification may be made in advance according to the resolution of the claim photos: high-resolution claim photos are of the high-sharpness grade, claim photos of intermediate resolution are of the medium-sharpness grade, and low-resolution claim photos are of the low-sharpness grade. After the classification is completed, a preset proportion of the claim photos under each class is extracted as training photos, for example 70% of the predetermined number of claim photos are taken as training photos, and the remaining claim photos under each class are taken as verification photos, for example the remaining 30% of the claim photos are taken as verification photos.
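The per-grade split described above (for example 70% training, 30% verification under each class) might be sketched like this; the grade labels, file names, and small counts are made up for the example.

```python
import random

def split_by_grade(photos_by_grade, train_ratio=0.7, seed=42):
    """For each sharpness grade, take train_ratio of the photos as training
    photos and keep the remainder as verification photos."""
    rng = random.Random(seed)
    train, verify = {}, {}
    for grade, photos in photos_by_grade.items():
        shuffled = list(photos)
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * train_ratio)
        train[grade] = shuffled[:cut]
        verify[grade] = shuffled[cut:]
    return train, verify

# Ten made-up photo names per grade; the embodiment itself mentions a far
# larger pool (e.g. 500,000 claim photos).
dataset = {g: ["%s_%03d.jpg" % (g, i) for i in range(10)]
           for g in ("high", "medium", "low")}
train, verify = split_by_grade(dataset)
print({g: (len(train[g]), len(verify[g])) for g in dataset})
# -> {'high': (7, 3), 'medium': (7, 3), 'low': (7, 3)}
```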
Then, feature extraction is performed on each training photo under each class to extract different feature maps, and convolution processing is performed on the feature maps to finally obtain the first pixel vectors to be input into the deep convolutional neural network model; the deep convolutional neural network model used for recognition is trained and generated using these first pixel vectors. Feature extraction is likewise performed on each verification photo under each class to extract different feature maps, and convolution processing is performed on the feature maps to finally obtain the second pixel vectors to be input into the deep convolutional neural network model; the accuracy rate of the deep convolutional neural network model generated by training is verified using these second pixel vectors. If the verification shows that the accuracy rate of the deep convolutional neural network model generated by training is greater than or equal to a preset value, for example greater than or equal to 0.95, the deep convolutional neural network model generated by training can achieve the expected sharpness recognition effect, and the training ends; the deep convolutional neural network model generated by this training can subsequently be used to perform sharpness recognition on claim photos.
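The verification step and the 0.95 accuracy gate might look roughly as follows; the stand-in model and labelled samples are invented for the illustration and are not this embodiment's network.

```python
def verification_accuracy(model, samples):
    """samples: (second_pixel_vector, true_grade) pairs; the model's
    predicted grades are compared against the labelled grades."""
    correct = sum(1 for vec, grade in samples if model(vec) == grade)
    return correct / len(samples)

PRESET_THRESHOLD = 0.95

# Toy stand-in "model": predicts a grade from the vector's mean value.
model = lambda vec: "high" if sum(vec) / len(vec) > 0.5 else "low"
samples = [([0.9, 0.8], "high"), ([0.1, 0.2], "low"),
           ([0.7, 0.9], "high"), ([0.0, 0.3], "low")]
acc = verification_accuracy(model, samples)
print(acc >= PRESET_THRESHOLD)  # -> True: training may end
```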
In a preferred embodiment, on the basis of the above embodiments, the picture quality detection device further includes: a loop module, configured to send second prompt information to the user terminal if the accuracy rate of the deep convolutional neural network model generated by training is less than the preset threshold, so as to remind the user to increase the sample number of claim photos, and at the same time to trigger the classification module to loop.
In the present embodiment, if the accuracy rate of the deep convolutional neural network model generated by training is less than the preset threshold, for example less than 0.95, the deep convolutional neural network model generated by training cannot achieve the expected sharpness recognition effect; second prompt information is sent to the user terminal to remind the user to increase the sample number of claim photos, and training of the deep convolutional neural network model continues on the basis of the increased claim photos. Specifically, after the increased claim photos uploaded by the user terminal are received, the classification module above may be triggered to loop, and the increased claim photos are classified by the predetermined sharpness grades, until the accuracy rate of the deep convolutional neural network model generated by training is greater than or equal to the preset threshold.
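The retrain-until-accurate loop described above can be sketched schematically; the train/verify callables and the simulated accuracy curve below are placeholders, not this embodiment's implementation.

```python
def train_until_accurate(train_fn, verify_fn, request_more_fn,
                         threshold=0.95, max_rounds=10):
    """Loop: train, verify; while accuracy stays below the threshold, prompt
    for more sample photos and go round again."""
    for round_no in range(1, max_rounds + 1):
        model = train_fn()
        acc = verify_fn(model)
        if acc >= threshold:
            return model, acc, round_no
        request_more_fn()  # second prompt: user increases the sample number
    raise RuntimeError("accuracy threshold not reached within max_rounds")

# Toy simulation in which accuracy improves as the sample pool grows.
pool = {"n": 100}
train_fn = lambda: pool["n"]                      # "model" = sample count
verify_fn = lambda m: min(0.99, 0.80 + m / 1000)  # invented accuracy curve
request_more_fn = lambda: pool.update(n=pool["n"] + 100)
model, acc, rounds = train_until_accurate(train_fn, verify_fn, request_more_fn)
print(rounds, acc)  # -> 2 0.99
```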
In a preferred embodiment, on the basis of the above embodiments, the training module is specifically configured to: for each training photo under each class, use different convolution kernels to traverse from the first pixel block of the training photo to the last pixel block while performing convolution operations, so as to extract the different feature maps corresponding to each training photo; and perform pooling and rasterization processing on the extracted feature maps of each training photo, so that the extracted feature maps of each training photo are processed into first pixel vectors of consistent dimension.
The verification module is specifically configured to: for each verification photo under each class, use different convolution kernels to traverse from the first pixel block of the verification photo to the last pixel block while performing convolution operations, so as to extract the different feature maps corresponding to each verification photo; and perform pooling and rasterization processing on the extracted feature maps of each verification photo, so that the extracted feature maps of each verification photo are processed into second pixel vectors of consistent dimension.
In the present embodiment, for each training photo or verification photo under each class, different convolution kernels are used to traverse from the first pixel block of the photo to the last pixel block while performing convolution operations. As shown in Fig. 4, for each pixel block to be input, [(0,0), (1,0), (2,0), (0,1), (1,1), (2,1), (0,2), (1,2), (2,2)], the convolution kernel (i, h, g, f, e, d, c, b, a) is used to perform the convolution operation; the convolution starts at the first pixel block and traverses to the last pixel block, each pixel block yielding one output value, and the set of all output values forms the different feature maps corresponding to the training photo or verification photo. Then, pooling and rasterization processing is performed on the feature maps of each training photo, so that the extracted feature maps of each training photo are processed into first pixel vectors of consistent dimension; likewise, pooling and rasterization processing is performed on the feature maps of each verification photo, so that the extracted feature maps of each verification photo are processed into second pixel vectors of consistent dimension.
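A worked sketch of this traversal, assuming "valid" sliding of a 3x3 kernel and interpreting the (i, h, g, f, e, d, c, b, a) ordering of Fig. 4 as a flipped kernel (true convolution rather than cross-correlation); the toy image and kernels are invented for the example.

```python
def convolve_feature_map(image, kernel):
    """Slide a 3x3 kernel from the first pixel block to the last; every
    pixel block contributes one value of the resulting feature map. The
    kernel is flipped first, matching the (i, h, g, ..., a) ordering."""
    k = 3
    flipped = [row[::-1] for row in kernel[::-1]]
    h, w = len(image), len(image[0])
    return [[sum(image[y + i][x + j] * flipped[i][j]
                 for i in range(k) for j in range(k))
             for x in range(w - k + 1)]
            for y in range(h - k + 1)]

image = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]           # toy 4x4 photo
ones = [[1] * 3 for _ in range(3)]   # symmetric kernel: flip has no effect
print(convolve_feature_map(image, ones))    # -> [[54, 63], [90, 99]]

corner = [[0, 0, 0], [0, 0, 0], [0, 0, 1]]  # single weight, bottom-right
print(convolve_feature_map(image, corner))  # -> [[1, 2], [5, 6]]
# After the flip, the bottom-right weight multiplies each block's top-left
# pixel, so the output reproduces image[y][x] for each block position.
```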
In a preferred embodiment, on the basis of the above embodiments, the training module is specifically configured to estimate the parameters of the deep convolutional neural network model using the back-propagation (BP) method. In training, convolution, pooling and rasterization operations are performed in sequence on each training photo, and the residual between the actual value and the estimated value of the claim photo can be obtained; each residual produced is propagated back through rasterization, pooling and convolution to update the parameters, and these forward and backward operations are repeated until the global error converges. The parameters of the deep convolutional neural network model are obtained when the global error converges. In the first round of training, the parameters of the deep convolutional neural network model take default values, which will not be described in detail here.
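Reduced to a single trainable weight, the forward/backward alternation until the global error converges might be sketched as follows. The data, learning rate and tolerance are invented; a real multi-layer model would propagate the residuals back through rasterization, pooling and convolution as described above, but the shape of the procedure is the same.

```python
def train_bp(samples, lr=0.1, tol=1e-9, max_epochs=10_000):
    """Estimate one parameter by BP: forward pass, residual, backward
    update, repeated until the global error converges."""
    w = 0.0                                # default initial parameter
    prev_error = float("inf")
    for _ in range(max_epochs):
        error, grad = 0.0, 0.0
        for x, target in samples:          # forward pass
            residual = w * x - target      # estimated value vs. actual value
            error += residual * residual
            grad += 2 * residual * x       # backward pass: propagate residual
        w -= lr * grad / len(samples)      # update the parameter
        if abs(prev_error - error) < tol:  # global error has converged
            break
        prev_error = error
    return w

samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # made-up target: y = 2x
w = train_bp(samples)
print(round(w, 6))  # -> 2.0
```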
The foregoing are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.

Claims (10)

1. A picture quality detection method, characterized in that the picture quality detection method includes:
S1, after a vehicle insurance claim server receives a claim photo uploaded by a user terminal, performing sharpness recognition on the received claim photo using a deep convolutional neural network model generated by training in advance, to determine a sharpness grade of the claim photo;
S2, if the sharpness grade of the claim photo is lower than a preset sharpness grade, sending first prompt information to the user terminal to remind the user to upload a claim photo again.
2. The picture quality detection method according to claim 1, characterized in that before step S1 the method further includes:
S01, classifying a predetermined number of claim photos by predetermined sharpness grades, extracting a preset proportion of the claim photos under each class as training photos, and taking the remaining claim photos under each class as verification photos;
S02, performing feature extraction on each training photo under each class to obtain first pixel vectors to be input into the deep convolutional neural network model, and inputting the first pixel vector corresponding to each training photo under each class into the deep convolutional neural network model, to train and generate the deep convolutional neural network model used for recognition;
S03, performing feature extraction on each verification photo under each class to obtain second pixel vectors to be input into the deep convolutional neural network model generated by training, and inputting the second pixel vector corresponding to each verification photo under each class into the deep convolutional neural network model generated by training, to verify an accuracy rate of the deep convolutional neural network model generated by training;
S04, if the accuracy rate of the deep convolutional neural network model generated by training is greater than or equal to a preset threshold, ending the training.
3. The picture quality detection method according to claim 2, characterized in that after step S03 the method further includes:
S05, if the accuracy rate of the deep convolutional neural network model generated by training is less than the preset threshold, sending second prompt information to the user terminal to remind the user to increase the sample number of claim photos.
4. The picture quality detection method according to claim 2, characterized in that the step of performing feature extraction on each training photo or verification photo under each class includes:
for each training photo or verification photo under each class, using different convolution kernels to traverse from the first pixel block of the training photo or verification photo to the last pixel block while performing convolution operations, to extract different feature maps corresponding to each training photo or verification photo;
performing pooling and rasterization processing on the extracted feature maps of each training photo or verification photo, processing the extracted feature maps of each training photo into first pixel vectors of consistent dimension and processing the extracted feature maps of each verification photo into second pixel vectors of consistent dimension.
5. The picture quality detection method according to claim 2, characterized in that in step S02 the parameters of the deep convolutional neural network model are estimated using a back-propagation (BP) method.
6. A picture quality detection device, characterized in that the picture quality detection device includes:
a recognition module, configured to, after a claim photo uploaded by a user terminal is received, perform sharpness recognition on the received claim photo using a deep convolutional neural network model generated by training in advance, to determine a sharpness grade of the claim photo;
a prompting module, configured to send first prompt information to the user terminal if the sharpness grade of the claim photo is lower than a preset sharpness grade, to remind the user to upload a claim photo again.
7. The picture quality detection device according to claim 6, characterized in that the picture quality detection device further includes:
a classification module, configured to classify a predetermined number of claim photos by predetermined sharpness grades, extract a preset proportion of the claim photos under each class as training photos, and take the remaining claim photos under each class as verification photos;
a training module, configured to perform feature extraction on each training photo under each class to obtain first pixel vectors to be input into the deep convolutional neural network model, and input the first pixel vector corresponding to each training photo under each class into the deep convolutional neural network model, to train and generate the deep convolutional neural network model used for recognition;
a verification module, configured to perform feature extraction on each verification photo under each class to obtain second pixel vectors to be input into the deep convolutional neural network model generated by training, and input the second pixel vector corresponding to each verification photo under each class into the deep convolutional neural network model generated by training, to verify an accuracy rate of the deep convolutional neural network model generated by training;
an ending module, configured to end the training if the accuracy rate of the deep convolutional neural network model generated by training is greater than or equal to a preset threshold.
8. The picture quality detection device according to claim 7, characterized in that the picture quality detection device further includes:
a loop module, configured to send second prompt information to the user terminal if the accuracy rate of the deep convolutional neural network model generated by training is less than the preset threshold, to remind the user to increase the sample number of claim photos.
9. The picture quality detection device according to claim 7, characterized in that the training module is specifically configured to: for each training photo under each class, use different convolution kernels to traverse from the first pixel block of the training photo to the last pixel block while performing convolution operations, to extract different feature maps corresponding to each training photo; and perform pooling and rasterization processing on the extracted feature maps of each training photo, processing the extracted feature maps of each training photo into first pixel vectors of consistent dimension;
the verification module is specifically configured to: for each verification photo under each class, use different convolution kernels to traverse from the first pixel block of the verification photo to the last pixel block while performing convolution operations, to extract different feature maps corresponding to each verification photo; and perform pooling and rasterization processing on the extracted feature maps of each verification photo, processing the extracted feature maps of each verification photo into second pixel vectors of consistent dimension.
10. The picture quality detection device according to claim 7, characterized in that the training module is specifically configured to estimate the parameters of the deep convolutional neural network model using a back-propagation (BP) method.
CN201610704799.1A 2016-08-22 2016-08-22 The detection method and device of picture quality Active CN106372651B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201610704799.1A CN106372651B (en) 2016-08-22 2016-08-22 The detection method and device of picture quality
PCT/CN2017/091306 WO2018036276A1 (en) 2016-08-22 2017-06-30 Image quality detection method, device, server and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610704799.1A CN106372651B (en) 2016-08-22 2016-08-22 The detection method and device of picture quality

Publications (2)

Publication Number Publication Date
CN106372651A true CN106372651A (en) 2017-02-01
CN106372651B CN106372651B (en) 2018-03-06

Family

ID=57878027

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610704799.1A Active CN106372651B (en) 2016-08-22 2016-08-22 The detection method and device of picture quality

Country Status (2)

Country Link
CN (1) CN106372651B (en)
WO (1) WO2018036276A1 (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107342962A (en) * 2017-07-03 2017-11-10 北京邮电大学 Deep learning intelligence Analysis On Constellation Map method based on convolutional neural networks
WO2018036276A1 (en) * 2016-08-22 2018-03-01 平安科技(深圳)有限公司 Image quality detection method, device, server and storage medium
CN107918916A (en) * 2017-09-13 2018-04-17 平安科技(深圳)有限公司 Self-service Claims Resolution application processing method, device, computer equipment and storage medium
WO2018191435A1 (en) * 2017-04-11 2018-10-18 Alibaba Group Holding Limited Picture-based vehicle loss assessment method and apparatus, and electronic device
CN108764261A (en) * 2018-05-31 2018-11-06 努比亚技术有限公司 A kind of image processing method, mobile terminal and storage medium
CN109785312A (en) * 2019-01-16 2019-05-21 创新奇智(广州)科技有限公司 A kind of image fuzzy detection method, system and electronic equipment
CN110689322A (en) * 2019-09-27 2020-01-14 成都知识视觉科技有限公司 Artificial intelligence auxiliary claims checking system suitable for insurance claims settlement process
CN110766033A (en) * 2019-05-21 2020-02-07 北京嘀嘀无限科技发展有限公司 Image processing method, image processing device, electronic equipment and storage medium
US10789786B2 (en) 2017-04-11 2020-09-29 Alibaba Group Holding Limited Picture-based vehicle loss assessment
US10817956B2 (en) 2017-04-11 2020-10-27 Alibaba Group Holding Limited Image-based vehicle damage determining method and apparatus, and electronic device
WO2021012891A1 (en) * 2019-07-23 2021-01-28 平安科技(深圳)有限公司 Vehicle loss assessment method, device, apparatus, and storage medium
CN112788131A (en) * 2020-12-31 2021-05-11 平安科技(深圳)有限公司 Method, system and storage medium for generating early warning picture based on artificial intelligence
CN114241180A (en) * 2021-12-15 2022-03-25 平安科技(深圳)有限公司 Image detection method and device for vehicle damage claims, computer equipment and storage medium
US11544914B2 (en) 2021-02-18 2023-01-03 Inait Sa Annotation of 3D models with signs of use visible in 2D images

Families Citing this family (11)

Publication number Priority date Publication date Assignee Title
CN112911393B (en) * 2018-07-24 2023-08-01 广州虎牙信息科技有限公司 Method, device, terminal and storage medium for identifying part
CN109145903A (en) * 2018-08-22 2019-01-04 阿里巴巴集团控股有限公司 A kind of image processing method and device
CN109948625A (en) * 2019-03-07 2019-06-28 上海汽车集团股份有限公司 Definition of text images appraisal procedure and system, computer readable storage medium
CN109949323B (en) * 2019-03-19 2022-12-20 广东省农业科学院农业生物基因研究中心 Crop seed cleanliness judgment method based on deep learning convolutional neural network
CN110705847A (en) * 2019-09-18 2020-01-17 中国南方电网有限责任公司超高压输电公司广州局 Intelligent substation inspection method and system based on image recognition technology
CN110795579B (en) * 2019-10-29 2022-11-18 Oppo广东移动通信有限公司 Picture cleaning method and device, terminal and storage medium
CN111327831B (en) * 2020-03-30 2021-09-10 北京智美智学科技有限公司 Image acquisition method and device for UGC, electronic equipment and system
CN111709906A (en) * 2020-04-13 2020-09-25 北京深睿博联科技有限责任公司 Medical image quality evaluation method and device
CN111563663B (en) * 2020-04-16 2023-03-21 五邑大学 Robot, service quality evaluation method and system
CN112365451A (en) * 2020-10-23 2021-02-12 微民保险代理有限公司 Method, device and equipment for determining image quality grade and computer readable medium
CN112803341A (en) * 2020-12-31 2021-05-14 国网浙江省电力有限公司嘉兴供电公司 Non-invasive cable anti-breaking monitoring device and method

Citations (5)

Publication number Priority date Publication date Assignee Title
CN103544705A (en) * 2013-10-25 2014-01-29 华南理工大学 Image quality testing method based on deep convolutional neural network
CN104866524A (en) * 2015-04-10 2015-08-26 大连交通大学 Fine classification method for commodity images
CN105160678A (en) * 2015-09-02 2015-12-16 山东大学 Convolutional-neural-network-based reference-free three-dimensional image quality evaluation method
CN105512624A (en) * 2015-12-01 2016-04-20 天津中科智能识别产业技术研究院有限公司 Smile face recognition method and device for human face image
CN105719188A (en) * 2016-01-22 2016-06-29 平安科技(深圳)有限公司 Method and server for achieving insurance claim anti-fraud based on consistency of multiple pictures

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN101609500B (en) * 2008-12-01 2012-07-25 公安部第一研究所 Quality estimation method of exit-entry digital portrait photos
KR101906827B1 (en) * 2012-04-10 2018-12-05 삼성전자주식회사 Apparatus and method for taking a picture continously
CN105405054A (en) * 2015-12-11 2016-03-16 平安科技(深圳)有限公司 Insurance claim antifraud implementation method based on claim photo deep learning and server
CN106372651B (en) * 2016-08-22 2018-03-06 平安科技(深圳)有限公司 The detection method and device of picture quality


Cited By (18)

Publication number Priority date Publication date Assignee Title
WO2018036276A1 (en) * 2016-08-22 2018-03-01 平安科技(深圳)有限公司 Image quality detection method, device, server and storage medium
US10817956B2 (en) 2017-04-11 2020-10-27 Alibaba Group Holding Limited Image-based vehicle damage determining method and apparatus, and electronic device
WO2018191435A1 (en) * 2017-04-11 2018-10-18 Alibaba Group Holding Limited Picture-based vehicle loss assessment method and apparatus, and electronic device
US11049334B2 (en) 2017-04-11 2021-06-29 Advanced New Technologies Co., Ltd. Picture-based vehicle loss assessment
WO2018191437A1 (en) * 2017-04-11 2018-10-18 Alibaba Group Holding Limited Image-based vehicle loss assessment method, apparatus, and system, and electronic device
US10789786B2 (en) 2017-04-11 2020-09-29 Alibaba Group Holding Limited Picture-based vehicle loss assessment
CN107342962A (en) * 2017-07-03 2017-11-10 北京邮电大学 Deep learning intelligence Analysis On Constellation Map method based on convolutional neural networks
WO2019052226A1 (en) * 2017-09-13 2019-03-21 平安科技(深圳)有限公司 Processing method and apparatus for self-service claim settlement application, computer device and storage medium
CN107918916A (en) * 2017-09-13 2018-04-17 平安科技(深圳)有限公司 Self-service Claims Resolution application processing method, device, computer equipment and storage medium
CN108764261A (en) * 2018-05-31 2018-11-06 努比亚技术有限公司 A kind of image processing method, mobile terminal and storage medium
CN109785312B (en) * 2019-01-16 2020-10-09 创新奇智(广州)科技有限公司 Image blur detection method and system and electronic equipment
CN109785312A (en) * 2019-01-16 2019-05-21 创新奇智(广州)科技有限公司 A kind of image fuzzy detection method, system and electronic equipment
CN110766033A (en) * 2019-05-21 2020-02-07 北京嘀嘀无限科技发展有限公司 Image processing method, image processing device, electronic equipment and storage medium
WO2021012891A1 (en) * 2019-07-23 2021-01-28 平安科技(深圳)有限公司 Vehicle loss assessment method, device, apparatus, and storage medium
CN110689322A (en) * 2019-09-27 2020-01-14 成都知识视觉科技有限公司 Artificial intelligence auxiliary claims checking system suitable for insurance claims settlement process
CN112788131A (en) * 2020-12-31 2021-05-11 平安科技(深圳)有限公司 Method, system and storage medium for generating early warning picture based on artificial intelligence
US11544914B2 (en) 2021-02-18 2023-01-03 Inait Sa Annotation of 3D models with signs of use visible in 2D images
CN114241180A (en) * 2021-12-15 2022-03-25 平安科技(深圳)有限公司 Image detection method and device for vehicle damage claims, computer equipment and storage medium

Also Published As

Publication number Publication date
CN106372651B (en) 2018-03-06
WO2018036276A1 (en) 2018-03-01

Similar Documents

Publication Publication Date Title
CN106372651B (en) The detection method and device of picture quality
CN110674688B (en) Face recognition model acquisition method, system and medium for video monitoring scene
CN111612807B (en) Small target image segmentation method based on scale and edge information
CN108229479A (en) The training method and device of semantic segmentation model, electronic equipment, storage medium
CN108805016B (en) Head and shoulder area detection method and device
CN106415594A (en) A method and a system for face verification
CN115063573A (en) Multi-scale target detection method based on attention mechanism
CN107563274A (en) A kind of vehicle checking method and method of counting of the video based on confrontation e-learning
CN104408728A (en) Method for detecting forged images based on noise estimation
CN109325435B (en) Video action recognition and positioning method based on cascade neural network
CN110956080A (en) Image processing method and device, electronic equipment and storage medium
CN113361567B (en) Image processing method, device, electronic equipment and storage medium
CN114529687A (en) Image reconstruction method and device, electronic equipment and computer readable storage medium
Guo et al. Haze visibility enhancement for promoting traffic situational awareness in vision-enabled intelligent transportation
CN111275070B (en) Signature verification method and device based on local feature matching
CN113807237B (en) Training of in vivo detection model, in vivo detection method, computer device, and medium
CN116630917A (en) Lane line detection method
CN115661611A (en) Infrared small target detection method based on improved Yolov5 network
CN115661803A (en) Image definition detection method, electronic device, and computer-readable storage medium
CN114743045A (en) Small sample target detection method based on double-branch area suggestion network
CN114596609A (en) Audio-visual counterfeit detection method and device
CN113780241A (en) Acceleration method and device for detecting salient object
CN112733686A (en) Target object identification method and device used in image of cloud federation
Varkentin et al. Development of an application for vehicle classification using neural networks technologies
CN112699928B (en) Non-motor vehicle detection and identification method based on deep convolutional network

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant