CN114120343A - Certificate image quality evaluation method and terminal

Certificate image quality evaluation method and terminal

Info

Publication number
CN114120343A
CN114120343A (application CN202111438708.1A)
Authority
CN
China
Prior art keywords
area
calculating
certificate
image
fraction
Prior art date
Legal status
Pending
Application number
CN202111438708.1A
Other languages
Chinese (zh)
Inventor
朱光强
罗富章
李山路
莫家源
龚小龙
Current Assignee
Maxvision Technology Corp
Original Assignee
Maxvision Technology Corp
Priority date
Filing date
Publication date
Application filed by Maxvision Technology Corp
Priority to CN202111438708.1A
Publication of CN114120343A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30168 - Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a certificate image quality evaluation method, which comprises the following steps: acquiring a certificate image F to be subjected to quality evaluation; calculating a sharpness score Sc based on the Tenengrad gradient function; performing semantic segmentation of the non-occluded part of the certificate image, and calculating an occlusion score So from the segmentation result of the non-occluded part; detecting the reflective area of the certificate image F, and calculating a reflection score Sr from the reflective area; calculating a brightness anomaly score Sb based on the pixel-value weighted average deviation; calculating a distortion score Sa based on the minimum bounding rectangle of the certificate region in the certificate image; performing semantic segmentation of the dirty part of the certificate image, and calculating a dirt score Sd from the segmentation result of the dirty part; calculating an area score St of the certificate region in the certificate image; and combining the sharpness score Sc, the occlusion score So, the reflection score Sr, the brightness anomaly score Sb, the distortion score Sa, the dirt score Sd and the area score St to obtain the final evaluation score. The application also provides a terminal.

Description

Certificate image quality evaluation method and terminal
Technical Field
The present application relates to the field of image processing, and more particularly, to a method and a terminal for evaluating image quality of a document.
Background
At customs, airports, railway stations and similar locations, when a handheld terminal is used for document inspection, certificates without a built-in chip are generally identified by photographing the certificate and then performing OCR on the photograph. The lighting, the shooting angle, dirt on the certificate and other factors all affect the OCR result, so a method for evaluating certificate shooting quality is needed to judge whether the captured image is good enough, in order to obtain a high-quality, clear certificate image and ensure that subsequent certificate recognition can be completed successfully.
Disclosure of Invention
In view of the prior art, the technical problem to be solved by the application is to provide a certificate image quality evaluation method and terminal that can evaluate certificate image quality effectively and comprehensively from multiple aspects.
In order to solve the technical problem, the present application provides a certificate image quality evaluation method, including:
acquiring a certificate image F to be subjected to quality evaluation;
calculating a sharpness score Sc of the certificate image based on the Tenengrad gradient function;
performing semantic segmentation of the non-occluded part of the certificate image F, and calculating an occlusion score So of the certificate image F from the segmentation result of the non-occluded part;
detecting the reflective area of the certificate image F, and calculating a reflection score Sr of the certificate image F from the reflective area;
calculating a brightness anomaly score Sb of the certificate image F based on the pixel-value weighted average deviation;
calculating a distortion score Sa of the certificate image F based on the minimum bounding rectangle of the certificate region in the certificate image F;
performing semantic segmentation of the dirty part of the certificate image F, and calculating a dirt score Sd of the certificate image from the segmentation result of the dirty part;
calculating an area score St of the certificate region in the certificate image F; and
combining the sharpness score Sc, the occlusion score So, the reflection score Sr, the brightness anomaly score Sb, the distortion score Sa, the dirt score Sd and the area score St to obtain the final evaluation score.
In one possible implementation, calculating the sharpness score Sc of the certificate image based on the Tenengrad gradient function comprises the following steps:
acquiring N reference images under a standard condition;
calculating the sharpness value of each of the N reference images using the Tenengrad function;
sorting the sharpness values of the N reference images from high to low, selecting the first M reference images in the sorted result, and calculating the average sharpness Ct of the M reference images;
calculating the sharpness value Cp of the certificate image F using the Tenengrad function;
calculating the sharpness score Sc from the average sharpness Ct and the sharpness value Cp of the certificate image F: Sc = min(1, Cp/Ct) × 100.
In one possible implementation, performing semantic segmentation of the non-occluded part of the certificate image F and calculating the occlusion score So of the certificate image F from the segmentation result of the non-occluded part comprises the following steps:
detecting the certificate region of the certificate image F and the four corner points of the certificate region;
calculating the pixel area Ao of the certificate region enclosed by the four corner points;
performing non-occlusion semantic segmentation on the certificate image F using a BiSeNet deep learning model to obtain a mask Mu of the non-occluded part, and calculating the area Au of the region covered by the mask Mu;
calculating the occlusion score So from the pixel area Ao and the area Au: So = min(1, Ao/Au) × 100.
In one possible implementation, detecting the reflective area of the certificate image F and calculating the reflection score Sr of the certificate image F from the reflective area comprises the following steps:
obtaining a mask Mf of the reflective part from the reflective area of the certificate image F;
calculating the overlap region Muf of the reflective-part mask Mf and the non-occluded-part mask Mu, and calculating the area Auf of the overlap region Muf;
calculating the reflection score Sr from the area Auf of the overlap region Muf and the pixel area Ao: Sr = (1 - Auf/Ao) × 100.
In one possible implementation, calculating the brightness anomaly score Sb of the certificate image F based on the pixel-value weighted average deviation comprises the following steps:
acquiring N reference images under a standard condition;
calculating the average pixel value T of the gray-scale images of the N reference images;
calculating the average pixel value t of the gray-scale image of the certificate image F;
calculating the brightness anomaly score Sb from the average pixel value T and the average pixel value t: Sb = max[lm, 1 - (abs(t - T)/T)];
where lm is a lower limit value, the value range of lm being (0, 1), and abs(t - T) denotes the absolute value of the difference t - T.
In one possible implementation, calculating the distortion score Sa of the certificate image F based on the minimum bounding rectangle of the certificate region in the certificate image F comprises the following steps:
calculating the minimum bounding rectangle of the certificate region of the certificate image F;
calculating the area Ar of the minimum bounding rectangle;
calculating the distortion score Sa from the area Ar of the minimum bounding rectangle and the pixel area Ao: Sa = (Ao/Ar) × 100.
In one possible implementation, performing semantic segmentation of the dirty part of the certificate image F and calculating the dirt score Sd of the certificate image from the segmentation result of the dirty part comprises the following steps:
performing semantic segmentation of the dirty part of the certificate image F using a BiSeNet deep learning model to obtain a mask Md of the dirty part;
calculating the intersection region Min of the dirty-part mask Md and the non-occluded-part mask Mu, Min = Md × Mu;
calculating the area Ad of the intersection region Min;
calculating the dirt score Sd from the area Ad and the pixel area Ao: Sd = (1 - (Ad/Ao)) × 100.
In one possible implementation, calculating the area score St of the certificate region in the certificate image F comprises the following steps:
calculating the total area Aall of the certificate image;
calculating the area score St from the total area Aall and the area Au of the non-occluded-part mask Mu: St = (Au/Aall) × 100.
In one possible implementation, combining the sharpness score Sc, the occlusion score So, the reflection score Sr, the brightness anomaly score Sb, the distortion score Sa, the dirt score Sd and the area score St to obtain the final evaluation score S is performed as:
S = (Sc×Wc + So×Wo + Sr×Wr + Sb×Wb + Sa×Wa + Sd×Wd + St×Wt) / (Wc + Wo + Wr + Wb + Wa + Wd + Wt);
where Wc, Wo, Wr, Wb, Wa, Wd and Wt are the weight coefficients of the sharpness score Sc, the occlusion score So, the reflection score Sr, the brightness anomaly score Sb, the distortion score Sa, the dirt score Sd and the area score St respectively, and the value range of Wc, Wo, Wr, Wb, Wa, Wd and Wt is (0, 1).
The certificate image evaluation method above evaluates the certificate image effectively from multiple aspects: the sharpness of the certificate image, whether the certificate is occluded, whether the certificate region reflects light, whether the brightness of the certificate image is abnormal, whether the certificate image is distorted, whether the certificate image is dirty, and whether the size of the certificate region in the image is appropriate.
The application also provides a terminal, which comprises a processor, a memory and an acquisition device; the acquisition device is used for capturing certificate images; the memory is used for storing executable code; and the processor is configured to execute the executable code stored in the memory to perform the certificate image quality evaluation method.
Because the terminal evaluates the certificate image from multiple aspects with the certificate image evaluation method above (the sharpness of the certificate image, whether the certificate is occluded, whether the certificate region reflects light, whether the brightness is abnormal, whether the image is distorted, whether the image is dirty, and whether the size of the certificate region is appropriate), a high-quality, clear certificate image can be obtained and the difficulty of image analysis in subsequent certificate recognition can be reduced.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is an overall flow diagram of a document image quality assessment method of an embodiment of the present application;
FIG. 2 is a flow chart of a method of solving for a sharpness score according to an embodiment of the present application;
FIG. 3 is a flowchart of a method for solving for occlusion scores, glint scores, and distortion scores according to an embodiment of the present application.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects to be solved by the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that when an element is referred to as being "secured to" or "disposed on" another element, it can be directly on the other element or be indirectly on the other element. When an element is referred to as being "connected to" another element, it can be directly connected to the other element or be indirectly connected to the other element.
It will be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like, as used herein, refer to an orientation or positional relationship indicated in the drawings that is solely for the purpose of facilitating the description and simplifying the description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus should not be considered as limiting the present application.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
The certificate image quality evaluation method and the terminal of the present application will now be described in detail with reference to the accompanying drawings.
Referring to FIG. 1, the certificate image quality evaluation method provided by the embodiment of the application comprises the following steps:
Step S100: acquiring a certificate image F to be subjected to quality evaluation;
Step S200: calculating a sharpness score Sc of the certificate image based on the Tenengrad gradient function;
Step S300: performing semantic segmentation of the non-occluded part of the certificate image F, and calculating an occlusion score So of the certificate image F from the segmentation result of the non-occluded part;
Step S400: detecting the reflective area of the certificate image F, and calculating a reflection score Sr of the certificate image F from the reflective area;
Step S500: calculating a brightness anomaly score Sb of the certificate image F based on the pixel-value weighted average deviation;
Step S600: calculating a distortion score Sa of the certificate image F based on the minimum bounding rectangle of the certificate region in the certificate image F;
Step S700: performing semantic segmentation of the dirty part of the certificate image F, and calculating a dirt score Sd of the certificate image from the segmentation result of the dirty part;
Step S800: calculating an area score St of the certificate region in the certificate image F;
Step S900: combining the sharpness score Sc, the occlusion score So, the reflection score Sr, the brightness anomaly score Sb, the distortion score Sa, the dirt score Sd and the area score St to obtain the final evaluation score.
Referring to FIG. 2, in the embodiment, calculating the sharpness score Sc of the certificate image based on the Tenengrad gradient function comprises the following steps:
Step S201: acquiring N reference images under a standard condition;
Step S202: calculating the sharpness value of each of the N reference images using the Tenengrad function;
Step S203: sorting the sharpness values of the N reference images from high to low, selecting the first M reference images in the sorted result, and calculating the average sharpness Ct of the M reference images;
Step S204: calculating the sharpness value Cp of the certificate image F using the Tenengrad function;
Step S205: calculating the sharpness score Sc from the average sharpness Ct and the sharpness value Cp of the certificate image F: Sc = min(1, Cp/Ct) × 100.
In the above steps, the standard condition can be understood as an environment in which the lighting is soft and bright with no reflections, the certificate faces the camera squarely, and the certificate occupies more than 80% of the total image area; the reference images are images acquired under this standard condition.
In the above step S202 and step S203, the image sharpness is calculated with the Tenengrad gradient function as:
D(f) = Σy Σx |G(x, y)|, for G(x, y) > T
G(x, y) = √(Gx(x, y)² + Gy(x, y)²)
where (x, y) is any pixel of the image whose sharpness is to be evaluated, Gx(x, y) is the gradient value in the horizontal direction at that pixel, Gy(x, y) is the gradient value in the vertical direction, and T is a threshold; here T = 0, and T can be set to other values according to actual requirements.
In an embodiment of the application, the gradient values in the horizontal direction and the gradient values in the vertical direction are calculated by, but not limited to, using a Sobel operator or a Roberts operator.
In step S203, the larger N is, the better (at least 100); M can be adjusted according to the actual situation: when the requirement is stricter, M is reduced, and when the requirement is relaxed, M is increased. In one application embodiment, the first M reference images are taken as the first 40 reference images.
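For illustration only, the following is a minimal Python/OpenCV sketch of the Tenengrad-based sharpness score described above; the function names, the Sobel kernel size and the default threshold are assumptions and are not taken from the patent.

```python
import cv2
import numpy as np

def tenengrad(image_bgr, threshold=0.0):
    # Tenengrad sharpness: sum of Sobel gradient magnitudes G(x, y) above a threshold T.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)  # horizontal gradient Gx(x, y)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)  # vertical gradient Gy(x, y)
    g = np.sqrt(gx ** 2 + gy ** 2)                   # G(x, y)
    return float(g[g > threshold].sum())

def sharpness_score(certificate_bgr, reference_images, m=40):
    # Sc = min(1, Cp / Ct) * 100, with Ct averaged over the M sharpest reference images.
    ref_values = sorted((tenengrad(img) for img in reference_images), reverse=True)
    c_t = float(np.mean(ref_values[:m]))             # average sharpness Ct of the top-M references
    c_p = tenengrad(certificate_bgr)                 # sharpness Cp of the image under evaluation
    return min(1.0, c_p / c_t) * 100.0
```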
Referring to FIG. 3, in step S300, performing semantic segmentation of the non-occluded part of the certificate image F and calculating the occlusion score So of the certificate image F from the segmentation result of the non-occluded part comprises the following steps:
Step S301: detecting the certificate region of the certificate image F and the four corner points of the certificate region;
Step S302: calculating the pixel area Ao of the certificate region enclosed by the four corner points;
Step S303: performing non-occlusion semantic segmentation on the certificate image F using a BiSeNet deep learning model to obtain a mask Mu of the non-occluded part, and calculating the area Au of the region covered by the mask Mu;
Step S304: calculating the occlusion score So from the pixel area Ao and the area Au: So = min(1, Ao/Au) × 100.
It will be appreciated that when certain areas of the certificate image F are occluded, particularly the areas that are to be recognized later, this inevitably affects whether subsequent certificate recognition can be completed successfully.
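A minimal sketch of the occlusion score, assuming the four corner points and a binary non-occlusion mask (for example, the output of a BiSeNet-style segmentation network) are already available; the helper names are hypothetical, and the formula is implemented exactly as stated in the text.

```python
import cv2
import numpy as np

def occlusion_score(corners, non_occlusion_mask):
    # So = min(1, Ao / Au) * 100, following the formula as stated in the text.
    # corners: (4, 2) array of the detected certificate corner points.
    # non_occlusion_mask: binary mask Mu of the non-occluded part, same size as the image.
    a_o = cv2.contourArea(np.asarray(corners, dtype=np.float32))  # pixel area Ao of the corner quadrilateral
    a_u = float(np.count_nonzero(non_occlusion_mask))             # area Au of the mask Mu
    return min(1.0, a_o / a_u) * 100.0
```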
With further reference to FIG. 3, in step S400, detecting the reflective area of the certificate image F and calculating the reflection score Sr of the certificate image F from the reflective area comprises the following steps:
Step S401: obtaining a mask Mf of the reflective part from the reflective area of the certificate image F;
Step S402: calculating the overlap region Muf of the reflective-part mask Mf and the non-occluded-part mask Mu, and calculating the area Auf of the overlap region Muf;
Step S403: calculating the reflection score Sr from the area Auf of the overlap region Muf and the pixel area Ao: Sr = (1 - Auf/Ao) × 100.
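A sketch of the reflection score, assuming a binary mask of the reflective (specular highlight) region has already been obtained; how the reflective area is detected is not specified here, so the mask is simply taken as an input.

```python
import numpy as np

def reflection_score(reflection_mask, non_occlusion_mask, a_o):
    # Sr = (1 - Auf / Ao) * 100, where Auf is the area of the overlap Muf of the
    # reflective mask Mf with the non-occluded mask Mu, and Ao is the certificate-region area.
    overlap = np.logical_and(reflection_mask > 0, non_occlusion_mask > 0)  # Muf
    a_uf = float(np.count_nonzero(overlap))
    return (1.0 - a_uf / a_o) * 100.0
```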
In step S500, calculating the brightness anomaly score Sb of the certificate image F based on the pixel-value weighted average deviation comprises the following steps:
Step S501: acquiring N reference images under a standard condition;
Step S502: calculating the average pixel value T of the gray-scale images of the N reference images;
Step S503: calculating the average pixel value t of the gray-scale image of the certificate image F;
Step S504: calculating the brightness anomaly score Sb from the average pixel value T and the average pixel value t: Sb = max[lm, 1 - (abs(t - T)/T)];
where lm is a lower limit value, the value range of lm being (0, 1), and abs(t - T) denotes the absolute value of the difference t - T.
Understandably, brightness abnormality means that the certificate image is too bright or too dark: when it is too bright the image is over-exposed, and when it is too dark the content cannot be seen clearly, so an image with abnormal brightness tends to increase the difficulty of image processing in subsequent certificate recognition.
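A sketch of the brightness anomaly score; the lower limit value used here is only an illustrative assumption, since the text only states that lm lies in (0, 1).

```python
import cv2
import numpy as np

def brightness_score(certificate_bgr, reference_mean_t, lower_limit=0.2):
    # Sb = max(lm, 1 - abs(t - T) / T), with T the average gray value of the reference
    # images and t the average gray value of the image under evaluation.
    gray = cv2.cvtColor(certificate_bgr, cv2.COLOR_BGR2GRAY)
    t = float(gray.mean())
    return max(lower_limit, 1.0 - abs(t - reference_mean_t) / reference_mean_t)
```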
With further reference to FIG. 3, in step S600, calculating the distortion score Sa of the certificate image F based on the minimum bounding rectangle of the certificate region in the certificate image F comprises the following steps:
Step S601: calculating the minimum bounding rectangle of the certificate region of the certificate image F;
Step S602: calculating the area Ar of the minimum bounding rectangle;
Step S603: calculating the distortion score Sa from the area Ar of the minimum bounding rectangle and the pixel area Ao: Sa = (Ao/Ar) × 100.
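A sketch of the distortion score; the text does not say whether an axis-aligned or a rotated minimum rectangle is intended, so this sketch assumes OpenCV's rotated minimum-area rectangle.

```python
import cv2
import numpy as np

def distortion_score(corners):
    # Sa = (Ao / Ar) * 100: ratio of the corner-quadrilateral area Ao to the area Ar
    # of the minimum bounding rectangle of the certificate region.
    pts = np.asarray(corners, dtype=np.float32)
    a_o = cv2.contourArea(pts)                # certificate-region area Ao
    (_, _), (w, h), _ = cv2.minAreaRect(pts)  # minimum bounding rectangle
    a_r = float(w * h)                        # its area Ar
    return (a_o / a_r) * 100.0
```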
In step S700, performing semantic segmentation of the dirty part of the certificate image F and calculating the dirt score Sd of the certificate image from the segmentation result of the dirty part comprises the following steps:
Step S701: performing semantic segmentation of the dirty part of the certificate image F using a BiSeNet deep learning model to obtain a mask Md of the dirty part;
Step S702: calculating the intersection region Min of the dirty-part mask Md and the non-occluded-part mask Mu, Min = Md × Mu;
Step S703: calculating the area Ad of the intersection region Min;
Step S704: calculating the dirt score Sd from the area Ad and the pixel area Ao: Sd = (1 - (Ad/Ao)) × 100.
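A sketch of the dirt score, again assuming the dirty-part mask comes from a separately trained segmentation model; the function name is hypothetical.

```python
import numpy as np

def dirt_score(dirty_mask, non_occlusion_mask, a_o):
    # Sd = (1 - Ad / Ao) * 100, where Ad is the area of the intersection
    # Min = Md * Mu of the dirty mask with the non-occluded mask.
    m_in = np.logical_and(dirty_mask > 0, non_occlusion_mask > 0)  # Min
    a_d = float(np.count_nonzero(m_in))
    return (1.0 - a_d / a_o) * 100.0
```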
It will be appreciated that when the certificate is distorted, the certificate region in the certificate image takes an irregular shape rather than the regular shape of a normal certificate.
In step S800, calculating the area score St of the certificate region in the certificate image F comprises the following steps:
calculating the total area Aall of the certificate image;
calculating the area score St from the total area Aall and the area Au of the non-occluded-part mask Mu: St = (Au/Aall) × 100.
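A sketch of the area score; the formula is read here as using the mask area Au, which matches the wording of the step above.

```python
import numpy as np

def area_score(non_occlusion_mask):
    # St = (Au / Aall) * 100: share of the whole image Aall covered by the
    # non-occluded certificate mask Mu.
    a_u = float(np.count_nonzero(non_occlusion_mask))
    a_all = float(non_occlusion_mask.size)
    return (a_u / a_all) * 100.0
```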
In step S900, combining the sharpness score Sc, the occlusion score So, the reflection score Sr, the brightness anomaly score Sb, the distortion score Sa, the dirt score Sd and the area score St to obtain the final evaluation score S is performed as:
S = (Sc×Wc + So×Wo + Sr×Wr + Sb×Wb + Sa×Wa + Sd×Wd + St×Wt) / (Wc + Wo + Wr + Wb + Wa + Wd + Wt);
where Wc, Wo, Wr, Wb, Wa, Wd and Wt are the weight coefficients of the sharpness score Sc, the occlusion score So, the reflection score Sr, the brightness anomaly score Sb, the distortion score Sa, the dirt score Sd and the area score St respectively, and the value range of Wc, Wo, Wr, Wb, Wa, Wd and Wt is (0, 1).
In the present embodiment, the specific values of the weight coefficients Wc, Wo, Wr, Wb, Wa, Wd and Wt can be set according to the emphasis of the practical application. In one application embodiment, the weight values of Wc, Wo, Wr, Wb, Wa, Wd and Wt are set to 0.20, 0.2, 0.15, 0.1 and 0.1, respectively.
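A sketch of the weighted combination; the weight values shown below are placeholders chosen only for illustration and are not the values given above.

```python
def final_score(scores, weights):
    # S = sum(Si * Wi) / sum(Wi) over the seven partial scores.
    keys = ("c", "o", "r", "b", "a", "d", "t")  # sharpness, occlusion, reflection,
                                                # brightness, distortion, dirt, area
    numerator = sum(scores[k] * weights[k] for k in keys)
    denominator = sum(weights[k] for k in keys)
    return numerator / denominator

# Illustrative usage with placeholder weights in (0, 1):
example_weights = {"c": 0.20, "o": 0.20, "r": 0.15, "b": 0.15, "a": 0.10, "d": 0.10, "t": 0.10}
```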
It can be understood that the larger the proportion of the certificate image occupied by the certificate region, the larger the certificate region in the acquired image and the easier it is to obtain information from it; conversely, when the proportion of the certificate region is too small, the certificate region in the acquired image is too small and its information is unclear.
In the embodiment, the larger the score S given by the certificate image evaluation method, the better the image quality. The certificate image evaluation method thus evaluates the certificate image effectively in multiple aspects: the sharpness of the certificate image, whether the certificate is occluded, whether the certificate region reflects light, whether the brightness is abnormal, whether the image is distorted, whether the image is dirty, and whether the size of the certificate region in the image is appropriate.
The embodiment of the application also provides a terminal, which comprises a processor, a memory and an acquisition device; the acquisition device is used for capturing certificate images; the memory is used for storing executable code; and the processor is configured to execute the executable code stored in the memory to perform the certificate image quality evaluation method.
In this embodiment, the capturing device may be, but is not limited to, a camera. The memory may be, but is not limited to, an electronic, magnetic, optical, or semiconductor system, apparatus, or device, and in particular, but not limited to, a magnetic disk, hard disk, read only memory, random access memory, or erasable programmable read only memory. The processor may be a central processing unit, but may also be other general purpose processors, digital signal processors, application specific integrated circuits, off-the-shelf programmable gate arrays or other programmable logic devices, discrete gate or transistor logic, discrete hardware components, or the like.
In the terminal, because the certificate image evaluation method above is used to evaluate the certificate image from multiple aspects (the sharpness of the certificate image, whether the certificate is occluded, whether the certificate region reflects light, whether the brightness is abnormal, whether the image is distorted, whether the image is dirty, and whether the size of the certificate region is appropriate), a high-quality, clear certificate image can be obtained and the difficulty of image analysis in subsequent certificate recognition can be reduced.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. A certificate image quality evaluation method, comprising:
acquiring a certificate image F to be subjected to quality evaluation;
calculating a sharpness score Sc of the certificate image based on the Tenengrad gradient function;
performing semantic segmentation of the non-occluded part of the certificate image F, and calculating an occlusion score So of the certificate image F from the segmentation result of the non-occluded part;
detecting the reflective area of the certificate image F, and calculating a reflection score Sr of the certificate image F from the reflective area;
calculating a brightness anomaly score Sb of the certificate image F based on the pixel-value weighted average deviation;
calculating a distortion score Sa of the certificate image F based on the minimum bounding rectangle of the certificate region in the certificate image F;
performing semantic segmentation of the dirty part of the certificate image F, and calculating a dirt score Sd of the certificate image from the segmentation result of the dirty part;
calculating an area score St of the certificate region in the certificate image F; and
combining the sharpness score Sc, the occlusion score So, the reflection score Sr, the brightness anomaly score Sb, the distortion score Sa, the dirt score Sd and the area score St to obtain the final evaluation score.
2. The certificate image quality evaluation method of claim 1, wherein calculating the sharpness score Sc of the certificate image based on the Tenengrad gradient function comprises the following steps:
acquiring N reference images under a standard condition;
calculating the sharpness value of each of the N reference images using the Tenengrad function;
sorting the sharpness values of the N reference images from high to low, selecting the first M reference images in the sorted result, and calculating the average sharpness Ct of the M reference images;
calculating the sharpness value Cp of the certificate image F using the Tenengrad function;
calculating the sharpness score Sc from the average sharpness Ct and the sharpness value Cp of the certificate image F: Sc = min(1, Cp/Ct) × 100.
3. The certificate image quality evaluation method of claim 1, wherein performing semantic segmentation of the non-occluded part of the certificate image F and calculating the occlusion score So of the certificate image F from the segmentation result of the non-occluded part comprises the following steps:
detecting the certificate region of the certificate image F and the four corner points of the certificate region;
calculating the pixel area Ao of the certificate region enclosed by the four corner points;
performing non-occlusion semantic segmentation on the certificate image F using a BiSeNet deep learning model to obtain a mask Mu of the non-occluded part, and calculating the area Au of the region covered by the mask Mu;
calculating the occlusion score So from the pixel area Ao and the area Au: So = min(1, Ao/Au) × 100.
4. The certificate image quality evaluation method of claim 1, wherein detecting the reflective area of the certificate image F and calculating the reflection score Sr of the certificate image F from the reflective area comprises the following steps:
obtaining a mask Mf of the reflective part from the reflective area of the certificate image F;
calculating the overlap region Muf of the reflective-part mask Mf and the non-occluded-part mask Mu, and calculating the area Auf of the overlap region Muf;
calculating the reflection score Sr from the area Auf of the overlap region Muf and the pixel area Ao: Sr = (1 - Auf/Ao) × 100.
5. The certificate image quality evaluation method of claim 1, wherein calculating the brightness anomaly score Sb of the certificate image F based on the pixel-value weighted average deviation comprises the following steps:
acquiring N reference images under a standard condition;
calculating the average pixel value T of the gray-scale images of the N reference images;
calculating the average pixel value t of the gray-scale image of the certificate image F;
calculating the brightness anomaly score Sb from the average pixel value T and the average pixel value t: Sb = max[lm, 1 - (abs(t - T)/T)];
where lm is a lower limit value, the value range of lm being (0, 1), and abs(t - T) denotes the absolute value of the difference t - T.
6. The certificate image quality evaluation method of claim 3, wherein calculating the distortion score Sa of the certificate image F based on the minimum bounding rectangle of the certificate region in the certificate image F comprises the following steps:
calculating the minimum bounding rectangle of the certificate region of the certificate image F;
calculating the area Ar of the minimum bounding rectangle;
calculating the distortion score Sa from the area Ar of the minimum bounding rectangle and the pixel area Ao: Sa = (Ao/Ar) × 100.
7. The certificate image quality evaluation method of claim 3, wherein performing semantic segmentation of the dirty part of the certificate image F and calculating the dirt score Sd of the certificate image from the segmentation result of the dirty part comprises the following steps:
performing semantic segmentation of the dirty part of the certificate image F using a BiSeNet deep learning model to obtain a mask Md of the dirty part;
calculating the intersection region Min of the dirty-part mask Md and the non-occluded-part mask Mu, Min = Md × Mu;
calculating the area Ad of the intersection region Min;
calculating the dirt score Sd from the area Ad and the pixel area Ao: Sd = (1 - (Ad/Ao)) × 100.
8. The certificate image quality evaluation method of claim 3, wherein calculating the area score St of the certificate region in the certificate image F comprises the following steps:
calculating the total area Aall of the certificate image;
calculating the area score St from the total area Aall and the area Au of the non-occluded-part mask Mu: St = (Au/Aall) × 100.
9. The certificate image quality evaluation method of claim 1, wherein combining the sharpness score Sc, the occlusion score So, the reflection score Sr, the brightness anomaly score Sb, the distortion score Sa, the dirt score Sd and the area score St to obtain the final evaluation score S is performed as:
S = (Sc×Wc + So×Wo + Sr×Wr + Sb×Wb + Sa×Wa + Sd×Wd + St×Wt) / (Wc + Wo + Wr + Wb + Wa + Wd + Wt);
where Wc, Wo, Wr, Wb, Wa, Wd and Wt are the weight coefficients of the sharpness score Sc, the occlusion score So, the reflection score Sr, the brightness anomaly score Sb, the distortion score Sa, the dirt score Sd and the area score St respectively, and the value range of Wc, Wo, Wr, Wb, Wa, Wd and Wt is (0, 1).
10. A terminal, comprising a processor, a memory and an acquisition device, wherein the acquisition device is used for capturing certificate images; the memory is used for storing executable code; and the processor is configured to execute the executable code stored in the memory to perform the certificate image quality evaluation method of any one of claims 1 to 9.
CN202111438708.1A (filed 2021-11-29) Certificate image quality evaluation method and terminal, Pending, publication CN114120343A

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111438708.1A CN114120343A (en) 2021-11-29 2021-11-29 Certificate image quality evaluation method and terminal


Publications (1)

Publication Number Publication Date
CN114120343A 2022-03-01

Family

ID=80367988

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111438708.1A Pending CN114120343A (en) 2021-11-29 2021-11-29 Certificate image quality evaluation method and terminal

Country Status (1)

Country Link
CN (1) CN114120343A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116958152A (en) * 2023-09-21 2023-10-27 中科航迈数控软件(深圳)有限公司 Part size measurement method, device, equipment and medium
CN116958152B (en) * 2023-09-21 2024-01-12 中科航迈数控软件(深圳)有限公司 Part size measurement method, device, equipment and medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination