CN111598884A - Image data processing method, apparatus and computer storage medium

Info

Publication number: CN111598884A
Application number: CN202010437523.8A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: image, detected, determining, parameter, definition
Inventors: 项宇泽, 何小坤, 王劲君
Current assignee: Beijing Century TAL Education Technology Co Ltd
Original assignee: Beijing Century TAL Education Technology Co Ltd
Application filed by Beijing Century TAL Education Technology Co Ltd
Priority to CN202010437523.8A
Publication of CN111598884A
Legal status: Pending (the status listed is an assumption and is not a legal conclusion)

Classifications

    • G (PHYSICS); G06 (COMPUTING; CALCULATING OR COUNTING); G06T (IMAGE DATA PROCESSING OR GENERATION, IN GENERAL)
    • G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
    • G06T 7/187: Segmentation; edge detection involving region growing, region merging or connected component labelling
    • G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2207/20081: Training; learning
    • G06T 2207/20104: Interactive definition of region of interest [ROI]
    • G06T 2207/30168: Image quality inspection

Abstract

Embodiments of the present application provide an image data processing method, a device, and a computer storage medium, wherein the image data processing method includes the following steps: determining at least one image block according to an image to be detected; inputting the at least one image block into a preset scoring model to obtain a definition score of the at least one image block; determining a definition parameter of the image to be detected according to the definition score of the at least one image block; and determining the image quality of the image to be detected according to image quality parameters of the image to be detected, wherein the image quality parameters of the image to be detected include the definition parameter of the image to be detected. This reduces manual effort, improves processing efficiency, requires no reference image, and adapts better to different scenarios.

Description

Image data processing method, apparatus and computer storage medium
Technical Field
Embodiments of the present application relate to the technical field of image processing, and in particular to an image data processing method, an image data processing device, and a computer storage medium.
Background
Image processing is widely used in many areas of daily life, such as face recognition, online paper marking, and image search. Image acquisition is generally the first step of image processing, but the quality of the acquired images can suffer from the shooting angle and the lighting at capture time, resulting in problems such as image blurring and image tilt. Taking online paper marking as an example, when images of student test papers are captured, the resulting images may be blurry or tilted; poor-quality images then need to be screened out manually, which consumes a large amount of manpower and makes processing inefficient.
Disclosure of Invention
In view of the above, an object of the present application is to provide an image data processing method, an image data processing device, and a computer storage medium, so as to overcome the defect of the prior art that screening out poor-quality images consumes a great deal of manpower and is inefficient.
An embodiment of the present application provides an image data processing method, including:
determining at least one image block according to an image to be detected; inputting the at least one image block into a preset scoring model to obtain a definition score of the at least one image block; determining a definition parameter of the image to be detected according to the definition score of the at least one image block; and determining the image quality of the image to be detected according to image quality parameters of the image to be detected, wherein the image quality parameters of the image to be detected include the definition parameter of the image to be detected.
Optionally, in an embodiment of the present application, determining a definition parameter of an image to be detected according to a definition score of at least one image block includes:
and determining the definition parameter of the image to be detected according to the average value of the definition scores of at least one image block.
Optionally, in an embodiment of the present application, determining a definition parameter of an image to be detected according to a definition score of at least one image block includes:
determining the definition parameter of the image to be detected according to the number of the first type of image blocks and the number of the second type of image blocks, wherein the first type of image blocks are image blocks with definition scores lower than a preset score threshold, and the second type of image blocks are image blocks with definition scores greater than or equal to the preset score threshold.
Optionally, in an embodiment of the present application, determining at least one image block according to an image to be detected includes:
determining an effective area in the image to be detected; and dividing the effective area into at least one image block.
Optionally, in an embodiment of the present application, determining an effective region in the image to be measured includes:
initializing the left, right, upper and lower boundaries of the effective area of the image to be detected to obtain an initial left boundary, an initial right boundary, an initial upper boundary and an initial lower boundary of the effective area; traversing the circumscribed rectangular frame of at least one connected domain in the image to be detected, and determining the upper, lower, left and right boundaries of the effective area based on the boundaries of the circumscribed rectangular frame of each connected domain and the initial boundaries; and determining the effective area according to the upper, lower, left and right boundaries of the effective area.
Optionally, in an embodiment of the present application, the method further includes: determining at least one connected domain in the image to be detected by utilizing a run length smoothing algorithm, and determining the inclination angle of the at least one connected domain;
and performing rotation correction on the image to be detected according to the inclination angle of at least one connected domain.
Optionally, in an embodiment of the present application, determining at least one connected domain in the image to be measured by using a run-length smoothing algorithm, and determining an inclination angle of the at least one connected domain, includes: determining at least one initial connected domain in the image to be detected by using a run length smoothing algorithm; rejecting non-text regions in at least one initial connected domain to obtain at least one connected domain;
and performing rotation correction on the image to be detected according to the inclination angle of the at least one connected domain includes: calculating the average value of the inclination angles of the circumscribed rectangles of the at least one connected domain to obtain an average inclination angle; and performing rotation correction on the image to be detected according to the average inclination angle.
Optionally, in an embodiment of the present application, the method further includes: inputting the image to be detected into a preset angle classification model to obtain a deflection angle of the image to be detected;
and performing rotation correction on the image to be detected according to the deflection angle of the image to be detected.
Optionally, in an embodiment of the present application, the method further includes: determining the brightness parameter of the image to be detected;
determining the image quality of the image to be detected according to the image quality parameters of the image to be detected, wherein the determining comprises the following steps: and determining the image quality of the image to be detected according to the definition parameter of the image to be detected and the brightness parameter of the image to be detected, wherein the image quality parameter of the image to be detected comprises the definition parameter of the image to be detected and the brightness parameter of the image to be detected.
Optionally, in an embodiment of the present application, determining a brightness parameter of an image to be measured includes:
determining the gray value of at least one pixel in the image to be detected, and determining the average offset of the gray value according to the gray value of the at least one pixel; and determining the brightness parameter of the image to be detected according to the gray value and the average offset of at least one pixel.
Optionally, in an embodiment of the present application, determining a brightness parameter of the image to be measured according to the gray-level value and the average offset of at least one pixel includes:
if the average offset is greater than a first offset and the number of pixels with a gray value greater than or equal to a second gray threshold is greater than a first preset number, determining the brightness parameter of the image to be detected as a parameter corresponding to overexposure; and if the average offset is smaller than a second offset and the number of pixels with a gray value smaller than or equal to a first gray threshold is greater than a second preset number, determining the brightness parameter of the image to be detected as a parameter corresponding to the image being too dark.
Optionally, in an embodiment of the present application, the method further includes: determining shadow parameters of an image to be detected;
determining the image quality of the image to be detected according to the image quality parameters of the image to be detected, wherein the determining comprises the following steps:
and determining the image quality of the image to be detected according to the definition parameter of the image to be detected and the shadow parameter of the image to be detected, wherein the image quality parameter of the image to be detected comprises the definition parameter of the image to be detected and the shadow parameter of the image to be detected.
Optionally, in an embodiment of the present application, determining a shadow parameter of an image to be measured includes:
carrying out binarization processing on the image to be detected to obtain a binarized image; determining the area ratio of the shadow area to the image to be detected according to the binarized image; when the area ratio is greater than or equal to a preset ratio, determining the shadow parameter of the image to be detected as a parameter corresponding to containing a shadow area; and when the area ratio is smaller than the preset ratio, determining the shadow parameter of the image to be detected as a parameter corresponding to containing no shadow area.
An embodiment of the present application provides an image data processing apparatus, including: a segmentation module, a scoring module, a definition calculation module and a quality calculation module;
the segmentation module is used for determining at least one image block according to the image to be detected;
the scoring module is used for inputting the at least one image block into a preset scoring model to obtain the definition score of the at least one image block;
the definition calculating module is used for determining the definition parameter of the image to be detected according to the definition score of at least one image block;
and the quality calculation module is used for determining the image quality of the image to be detected according to the image quality parameters of the image to be detected, and the image quality parameters of the image to be detected comprise the definition parameters of the image to be detected.
An embodiment of the present application provides an electronic device, including a processor and a memory connected to the processor, wherein the memory stores a computer program, and the processor is configured to execute the computer program to implement the image data processing method described in any embodiment of the present application.
An embodiment of the present application provides a computer storage medium, including: the computer storage medium stores a computer program that, when executed by a processor, implements the image data processing method as described in any of the embodiments of the present application.
In the embodiments of the present application, at least one image block is determined according to an image to be detected; the at least one image block is input into a preset scoring model to obtain a definition score of the at least one image block; a definition parameter of the image to be detected is determined according to the definition score of the at least one image block; and the image quality of the image to be detected is determined according to image quality parameters of the image to be detected, where the image quality parameters include the definition parameter. By scoring the definition of the image to be detected block by block and then deriving the definition parameter from those scores, the image quality is determined automatically, so manual effort is reduced, processing efficiency is improved, no reference image is required, and adaptability is better.
Drawings
Some specific embodiments of the present application will be described in detail hereinafter by way of illustration and not limitation with reference to the accompanying drawings. The same reference numbers in the drawings identify the same or similar elements or components. Those skilled in the art will appreciate that the drawings are not necessarily drawn to scale. In the drawings:
fig. 1 is a flowchart of an image data processing method according to an embodiment of the present application;
fig. 2 is a flowchart of an image data processing method according to a second embodiment of the present application;
fig. 2a is a schematic diagram of an image to be measured according to a second embodiment of the present application;
fig. 2b is a schematic diagram of a binarization effect of a maximum inter-class variance method according to the second embodiment of the present application;
fig. 2c is a schematic diagram of a binarization effect of an adaptive binarization algorithm provided in the second embodiment of the present application;
fig. 2d is a schematic diagram of an effect of a shadow binarization image provided in the second embodiment of the present application;
fig. 3 is a structural diagram of an image data processing apparatus according to a third embodiment of the present application;
fig. 4 is a structural diagram of an image data processing apparatus according to a third embodiment of the present application;
fig. 5 is a structural diagram of an image data processing apparatus according to a third embodiment of the present application;
fig. 6 is a structural diagram of an image data processing apparatus according to a third embodiment of the present application;
fig. 7 is a structural diagram of an electronic device according to a fourth embodiment of the present application.
Detailed Description
The following further describes specific implementations of the embodiments of the present application with reference to the drawings.
Embodiment I
Fig. 1 is a flowchart of an image data processing method provided in the first embodiment of the present application. The image data processing method includes the following steps:
Step 101: determining at least one image block according to an image to be detected.
The image to be detected can be any image; in the present application, it is an image whose quality is to be evaluated. For example, the image to be detected may be a test paper image collected in an online paper marking scenario, or a face image collected in a face recognition scenario, which is not limited in this application.
Optionally, in an embodiment of the present application, determining at least one image block according to the image to be detected includes: determining an effective area in the image to be detected, and dividing the effective area into at least one image block. It should be noted that, in the present application, the effective area refers to the region of interest for image quality detection. For example, if the image to be detected is a collected test paper image, the effective area may be the text region; if the image to be detected is a face image, the effective area may be the face region. Dividing only the effective area into at least one image block before detection reduces unnecessary computation.
Optionally, in an embodiment of the present application, determining the effective area in the image to be detected includes:
initializing the left, right, upper and lower boundaries of the effective area of the image to be detected to obtain an initial left boundary, an initial right boundary, an initial upper boundary and an initial lower boundary of the effective area; traversing the circumscribed rectangular frame of at least one connected domain in the image to be detected, and determining the upper, lower, left and right boundaries of the effective area based on the boundaries of the circumscribed rectangular frame of each connected domain and the initial boundaries; and determining the effective area according to the upper, lower, left and right boundaries of the effective area. The at least one connected domain can be determined in the image to be detected by using an image expansion algorithm, which may be a run-length smoothing algorithm. Of course, this is merely an example and does not limit the present application. Using the circumscribed rectangular frames of the connected domains, the upper, lower, left and right boundaries of the effective area can be determined quickly.
Step 102: inputting the at least one image block into a preset scoring model to obtain a definition score of the at least one image block.
The preset scoring model is a neural network model that can score the definition of an input image. For example, the scoring model may have two classes: the first class is clear, represented by "1", and the second class is blurred, represented by "0"; that is, after an image is input into the preset scoring model, a result of 1 indicates the image is clear and a result of 0 indicates it is blurred. As another example, the scoring model may have four classes: the first class represents severe blur ("001"), the second normal blur ("010"), the third slight blur ("011"), and the fourth clear ("100"). Of course, there may be more classes; for example, a fifth class may represent background ("000"), and image blocks classified as background need not be used to determine the definition of the image to be detected. This is merely an example and does not limit the present application.
It should be noted that the preset scoring model needs to be trained by using at least one sample and a sample label, and the sample label may be a classification label for sample definition labeled manually or a machine label, which is not limited in the present application.
Step 103: determining the definition parameter of the image to be detected according to the definition score of the at least one image block.
It should be noted that the definition parameter may include two levels of definition, three levels of definition, or more, which is not limited in the present application. Two specific examples are described here:
Optionally, in the first example, the definition parameter includes two levels of definition, namely clear and blurred. Specifically, this can be implemented in various ways; two implementations are described here:
Optionally, in a first implementation manner, determining the definition parameter of the image to be detected according to the definition score of the at least one image block includes: determining the definition parameter of the image to be detected according to the average value of the definition scores of the at least one image block. For example, when the average definition score is greater than or equal to a preset value, the image to be detected is determined to be clear, and its definition parameter is determined to be the parameter corresponding to image clarity; for example, the parameter corresponding to a clear image may be 1 and the parameter corresponding to a blurred image 0.
Optionally, in a second implementation manner, determining a definition parameter of the image to be detected according to the definition score of the at least one image block includes: determining the definition parameter of the image to be detected according to the number of the first type of image blocks and the number of the second type of image blocks, wherein the first type of image blocks are image blocks with definition scores lower than a preset score threshold, and the second type of image blocks are image blocks with definition scores greater than or equal to the preset score threshold. For example, the first type of image blocks are fuzzy image blocks, the second type of image blocks are clear image blocks, if the number of the clear image blocks is greater than that of the fuzzy image blocks, the image to be detected is determined to be a clear image, and the definition parameters of the image to be detected are determined to be parameters corresponding to the image clarity; for another example, if the number of the clear image blocks is greater than 5 times the number of the blurred image blocks, the image to be detected is determined to be a clear image, otherwise, the image to be detected is a blurred image. Of course, this is merely an example, and there may be many determination methods, which are not limited in this application.
Alternatively, in the second example, the definition parameter includes three levels of definition, namely clear, locally blurred, and wholly blurred. Specifically, this can be implemented in various ways; two implementations are described as examples:
Optionally, in a first implementation manner, determining the definition parameter of the image to be detected according to the definition score of the at least one image block includes: determining the definition parameter of the image to be detected according to the average value of the definition scores of the at least one image block. For example, when the average definition score is greater than or equal to a first preset value, the image to be detected is determined to be clear; when the average definition score is greater than or equal to a second preset value and less than the first preset value, the image to be detected is determined to be locally blurred; and when the average definition score is less than the second preset value, the image to be detected is determined to be wholly blurred.
Optionally, in a second implementation manner, determining a definition parameter of the image to be detected according to the definition score of the at least one image block includes: determining the definition parameter of the image to be detected according to the number of the first type of image blocks and the number of the second type of image blocks, wherein the first type of image blocks are image blocks with definition scores lower than a preset score threshold, and the second type of image blocks are image blocks with definition scores greater than or equal to the preset score threshold. For example, the first type image blocks are blurred image blocks, the second type image blocks are clear image blocks, if the number of the second type image blocks is less than or equal to the number of the first type image blocks, it is determined that the image to be detected is wholly blurred, if the number of the second type image blocks is greater than the number of the first type image blocks and is less than or equal to 5 times the number of the first type image blocks, it is determined that the image to be detected is locally blurred, and if the number of the second type image blocks is greater than 5 times the number of the first type image blocks, it is determined that the image to be detected is clear.
Of course, this is merely an exemplary illustration, and the determination may be performed by combining the average of the sharpness scores, the number of the first type image blocks, and the number of the second type image blocks, which is not limited in this application, for example, it is determined that the image to be detected is blurred or sharp according to the average of the sharpness scores, and if the image to be detected is blurred, it is determined that the image to be detected is locally blurred or wholly blurred according to the number of the first type image blocks and the number of the second type image blocks; for another example, whether the image to be detected is fuzzy or clear is judged according to the number of the first type image blocks and the number of the second type image blocks, and if the image to be detected is fuzzy, whether the image to be detected is local fuzzy or overall fuzzy is judged according to the average value of the definition scores. This is merely an example and does not represent a limitation of the present application.
Step 104: determining the image quality of the image to be detected according to the image quality parameters of the image to be detected.
The image quality parameter of the image to be detected comprises a definition parameter of the image to be detected. The image quality parameters of the image to be measured may further include a brightness parameter and/or a shadow parameter of the image to be measured, and three examples are listed here for detailed description:
optionally, in a first example of the present application, the method further includes: determining the brightness parameter of the image to be detected;
determining the image quality of the image to be detected according to the image quality parameters of the image to be detected, wherein the determining comprises the following steps: and determining the image quality of the image to be detected according to the definition parameter of the image to be detected and the brightness parameter of the image to be detected.
Optionally, determining a brightness parameter of the image to be measured includes: determining the gray value of at least one pixel in the image to be detected, and determining the average offset of the gray value according to the gray value of the at least one pixel; and determining the brightness parameter of the image to be detected according to the gray value and the average offset of at least one pixel.
Further optionally, determining a brightness parameter of the image to be measured according to the gray value and the average offset of at least one pixel, including: if the average offset is greater than the first offset and the number of pixels with the gray value greater than or equal to the second gray threshold is greater than a first preset number, determining the brightness parameter of the image to be detected as a parameter corresponding to overexposure; and if the average offset is smaller than the second offset and the number of the pixels of which the gray value is smaller than or equal to the first gray threshold is larger than a second preset number, determining the brightness parameter of the image to be detected as a parameter corresponding to the over-dark of the image.
Of course, this is only an exemplary description, and the brightness parameter may also be determined in other ways. For example, if the average offset is greater than the first offset, the brightness parameter of the image to be detected is determined to be a parameter corresponding to overexposure, and if the average offset is less than the second offset, the brightness parameter is determined to be a parameter corresponding to the image being too dark. As another example, pixels with a gray value less than or equal to the first gray threshold are counted as over-dark pixels and pixels with a gray value greater than or equal to the second gray threshold as overexposed pixels; if the average offset is greater than the first offset and the number of overexposed pixels is greater than 20 times the number of over-dark pixels, the brightness parameter of the image to be detected is determined to be a parameter corresponding to overexposure; and if the average offset is smaller than the second offset and the number of over-dark pixels is greater than the number of overexposed pixels, the brightness parameter is determined to be a parameter corresponding to the image being too dark. Of course, this is merely an example and does not limit the present application.
The image quality of the image to be detected is determined according to the definition parameter and the brightness parameter of the image to be detected, and the image quality is comprehensively evaluated by combining the definition and the brightness, so that the evaluation of the image quality is more accurate and comprehensive.
Optionally, in a second example of the present application, the image quality parameter of the image to be measured includes a sharpness parameter of the image to be measured and a shadow parameter of the image to be measured, and the method further includes: determining shadow parameters of an image to be detected;
determining the image quality of the image to be detected according to the image quality parameters of the image to be detected, wherein the determining comprises the following steps: and determining the image quality of the image to be detected according to the definition parameter of the image to be detected and the shadow parameter of the image to be detected.
Optionally, determining the shadow parameter of the image to be detected includes: carrying out binarization processing on the image to be detected to obtain a binarized image, and determining the shadow parameter of the image to be detected according to the binarized image. Further optionally, this includes: determining the area ratio of the shadow area to the image to be detected according to the binarized image; when the area ratio is greater than or equal to a preset ratio, determining the shadow parameter of the image to be detected as a parameter corresponding to containing a shadow area; and when the area ratio is smaller than the preset ratio, determining the shadow parameter of the image to be detected as a parameter corresponding to containing no shadow area. For example, the parameter corresponding to containing a shadow area is 1, and the parameter corresponding to containing no shadow area is 0. This is only an exemplary illustration and does not limit the present application.
The image quality of the image to be detected is determined according to the definition parameter of the image to be detected and the shadow parameter of the image to be detected, the image quality is evaluated by combining the definition and the shadow area condition in the image to be detected, and the evaluation on the image quality is more accurate and comprehensive.
Optionally, in a third example of the present application, the image quality parameters of the image to be detected include the definition parameter, the brightness parameter, and the shadow parameter of the image to be detected; determining the image quality of the image to be detected according to the image quality parameters of the image to be detected includes: determining the image quality of the image to be detected according to the definition parameter, the brightness parameter, and the shadow parameter of the image to be detected.
It should be noted that, the first example and the second example can be referred to for the calculation manner of the brightness parameter and the shadow parameter, and details are not repeated here. The image quality is evaluated by integrating the definition parameter, the brightness parameter and the shadow parameter of the image to be measured, and the image quality is more accurately and comprehensively evaluated by combining the definition parameter, the brightness parameter and the shadow parameter of the image to be measured and the conditions of a shadow area.
With reference to the above three examples, if the quality parameter of the image to be measured satisfies at least one of the following, it is determined that the image to be measured is poor in quality: the definition parameter of the image to be detected is less than or equal to a first preset parameter; the brightness parameter of the image to be detected is within a preset brightness range; the shadow parameter of the image to be detected is a second preset parameter.
The definition parameter being less than or equal to the first preset parameter indicates that the definition of the image to be detected is poor. The brightness parameter being within the preset brightness range indicates that the image to be detected is overexposed or too dark, where the preset brightness range may include values greater than or equal to a first preset brightness and values less than or equal to a second preset brightness. The shadow parameter being the second preset parameter indicates that a shadow area exists in the image to be detected, while the shadow parameter being the first preset parameter indicates that no shadow area exists or that the shadow area is very small.
It should be noted that before or after the sharpness scoring is performed on the image to be measured, the image to be measured may be subjected to rotation correction.
For example, optionally, in an embodiment of the present application, the method further comprises: determining at least one connected domain in the image to be detected by utilizing a run length smoothing algorithm, and determining the inclination angle of the at least one connected domain; and performing rotation correction on the image to be detected according to the inclination angle of at least one connected domain. It should be noted that an average tilt angle of the tilt angles of at least one connected domain may be calculated, and the image to be measured is subjected to rotation correction according to the average tilt angle.
For example, in one implementation, determining at least one connected domain in the image to be measured by using a run-length smoothing algorithm, and determining an inclination angle of the at least one connected domain, includes: determining at least one initial connected domain in the image to be detected by using a run length smoothing algorithm; rejecting non-text regions in at least one initial connected domain to obtain at least one connected domain;
and rotationally correcting the image to be measured according to the inclination angle of at least one connected domain, comprising the following steps of: calculating the average value of the inclination angles of the circumscribed rectangles of the at least one connected domain to obtain an average inclination angle; and performing rotation correction on the image to be detected according to the average inclination angle.
It should be noted that non-text regions in the at least one initial connected domain can be rejected according to the minimum circumscribed rectangle of each initial connected domain. For example, an initial connected domain is retained if it satisfies the following conditions: the area of its minimum circumscribed rectangle is smaller than a preset area; the length of the long side of the minimum circumscribed rectangle is greater than a first preset length; the length of the short side of the minimum circumscribed rectangle is greater than a second preset length; and the length of the long side of the minimum circumscribed rectangle is more than 2 times the length of the short side. Initial connected domains that do not meet the conditions are eliminated, and the retained connected domains are considered to be text regions. Deleting such initial connected domains improves the accuracy of subsequent processing (such as rotation correction and definition evaluation).
As another example, optionally, in an embodiment of the present application, the method further includes: inputting the image to be detected into a preset angle classification model to obtain a deflection angle of the image to be detected; and performing rotation correction on the image to be detected according to the deflection angle. It should be noted that the preset angle classification model may be a neural network model with four classes, namely 0 degrees, 90 degrees, 180 degrees and 270 degrees. After the image to be detected is input, one of four results is output, which may be represented as "00", "01", "10" and "11" corresponding to the four angles. If the output is 00, the image needs no rotation correction; if the output is 01, the image needs a 90-degree rotation correction, i.e., it is deflected by 90 degrees; if the output is 10, the image needs a 180-degree rotation correction, i.e., it is deflected by 180 degrees; and if the output is 11, the image needs a 270-degree rotation correction, i.e., it is deflected by 270 degrees. Note that a 90-degree deflection and a 270-degree deflection differ only in direction: for example, a 90-degree deflection may be 90 degrees clockwise, while a 270-degree deflection may be 270 degrees clockwise, i.e., 90 degrees counterclockwise.
After the image to be detected is subjected to rotation correction, the image definition scoring is carried out, so that the definition scoring can be ensured to be more accurate.
In the embodiments of the present application, at least one image block is determined according to an image to be detected; the at least one image block is input into a preset scoring model to obtain a definition score of the at least one image block; a definition parameter of the image to be detected is determined according to the definition score of the at least one image block; and the image quality of the image to be detected is determined according to image quality parameters of the image to be detected, where the image quality parameters include the definition parameter. By scoring the definition of the image to be detected block by block and then deriving the definition parameter from those scores, the image quality is determined automatically, so manual effort is reduced, processing efficiency is improved, no reference image is required, and adaptability is better.
Embodiment II
Based on the image data processing method provided in the first embodiment, the second embodiment of the present application provides an image data processing method, which describes the method of the first embodiment in further detail and can be applied to online paper marking or online homework correction scenarios. Fig. 2 is a flowchart of the image data processing method provided in the second embodiment of the present application. The image data processing method includes the following steps:
Step 201: acquiring an image to obtain the image to be detected, and performing size normalization processing on the image to be detected.
The image acquisition mode includes, but is not limited to, uploading a photograph taken with a mobile device camera or scanning with a scanner. Size normalization is performed on the image to be detected; in particular, it reduces the image data volume for higher-resolution images, which makes evaluation more convenient. For example, let the standard image width be Nw and the standard image height be Nh. When the width of the image to be detected is greater than Nw and its height is greater than Nh, the image to be detected is scaled proportionally, with scaling ratio ZR = min(Nw/width, Nh/height), where min() returns the smaller of its two inputs.
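As an illustrative sketch of this normalization step (not part of the patent text; the use of Python with OpenCV and the 1024 × 1024 standard size are assumptions):

```python
import cv2

def normalize_size(image, nw=1024, nh=1024):
    """Proportionally shrink an image whose width and height both exceed
    the standard size (Nw, Nh); other images are returned unchanged.
    The 1024 x 1024 standard size is an assumed example value."""
    height, width = image.shape[:2]
    if width > nw and height > nh:
        # Scaling ratio ZR = min(Nw / width, Nh / height), as above.
        zr = min(nw / width, nh / height)
        image = cv2.resize(image, (int(width * zr), int(height * zr)),
                           interpolation=cv2.INTER_AREA)
    return image
```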
Step 202: performing overexposure and over-darkness evaluation on the image to be detected.
The image to be detected can be converted into a single-channel image, the gray value range of each pixel is [0,255], wherein the gray value of 0 represents that the pixel is darkest, and the gray value of 255 represents that the pixel is brightest. The average shift of the gray value of the entire image to be measured can be calculated according to formula 1:
avg = Σ_i Σ_j (I(i, j) - 128) / (height × width), equation 1;
wherein avg represents the average offset of the gray values; height and width respectively represent the height and width of the image to be detected (in this application, the height refers to the number of pixels in a column along the height direction, and the width refers to the number of pixels in a row along the width direction); I(i, j) represents the gray value of the pixel point at pixel coordinates (i, j); and 128 is the median of the gray range.
Let leftHist denote the number of pixels whose gray value is less than or equal to a first preset gray threshold, and rightHist the number of pixels whose gray value is greater than or equal to a second preset gray threshold. Overexposure and over-darkness of the image are then evaluated according to the following conditions:
if all the conditions of the formulas 2, 3 and 4 are met at the same time, determining that the image to be detected is over-exposed, wherein the brightness parameter of the image to be detected is a parameter corresponding to the over-exposure:
avg > 100, equation 2;
rightHist > leftHist × 20, equation 3;
rightHist > N, equation 4;
where N is the first preset number described in the first embodiment (its specific value is not fixed by the original text).
wherein 100 is the first offset described in the first embodiment for calculating the brightness parameter; the first offset can be set as needed and is not necessarily 100, and may also be 130, 150, and the like, which is not limited in this application.
If all the conditions of the formulas 5 and 6 are met, determining that the image to be detected is too dark, and the brightness parameter of the image to be detected is a parameter corresponding to the too dark image:
avg < -50, equation 5;
leftHist > rightHist, equation 6;
wherein -50 is the second offset described in the first embodiment for calculating the brightness parameter; the second offset can be set as needed and is not necessarily -50, and may also be -30, -20, and the like, which is not limited in this application.
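Equations 1 to 6 can be summarized in a short Python sketch; the gray thresholds (30 and 225) and the default value for the preset bright-pixel count are assumptions, since this embodiment leaves them open:

```python
import numpy as np

def evaluate_brightness(gray, dark_thresh=30, bright_thresh=225,
                        preset_bright_count=None):
    """Classify an 8-bit grayscale image as overexposed, too dark, or
    normal, following equations 1 to 6."""
    h, w = gray.shape
    # Equation 1: signed average offset of gray values from mid-gray 128.
    avg = float(np.sum(gray.astype(np.int64) - 128)) / (h * w)
    left_hist = int(np.count_nonzero(gray <= dark_thresh))     # dark pixels
    right_hist = int(np.count_nonzero(gray >= bright_thresh))  # bright pixels
    if preset_bright_count is None:
        preset_bright_count = (h * w) // 10  # assumed preset number N
    # Equations 2 to 4: overexposure.
    if avg > 100 and right_hist > left_hist * 20 \
            and right_hist > preset_bright_count:
        return "overexposed"
    # Equations 5 and 6: too dark.
    if avg < -50 and left_hist > right_hist:
        return "too dark"
    return "normal"
```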
Step 203: performing shadow evaluation on the image to be detected.
As shown in fig. 2a, which is a schematic diagram of an image to be detected provided in the second embodiment of the present application, the image to be detected can be binarized using both the maximum inter-class variance method (OTSU) and an adaptive binarization algorithm to obtain a binarized image A and a binarized image B, as shown in fig. 2b and fig. 2c; fig. 2b shows the binarization effect of the maximum inter-class variance method, and fig. 2c shows the binarization effect of the adaptive binarization algorithm. Subtracting the binarized image A from the binarized image B yields a shadow binarized image containing only the shadow, as shown in fig. 2d, which shows the effect of the shadow binarized image provided in the second embodiment. The sum of the areas of the shadow regions is then calculated; if it is greater than or equal to 25% of the total area, the image to be detected is determined to contain a shadow area, and its shadow parameter is the parameter corresponding to containing a shadow area. Here 25% is the preset ratio, which is only an exemplary value; the preset ratio may also be other values, such as 10%, 5%, or 15%, which is not limited in this application.
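A minimal Python sketch of this shadow evaluation, assuming OpenCV; the adaptive-threshold block size and constant are illustrative assumptions:

```python
import cv2
import numpy as np

def evaluate_shadow(gray, preset_ratio=0.25):
    """Binarize with OTSU (image A) and with an adaptive threshold
    (image B), subtract A from B to keep shadow-only pixels, and compare
    the shadow area ratio against the preset ratio."""
    _, img_a = cv2.threshold(gray, 0, 255,
                             cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    img_b = cv2.adaptiveThreshold(gray, 255,
                                  cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                  cv2.THRESH_BINARY, 31, 10)
    # Shadow pixels are dark under the global OTSU threshold but light
    # under the local adaptive threshold, so B - A isolates the shadow.
    shadow = cv2.subtract(img_b, img_a)
    ratio = np.count_nonzero(shadow) / shadow.size
    return "contains shadow" if ratio >= preset_ratio else "no shadow"
```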
Step 204: performing rotation correction on the image to be detected.
The rotation correction can be divided into two types, namely small-angle rotation correction and large-angle rotation correction.
First, small-angle rotation correction is performed. Before rotation correction, the image to be detected can be binarized to obtain a binary image, and at least one initial connected domain is determined using a run-length smoothing algorithm; an initial connected domain can be understood as a text line. The run-length smoothing algorithm uses an image expansion algorithm to merge the connected domains of individual characters into one whole connected domain. Lateral expansion may be used with the expansion coefficient set to 10, i.e., laterally adjacent connected domains within an expansion range of 10 units are merged, where one unit may be one pixel.
The at least one initial connected domain is screened according to its minimum circumscribed rectangle; for example, initial connected domains meeting all of the following conditions are selected: the area of the minimum circumscribed rectangle is less than 50; the long side of the minimum circumscribed rectangle is greater than 50; the short side of the minimum circumscribed rectangle is greater than 10; and the length of the long side of the minimum circumscribed rectangle is more than 2 times the length of the short side. The selected connected domains may be regarded as connected domains of the text region. Then, the average value of the inclination angles of the minimum circumscribed rectangles of all connected domains is calculated as the average inclination angle, and the image to be detected is rotation-corrected according to the average inclination angle; the conversion formula of the rotation correction is shown in equation 7:
x = x_0 × cos θ - y_0 × sin θ
y = x_0 × sin θ + y_0 × cos θ, equation 7;

where (x_0, y_0) are the coordinates of a pixel point in the original image to be detected, (x, y) are the coordinates of that pixel point after rotation, and θ is the average inclination angle.
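The small-angle correction can be sketched as follows in Python; the horizontal dilation stands in for the run-length smoothing step, and the kernel size, border handling, and angle-folding convention are assumptions:

```python
import cv2
import numpy as np

def deskew_small_angle(gray):
    """Merge characters into text-line connected domains, filter them by
    their minimum circumscribed rectangles (side-length and aspect-ratio
    thresholds from the text above), and rotate by the average
    inclination angle per equation 7."""
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Lateral expansion with coefficient 10 merges characters on a line.
    dilated = cv2.dilate(binary, np.ones((1, 10), np.uint8))
    contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    angles = []
    for contour in contours:
        (_cx, _cy), (rw, rh), angle = cv2.minAreaRect(contour)
        long_side, short_side = max(rw, rh), min(rw, rh)
        if long_side > 50 and short_side > 10 and long_side > 2 * short_side:
            # minAreaRect reports angles within a 90-degree range; fold
            # the value so it expresses a small tilt around horizontal.
            angles.append(angle if angle > -45 else angle + 90)
    if not angles:
        return gray
    theta = float(np.mean(angles))  # average inclination angle
    h, w = gray.shape
    rot = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), theta, 1.0)
    return cv2.warpAffine(gray, rot, (w, h), borderValue=255)
```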
Large-angle rotation correction is then performed. Images are divided into four categories according to deflection angle, namely 0 degrees, 90 degrees, 180 degrees and 270 degrees, and the image to be detected is classified using a preset angle classification model, which may be a VGG19-based model. The deflection angle of the image to be detected is determined according to the preset angle classification model, and the image is then rotation-corrected accordingly.
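The large-angle step reduces to mapping the classifier's output class to a fixed rotation; a sketch (the classifier itself, e.g. a VGG19-based network, is omitted, and the clockwise-deflection convention is an assumption):

```python
import cv2

def correct_large_angle(image, angle_class):
    """Rotate the image back according to the four-class output
    (0 = 0 deg, 1 = 90 deg, 2 = 180 deg, 3 = 270 deg). Assuming
    deflection angles are clockwise, correction rotates the image
    counterclockwise by the same amount."""
    rotations = {
        1: cv2.ROTATE_90_COUNTERCLOCKWISE,  # undo a 90-degree deflection
        2: cv2.ROTATE_180,                  # undo a 180-degree deflection
        3: cv2.ROTATE_90_CLOCKWISE,         # undo a 270-degree deflection
    }
    if angle_class == 0:
        return image  # class 0: no correction needed
    return cv2.rotate(image, rotations[angle_class])
```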
Step 205: evaluating the definition of the image to be detected.
An effective area is determined for an image to be measured, and in this embodiment, for example, in an online paper marking and online correction job, the effective area may be an effective text area. The left and right boundaries and the upper and lower boundaries of the effective region of the image to be detected can be initialized to obtain an initial left boundary, an initial right boundary, an initial upper boundary and an initial lower boundary of the effective region; traversing at least one external rectangular frame of a connected domain in the image to be detected, and determining the upper boundary, the lower boundary, the left boundary and the right boundary of the effective area based on the left boundary, the upper boundary and the lower boundary of the external rectangular frame of each connected domain, the initial left boundary and the initial right boundary; and determining the effective area according to the upper and lower boundaries and the left and right boundaries of the effective area. In this embodiment, the connected component can also be said to be a text line.
Specifically, the left and right boundaries of the effective text region (i.e., the effective area described in the first embodiment, taken here to be an effective text area) are determined first. Let the left boundary of the text effective region be leftBoundary, with initial value (width of the image to be detected - 1), and the right boundary be rightBoundary, with initial value 0. The circumscribed rectangular frames of the text lines are traversed in turn; let the left boundary of each rectangular frame be rectLeft and the right boundary be rectRight. Set leftBoundary equal to min(rectLeft, leftBoundary), the minimum of rectLeft and leftBoundary, and rightBoundary equal to max(rectRight, rightBoundary), the maximum of rectRight and rightBoundary. After all text lines have been processed in turn, the accumulated result gives the left and right boundaries of the effective text area.
Next, the upper and lower boundaries of the effective text area are determined. Let the upper boundary of the text effective area be upBoundary, with initial value 0, and the lower boundary be downBoundary, with initial value equal to the height of the image to be detected. The rectangular frames of the text lines are sorted by their row coordinates from small to large; upBoundary is set equal to the upper boundary row coordinate of the first text-line rectangular frame after sorting, and downBoundary is set equal to the lower boundary row coordinate of the last text-line rectangular frame after sorting.
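A plain-Python sketch of this boundary accumulation, assuming each text line is given as an axis-aligned box (x, y, w, h):

```python
def effective_text_region(line_rects, img_width, img_height):
    """Accumulate the effective text area from text-line bounding boxes,
    following the initialization and traversal described above.
    Returns (leftBoundary, upBoundary, rightBoundary, downBoundary)."""
    left_boundary = img_width - 1  # initial left boundary
    right_boundary = 0             # initial right boundary
    for (x, _y, w, _h) in line_rects:
        left_boundary = min(x, left_boundary)        # rectLeft test
        right_boundary = max(x + w, right_boundary)  # rectRight test
    # Sort text lines by row coordinate; the first line's top edge gives
    # upBoundary and the last line's bottom edge gives downBoundary.
    rows = sorted(line_rects, key=lambda r: r[1])
    up_boundary = rows[0][1] if rows else 0
    down_boundary = (rows[-1][1] + rows[-1][3]) if rows else img_height
    return left_boundary, up_boundary, right_boundary, down_boundary
```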
The effective area of the image to be detected is cut into a group of image blocks of equal size, 320 × 320. The definition level of an image is divided into 4 grades, namely severe blur, normal blur, slight blur, and clear; adding a background category divides image blocks into 5 classes. A preset scoring model is used, which may be a VGG19-network-based model. Inputting an image block into the preset scoring model yields one of 5 results, represented by 0 to 4, where 0 represents background, 1 severe blur, 2 normal blur, 3 slight blur, and 4 clear.
Let two values, vagueNum and clearNum, denote the number of blurred image blocks and the number of clear image blocks respectively, both initialized to 0. The at least one image block is input into the preset scoring model (the scoring model is trained on data collected online and then labeled as training data) to obtain a classification result for each image block. If the output is 0, the current image block is background and no operation is performed; if the output is 1, 2 or 3, the current image block is blurred and vagueNum is incremented by 1; if the output is 4, the current image block is clear and clearNum is incremented by 1. This continues until all of the at least one image block have been processed.
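The tiling and tallying loop might look like the following sketch; `scoring_model` is an assumed callable returning the class index 0 to 4 for one 320 × 320 block:

```python
BLOCK = 320  # crop size used in this embodiment

def score_blocks(region, scoring_model):
    """Cut the effective region into 320 x 320 blocks, count blurred
    (classes 1-3) and clear (class 4) blocks, and collect non-background
    predictions for the equation 9 average."""
    vague_num, clear_num, preds = 0, 0, []
    h, w = region.shape[:2]
    for top in range(0, h - BLOCK + 1, BLOCK):
        for left in range(0, w - BLOCK + 1, BLOCK):
            label = int(scoring_model(region[top:top + BLOCK,
                                             left:left + BLOCK]))
            if label == 0:
                continue            # background block: ignored
            preds.append(label)
            if label == 4:
                clear_num += 1      # clear block
            else:
                vague_num += 1      # severe, normal, or slight blur
    return vague_num, clear_num, preds
```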
Whether the image to be detected is locally blurred, wholly blurred, or clear is then judged according to the output results of the scoring model, the number of blurred image blocks, and the number of clear image blocks.
For example, if vagueNum is 0, determining that the image to be measured is clear, and the definition parameter of the image to be measured is a parameter corresponding to the image clarity;
if the vagueNum is greater than 0 and the clearNum is greater than vagueNum, determining that the image to be detected is locally blurred, and determining that the definition parameter of the image to be detected is a parameter corresponding to the local blurring;
if the image to be detected meets the formula 8 and the formula 9, determining that the image to be detected is overall fuzzy, and determining that the definition parameter of the image to be detected is a parameter corresponding to the overall fuzzy;
vagueNum > 0, (formula 8);

(Σ_{i=1}^{num} p_i) / num < 2.9, (formula 9);
where p_i denotes the prediction result of the preset scoring model for the i-th image block, and num denotes the number of non-background image blocks among the at least one image block. Formula 9 states that the average value of the prediction results over the non-background image blocks is less than 2.9.
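Putting the counting rule and formulas 8 and 9 together, the sharpness decision might be sketched as below. Here score_model stands in for the preset VGG19-based classifier, and the branch ordering for cases the embodiment leaves ambiguous (blurred, but satisfying neither the locally-blurred nor the overall-blurred condition) is an assumption:

```python
def sharpness_decision(blocks, score_model):
    """Classify the image as clear, locally blurred, or overall blurred.

    score_model(block) is assumed to return an int in 0..4
    (0 background, 1 severe blur, 2 ordinary blur, 3 slight blur, 4 clear).
    """
    preds = [score_model(b) for b in blocks]   # p_i for each image block
    non_bg = [p for p in preds if p != 0]      # background blocks are ignored
    vagueNum = sum(1 for p in non_bg if p in (1, 2, 3))
    clearNum = sum(1 for p in non_bg if p == 4)

    if vagueNum == 0:
        return "clear"
    # Formula 8 (vagueNum > 0) holds from here on; formula 9 tests whether the
    # average prediction over the num non-background blocks is below 2.9.
    if sum(non_bg) / len(non_bg) < 2.9:
        return "overall blurred"
    return "locally blurred"
```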
Step 206: the image quality is judged.
Specifically, the quality of the image to be detected is judged according to the brightness parameter obtained in step 202, the shadow parameter obtained in step 203, and the definition parameter obtained in step 205. If the image to be detected exhibits any of overexposure, over-darkness, shadow, or blur, the image quality of the image to be detected is determined to be poor, i.e., unqualified; otherwise, i.e., if the image to be detected exhibits none of overexposure, over-darkness, shadow, or blur, the image quality of the image to be detected is determined to be good, i.e., qualified.
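The final judgment is just a conjunction over the three parameters. A minimal sketch, assuming each parameter has already been reduced to a boolean defect flag (the flag names are illustrative):

```python
def image_quality_qualified(overexposed: bool, too_dark: bool,
                            has_shadow: bool, blurred: bool) -> bool:
    """Step 206: qualified only if no defect condition holds."""
    return not (overexposed or too_dark or has_shadow or blurred)
```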
In the embodiment of the application, the brightness parameter, the shadow parameter, and the definition parameter of the image to be detected are calculated automatically, which reduces manpower and improves processing efficiency without relying on any reference image, giving better adaptability. The quality of the image to be detected is determined comprehensively from the brightness parameter, the shadow parameter, and the definition parameter, so that all relevant factors are considered and the judgment of image quality is more accurate.
Embodiment III
Based on the image data processing methods described in the first and second embodiments, an embodiment of the present application provides an image data processing apparatus for performing those methods. As shown in fig. 3, the image data processing apparatus 30 includes: a segmentation module 301, a scoring module 302, a definition calculation module 303, and a quality calculation module 304;
the segmentation module 301 is configured to determine at least one image block according to an image to be detected;
the scoring module 302 is configured to input the at least one image block into a preset scoring model to obtain a definition score of the at least one image block;
the definition calculating module 303 is configured to determine a definition parameter of the image to be detected according to the definition score of the at least one image block;
the quality calculation module 304 is configured to determine the image quality of the image to be detected according to the image quality parameter of the image to be detected, where the image quality parameter of the image to be detected includes a definition parameter of the image to be detected.
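A skeletal rendering of this module layout may help orient the reader; it is purely illustrative (class and method names are assumptions, and the scoring model is left abstract):

```python
class ImageDataProcessingApparatus:
    """Mirrors apparatus 30: segmentation, scoring, definition and quality modules."""

    def __init__(self, score_model):
        self.score_model = score_model  # preset scoring model, e.g. VGG19-based

    def segment(self, image):
        """Segmentation module 301: determine at least one image block."""
        raise NotImplementedError

    def score(self, blocks):
        """Scoring module 302: definition score per image block."""
        return [self.score_model(b) for b in blocks]

    def definition_parameter(self, scores):
        """Definition calculation module 303: parameter from the scores."""
        raise NotImplementedError

    def quality(self, definition_param, **other_params):
        """Quality calculation module 304: overall image quality."""
        raise NotImplementedError
```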
Optionally, in an implementation manner, the definition calculating module 303 is further configured to determine a definition parameter of the image to be detected according to an average value of the definition scores of the at least one image block.
Optionally, in an implementation manner, the definition calculating module 303 is further configured to determine the definition parameter of the image to be detected according to the number of first-class image blocks and the number of second-class image blocks, where the first-class image blocks are image blocks whose definition score is lower than a preset score threshold, and the second-class image blocks are image blocks whose definition score is greater than or equal to the preset score threshold.
Optionally, in an implementation manner, the segmentation module 301 is further configured to determine an effective region in the image to be measured; the active area is divided into at least one image block.
Optionally, in an implementation manner, the segmentation module 301 is further configured to initialize the left and right boundaries and the upper and lower boundaries of the effective region of the image to be detected, to obtain an initial left boundary, an initial right boundary, an initial upper boundary, and an initial lower boundary of the effective region; to traverse the circumscribed rectangular frame of at least one connected domain in the image to be detected, and determine the upper, lower, left, and right boundaries of the effective region based on the left, right, upper, and lower boundaries of the circumscribed rectangular frame of each connected domain together with the initial left boundary and the initial right boundary; and to determine the effective region according to its upper and lower boundaries and its left and right boundaries.
Optionally, in an implementation, as shown in fig. 4, the image data processing apparatus 30 further includes a rotation correction module 305, configured to determine at least one connected domain in the image to be detected by using a run-length smoothing algorithm, to determine the inclination angle of the at least one connected domain, and to perform rotation correction on the image to be detected according to the inclination angle of the at least one connected domain.
Optionally, in an implementation, the rotation correction module 305 is configured to determine at least one initial connected domain in the image to be detected by using the run-length smoothing algorithm; to reject non-text regions in the at least one initial connected domain to obtain the at least one connected domain; to calculate the average value of the inclination angles of the circumscribed rectangles of the at least one connected domain to obtain an average inclination angle; and to perform rotation correction on the image to be detected according to the average inclination angle.
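A rough sketch of this tilt-correction path using OpenCV and NumPy; the horizontal morphological closing is a simplified stand-in for the run-length smoothing algorithm, the small-area filter stands in for non-text rejection, and the area threshold, kernel width, and angle handling are illustrative assumptions:

```python
import cv2
import numpy as np

def deskew(image_gray: np.ndarray) -> np.ndarray:
    """Estimate the average tilt of text-like connected domains and rotate to correct it."""
    # Binarize, then smear horizontally (a crude stand-in for run-length smoothing)
    _, binary = cv2.threshold(image_gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    smoothed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, np.ones((1, 25), np.uint8))

    contours, _ = cv2.findContours(smoothed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    angles = []
    for c in contours:
        (_, _), (w_c, h_c), angle = cv2.minAreaRect(c)
        if w_c * h_c < 500:          # reject small, likely non-text regions
            continue
        if angle < -45:              # normalize: minAreaRect's angle convention
            angle += 90              # varies across OpenCV versions
        elif angle > 45:
            angle -= 90
        angles.append(angle)
    if not angles:
        return image_gray

    mean_angle = float(np.mean(angles))
    h, w = image_gray.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), mean_angle, 1.0)
    return cv2.warpAffine(image_gray, M, (w, h), borderValue=255)
```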
Optionally, in an implementation manner, as shown in fig. 4, the image data processing apparatus 30 further includes a rotation correction module 305, configured to input the image to be detected into a preset angle classification model to obtain the deflection angle of the image to be detected, and to perform rotation correction on the image to be detected according to that deflection angle.
Optionally, in an implementation manner, as shown in fig. 5, the image data processing apparatus 30 further includes a brightness calculation module 306, configured to determine the brightness parameter of the image to be detected;
the quality calculation module 304 is configured to determine the image quality of the image to be detected according to the definition parameter of the image to be detected and the brightness parameter of the image to be detected, where the image quality parameter of the image to be detected includes the definition parameter of the image to be detected and the brightness parameter of the image to be detected.
Optionally, in an implementation manner, the brightness calculation module 306 is configured to determine the gray value of at least one pixel in the image to be detected, to determine the average offset of the gray values according to the gray value of the at least one pixel, and to determine the brightness parameter of the image to be detected according to the gray value of the at least one pixel and the average offset.
Optionally, in an implementation manner, the brightness calculation module 306 is configured to determine the brightness parameter of the image to be detected as the parameter corresponding to overexposure if the average offset is greater than a first offset and the number of pixels whose gray value is greater than or equal to a second gray threshold is greater than a first preset number; and to determine the brightness parameter of the image to be detected as the parameter corresponding to the image being too dark if the average offset is smaller than a second offset and the number of pixels whose gray value is smaller than or equal to a first gray threshold is greater than a second preset number.
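A sketch of one plausible reading of this brightness test, taking the "average offset" to be the mean deviation of the gray values from mid-gray (128); every threshold below is an illustrative assumption, not a value from the patent:

```python
import numpy as np

def brightness_parameter(gray: np.ndarray) -> str:
    """Classify an 8-bit grayscale image as 'overexposed', 'too dark' or 'normal'."""
    offset = float(gray.mean()) - 128.0          # average offset from mid-gray

    first_offset, second_offset = 40.0, -40.0    # offset thresholds (assumed)
    first_gray, second_gray = 30, 225            # gray-value thresholds (assumed)
    n = gray.size

    if offset > first_offset and (gray >= second_gray).sum() > 0.3 * n:
        return "overexposed"
    if offset < second_offset and (gray <= first_gray).sum() > 0.3 * n:
        return "too dark"
    return "normal"
```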
Optionally, in an implementation manner, as shown in fig. 6, the image data processing apparatus 30 further includes a shadow calculating module 307, configured to determine a shadow parameter of the image to be measured;
the quality calculation module 304 is configured to determine the image quality of the image to be detected according to the definition parameter of the image to be detected and the shadow parameter of the image to be detected, where the image quality parameter of the image to be detected includes the definition parameter of the image to be detected and the shadow parameter of the image to be detected.
Optionally, in an implementation manner, the shadow calculating module 307 is configured to perform binarization processing on the image to be detected to obtain a binarized image; to determine the area ratio of the shadow region to the image to be detected according to the binarized image; and to determine the shadow parameter of the image to be detected as the parameter corresponding to containing shadow when the area ratio is greater than or equal to a preset ratio, and as the parameter corresponding to no shadow when the area ratio is smaller than the preset ratio.
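One plausible sketch of the shadow test; Otsu thresholding after a median blur is assumed as the binarization step (the blur suppresses text strokes so that mainly large dark regions remain), and the preset ratio of 0.05 is an illustrative assumption:

```python
import cv2
import numpy as np

def shadow_parameter(gray: np.ndarray, preset_ratio: float = 0.05) -> str:
    """Return 'shadowed' or 'shadow-free' from the dark-area ratio of a binarized image."""
    blurred = cv2.medianBlur(gray, 21)  # suppress text strokes before thresholding
    _, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    area_ratio = float((binary == 0).sum()) / binary.size
    return "shadowed" if area_ratio >= preset_ratio else "shadow-free"
```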
Optionally, in an implementation manner, the quality calculation module 304 is configured to determine the image quality of the image to be detected according to the definition parameter of the image to be detected, the brightness parameter of the image to be detected, and the shadow parameter of the image to be detected, where the image quality parameter of the image to be detected includes the definition parameter of the image to be detected, the brightness parameter of the image to be detected, and the shadow parameter of the image to be detected.
In the embodiment of the application, at least one image block is determined according to an image to be detected; the at least one image block is input into a preset scoring model to obtain the definition score of the at least one image block; the definition parameter of the image to be detected is determined according to the definition score of the at least one image block; and the image quality of the image to be detected is determined according to the image quality parameters of the image to be detected, where the image quality parameters include the definition parameter. The image quality is thus determined by dividing the image to be detected into blocks, scoring the definition of the blocks, and deriving the definition parameter from those scores, which reduces manpower, improves processing efficiency, requires no reference image, and gives better adaptability.
Embodiment IV
Based on the image data processing methods described in the first and second embodiments, an embodiment of the present application provides an electronic device for executing those methods. As shown in fig. 7, the electronic device 70 includes: at least one processor (processor) 702, a memory (memory) 704, a bus 706, and a communication interface (Communications Interface) 708.
Wherein:
the processor 702, communication interface 708, and memory 704 communicate with one another via a communication bus 706.
The communication interface 708 is configured to communicate with other devices.
The processor 702 is configured to execute the program 710, and may specifically execute the relevant steps in the methods described in the first embodiment and the second embodiment.
In particular, the program 710 may include program code that includes computer operating instructions.
The processor 702 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application. The electronic device comprises one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs together with one or more ASICs.
The memory 704 is configured to store the program 710. The memory 704 may comprise high-speed RAM memory, and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
Embodiment V
An embodiment of the present application provides a computer storage medium storing a computer program that, when executed by a processor, implements the image data processing method described in any of the embodiments of the present application.
In the embodiment of the application, at least one image block is determined according to an image to be detected; the at least one image block is input into a preset scoring model to obtain the definition score of the at least one image block; the definition parameter of the image to be detected is determined according to the definition score of the at least one image block; and the image quality of the image to be detected is determined according to the image quality parameters of the image to be detected, where the image quality parameters include the definition parameter. The image quality is thus determined by dividing the image to be detected into blocks, scoring the definition of the blocks, and deriving the definition parameter from those scores, which reduces manpower, improves processing efficiency, requires no reference image, and gives better adaptability.
The image data processing apparatus of the embodiments of the present application exists in various forms, including but not limited to:
(1) A mobile communication device: such devices are characterized by mobile communication capabilities and are primarily targeted at providing voice and data communications. Such terminals include: smart phones (e.g., iphones), multimedia phones, functional phones, and low-end phones, among others.
(2) Ultra mobile personal computer device: the equipment belongs to the category of personal computers, has calculation and processing functions and generally has the characteristic of mobile internet access. Such terminals include: PDA, MID, and UMPC devices, etc., such as ipads.
(3) A portable entertainment device: such devices can display and play multimedia content. This type of device comprises: audio and video players (e.g., ipods), handheld game consoles, electronic books, smart toys, and portable car navigation devices.
(4) Other electronic devices with data interaction functions.
Thus, particular embodiments of the present subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may be advantageous.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
It will be apparent to those skilled in the art that embodiments of the present application may be provided as a method, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (16)

1. An image data processing method characterized by comprising:
determining at least one image block according to an image to be detected;
inputting the at least one image block into a preset grading model to obtain the definition grade of the at least one image block;
determining the definition parameter of the image to be detected according to the definition score of the at least one image block;
and determining the image quality of the image to be detected according to the image quality parameters of the image to be detected, wherein the image quality parameters of the image to be detected comprise the definition parameters of the image to be detected.
2. The method according to claim 1, wherein the determining the sharpness parameter of the image to be tested according to the sharpness score of the at least one image block comprises:
and determining the definition parameter of the image to be detected according to the average value of the definition scores of the at least one image block.
3. The method according to claim 1, wherein the determining the sharpness parameter of the image to be tested according to the sharpness score of the at least one image block comprises:
determining the definition parameter of the image to be detected according to the number of first-class image blocks and the number of second-class image blocks, wherein the first-class image blocks are image blocks with definition scores lower than a preset score threshold, and the second-class image blocks are image blocks with definition scores greater than or equal to the preset score threshold.
4. The method of claim 1, wherein determining at least one image block from the image to be tested comprises:
determining an effective area in the image to be detected; dividing the effective area into the at least one image block.
5. The method of claim 4, wherein the image to be detected comprises at least one connected domain, and wherein the determining the effective area in the image to be detected comprises:
initializing the left and right boundaries and the upper and lower boundaries of the effective area of the image to be detected to obtain an initial left boundary, an initial right boundary, an initial upper boundary and an initial lower boundary of the effective area;
traversing an external rectangular frame of at least one connected domain in the image to be detected, and determining an upper boundary, a lower boundary, a left boundary and a right boundary of the effective region based on the left boundary, the right boundary, the upper boundary and the lower boundary of the external rectangular frame of each connected domain, the initial left boundary and the initial right boundary;
and determining the effective area according to the upper and lower boundaries and the left and right boundaries of the effective area.
6. The method of claim 1, further comprising:
determining at least one connected domain in the image to be detected by using a run length smoothing algorithm, and determining the inclination angle of the at least one connected domain;
and performing rotation correction on the image to be detected according to the inclination angle of the at least one connected domain.
7. The method of claim 6, wherein the determining at least one connected domain in the image to be detected by using a run-length smoothing algorithm and determining the inclination angle of the at least one connected domain comprises:
determining at least one initial connected domain in the image to be detected by utilizing the run length smoothing algorithm; eliminating non-text regions from the at least one initial connected domain to obtain the at least one connected domain;
and rotationally correcting the image to be detected according to the inclination angle of the at least one connected domain, comprising:
calculating the average value of the inclination angles of the circumscribed rectangles of the at least one connected domain to obtain an average inclination angle; and performing rotation correction on the image to be detected according to the average inclination angle.
8. The method according to any one of claims 1-7, further comprising:
inputting the image to be detected into a preset angle classification model to obtain a deflection angle of the image to be detected;
and performing rotation correction on the image to be detected according to the deflection angle of the image to be detected.
9. The method of claim 1, further comprising: determining the brightness parameter of the image to be detected;
determining the image quality of the image to be detected according to the image quality parameters of the image to be detected, wherein the determining comprises the following steps:
and determining the image quality of the image to be detected according to the definition parameter of the image to be detected and the brightness parameter of the image to be detected, wherein the image quality parameter of the image to be detected comprises the definition parameter of the image to be detected and the brightness parameter of the image to be detected.
10. The method of claim 9, wherein the determining the brightness parameter of the image to be detected comprises:
determining the gray value of at least one pixel in the image to be detected, and determining the average offset of the gray value according to the gray value of the at least one pixel;
and determining the brightness parameter of the image to be detected according to the gray value of the at least one pixel and the average offset.
11. The method of claim 10, wherein determining the brightness parameter of the image to be tested according to the gray value of the at least one pixel and the average offset comprises:
if the average offset is greater than the first offset and the number of pixels with the gray value greater than or equal to the second gray threshold is greater than a first preset number, determining the brightness parameter of the image to be detected as a parameter corresponding to overexposure;
and if the average offset is smaller than the second offset and the number of pixels of which the gray value is smaller than or equal to the first gray threshold is larger than a second preset number, determining the brightness parameter of the image to be detected as a parameter corresponding to over-dark of the image.
12. The method of claim 1, further comprising: determining the shadow parameters of the image to be detected;
determining the image quality of the image to be detected according to the image quality parameters of the image to be detected, wherein the determining comprises the following steps:
determining the image quality of the image to be detected according to the definition parameter of the image to be detected and the shadow parameter of the image to be detected, wherein the image quality parameter of the image to be detected comprises the definition parameter of the image to be detected and the shadow parameter of the image to be detected.
13. The method of claim 12, wherein determining the shadow parameters of the image under test comprises:
carrying out binarization processing on the image to be detected to obtain a binarized image;
determining the area ratio of the shadow area to the image to be detected according to the binary image;
when the area ratio is larger than or equal to a preset ratio, determining the shadow parameter of the image to be detected as a parameter corresponding to a shadow-containing area, and when the area ratio is smaller than the preset ratio, determining the shadow parameter of the image to be detected as a parameter corresponding to a shadow-free area.
14. An image data processing apparatus characterized by comprising: the device comprises a segmentation module, a grading module, a definition calculation module and a quality calculation module;
the segmentation module is used for determining at least one image block according to an image to be detected;
the grading module is used for inputting the at least one image block into a preset grading model to obtain the definition grade of the at least one image block;
the definition calculating module is used for determining the definition parameter of the image to be detected according to the definition score of the at least one image block;
the quality calculation module is used for determining the image quality of the image to be detected according to the image quality parameters of the image to be detected, and the image quality parameters of the image to be detected comprise the definition parameters of the image to be detected.
15. An electronic device, comprising: a processor and a memory, the processor being connected to the memory, the memory storing a computer program, the processor being configured to execute the computer program to implement the image data processing method as claimed in any one of claims 1 to 13.
16. A computer storage medium, comprising: the computer storage medium stores a computer program which, when executed by a processor, implements the image data processing method as described in any one of claims 1 to 13.