WO2021143233A1 - Image sharpness detection method, system, device and storage medium

Image sharpness detection method, system, device and storage medium

Info

Publication number
WO2021143233A1
WO2021143233A1 (PCT/CN2020/121508, CN2020121508W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
sub
sharpness
blocks
detected
Application number
PCT/CN2020/121508
Other languages
English (en)
French (fr)
Inventor
魏建欢
宋佳阳
孙新
章勇
曹李军
毛晓蛟
熊超
陈卫东
Original Assignee
苏州科达科技股份有限公司 (Suzhou Keda Technology Co., Ltd.)
Application filed by 苏州科达科技股份有限公司 (Suzhou Keda Technology Co., Ltd.)
Publication of WO2021143233A1

Classifications

    • G06T 7/0002 — Image analysis; inspection of images, e.g. flaw detection (G Physics; G06 Computing; G06T Image data processing or generation)
    • G06T 2207/20081 — Training; learning
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G06T 2207/30168 — Image quality inspection

Definitions

  • The present invention relates to the technical field of image processing, and in particular to an image sharpness detection method, system, device, and storage medium.
  • Image quality evaluation can be divided into subjective evaluation methods and objective evaluation methods.
  • In subjective evaluation, observers score the image quality subjectively, generally using the Mean Opinion Score (MOS) or the Differential Mean Opinion Score (DMOS), i.e. the difference between the human eye's evaluation scores for the undistorted image and the distorted image. However, subjective evaluation is labor-intensive and time-consuming, which makes it inconvenient to use.
  • In objective evaluation, a computer calculates a quality index for the image according to some algorithm. Depending on whether a reference image is needed during evaluation, objective methods can be divided into three types: full-reference (FR), reduced-reference (RR), and no-reference (NR).
  • The sharpness of an image measures the richness of its texture details, i.e. whether the image reaches the resolution it could express. It can serve as an important index of image quality and corresponds well to people's subjective perception.
  • Low image clarity is manifested as image blur.
  • Attenuation of image sharpness may come from transmission and compression; to evaluate this attenuation, one can measure it by comparison with the image before compression and transmission. However, there are still cases where the degree of sharpness degradation caused by a focus error must be evaluated. Here the source image, i.e. the image on the camera side, is already distorted and there is no undistorted image for reference, so a no-reference image quality evaluation method must be used.
  • The no-reference image quality evaluation method directly evaluates the quality of the target image without any information from the original image, and is currently the most widely used evaluation method in practical applications.
  • Existing no-reference image quality evaluation methods generally rely on hand-crafted feature extraction. Such methods achieve good results on a single camera or on publicly available image quality data sets such as LIVE and TID2008/TID2013, but their effect in practical applications is unsatisfactory.
  • Methods based on hand-crafted feature extraction mainly suffer from small model capacity, which cannot accommodate the diversity of cameras and the complexity of scenes in actual use, so their generalization to real scenes is poor.
  • If the entire image is input into a deep learning model, the evaluation speed is very slow; and averaging over multiple sub-image blocks cannot correctly judge an image with a blurred background, so the accuracy is poor.
  • An image with a blurred background is generally taken with a large-aperture camera focused on the foreground: the foreground is sharp and the background is blurred, the purpose being to highlight the foreground. If multiple sub-image blocks are sampled, some of them will be blurred, and averaging will lower the score of the entire image, which is incorrect.
  • If the image is scaled before detection, the result is inaccurate, because scaling loses part of the sharpness information, and the sharpness of an image whose resolution exceeds the scaled resolution cannot be evaluated correctly.
  • The purpose of the present invention is to provide an image sharpness detection method, system, device, and storage medium suitable for accurate sharpness detection of different images.
  • An embodiment of the present invention provides an image sharpness detection method, which includes the following steps:
  • The merged image is input to the trained sharpness detection model, and the sharpness value output by the sharpness detection model is obtained as the sharpness value of the image to be detected.
  • The step of extracting a plurality of sub-image blocks of a preset size from the image to be detected includes extracting M × N sub-image blocks from the image to be detected; the sub-image blocks are arranged in N columns along the first direction and M rows along the second direction, and each sub-image block has length w along the first direction and length h along the second direction.
  • The spacing between two adjacent sub-image blocks along the first direction is the same value S_w, and the spacing between two adjacent sub-image blocks along the second direction is the same value S_h.
  • Merging the sub-image blocks includes merging them along the second direction to obtain a merged image, where the merged image has length w along the first direction and length M × N × h along the second direction.
  • The extraction of M × N sub-image blocks from the image to be detected includes the following steps:
  • In the image to be detected, starting from the starting position of each sub-image block, extract an area with length w along the first direction and length h along the second direction as the corresponding sub-image block;
  • Merging the sub-image blocks to obtain a merged image includes calculating the pixel value of each pixel (x′, y′) in the merged image, x′ ∈ (1, w), y′ ∈ (1, M × N × h), using the following steps:
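The extraction-and-merge procedure claimed above can be sketched in NumPy. This is a minimal sketch with 0-based indices; the stacking order (top to bottom within a block column, then left to right across columns) follows the embodiment described later in the text, and the helper name and start-coordinate lists are our own:

```python
import numpy as np

def extract_and_merge(img, M, N, w, h, xs, ys):
    """Extract M x N sub-image blocks of h rows x w columns and stack them
    vertically into a merged image of shape (M*N*h, w).
    xs: start columns of the N block columns; ys: start rows of the M block rows."""
    blocks = []
    for j in range(N):        # left to right across block columns
        for i in range(M):    # top to bottom within one column
            blocks.append(img[ys[i]:ys[i] + h, xs[j]:xs[j] + w])
    return np.concatenate(blocks, axis=0)
```

The merged image keeps the block width w and has height M × N × h, matching the claim above.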
  • The sharpness detection model includes an input layer, feature extraction layers, and a fully connected layer; there are M × N feature extraction layers, and the outputs of the feature extraction layers are connected to the input of the fully connected layer;
  • Inputting the merged image into the trained sharpness detection model includes inputting the merged image into the sharpness detection model; the input layer splits the merged image into M × N sub-image blocks, each sub-image block is input into one of the feature extraction layers, and the sharpness value output by the fully connected layer is obtained.
  • The method further includes training the sharpness detection model by the following steps:
  • The merged image corresponding to each training image and the corresponding sharpness value label are added to a training set, and the sharpness detection model is trained using the training set.
  • With the image sharpness detection method of the present invention, sub-image blocks of a specified size are extracted from the image to be detected and merged before sharpness detection, and the merged image, rather than the image to be detected, is input into the sharpness detection model; the detected sharpness value of the merged image is used as the sharpness value of the image to be detected.
  • The advantage of this is that, while ensuring a uniform size for the input of the sharpness detection model (the merged image), the original resolution of the image to be detected is not limited.
  • Original images of various resolutions yield merged images of the same size after sub-image block extraction and merging, so the method applies to sharpness detection of images with different resolutions. In addition, with the present invention there is no need to scale the image during detection. Because the content of the merged image is the information of multiple sub-image blocks scattered across the image to be detected, compared with the prior art that selects only a partial image for sharpness detection, the sharpness value of the merged image better represents the sharpness value of the image to be detected.
  • Sharpness detection on the image merged from multiple sub-image blocks realizes accurate detection of the sharpness of the image to be detected. Because the merged image represents the sharpness of the overall image rather than of a partial image, the accuracy of the sharpness value is not affected by the selection of the image region; therefore the sharpness of partially blurred images, for example an image with a blurred background, can be detected accurately.
  • An embodiment of the present invention also provides an image sharpness detection system, which applies the above image sharpness detection method. The system includes:
  • the sub-image block segmentation module is used to extract multiple sub-image blocks of preset size from the image to be detected;
  • a sub-image block merging module for merging the sub-image blocks to obtain a merged image
  • the sharpness detection module is configured to input the combined image into the trained sharpness detection model, and obtain the sharpness value output by the sharpness detection model as the sharpness value of the image to be detected.
  • The sharpness detection module inputs the merged image, instead of the image to be detected, into the sharpness detection model, and uses the detected sharpness value of the merged image as the sharpness value of the image to be detected.
  • Merged images of the same size can be obtained, so the system applies to sharpness detection of images of different resolutions.
  • The sharpness value of the merged image better represents the sharpness value of the image to be detected, and sharpness detection on the image merged from multiple sub-image blocks realizes accurate detection of the sharpness of the image to be detected;
  • Since the merged image represents the sharpness value of the overall image to be detected instead of that of a partial image, the accuracy of the sharpness value is not affected by the selection of the image region; therefore the sharpness of a partially blurred image, such as an image with a blurred background, can be detected accurately.
  • An embodiment of the present invention also provides an image sharpness detection device, including:
  • a processor; and
  • a memory in which instructions executable by the processor are stored;
  • The processor is configured to execute the steps of the image sharpness detection method by executing the executable instructions.
  • When the processor executes the executable instructions in the memory to perform the steps of the image sharpness detection method, sub-image blocks of a specified size are first extracted from the image to be detected and merged before sharpness detection; the merged image, rather than the image to be detected, is then input into the sharpness detection model, and the detected sharpness value of the merged image is used as the sharpness value of the image to be detected.
  • The advantage of this is that, while the input of the sharpness detection model (the merged image) keeps a uniform size, the original resolution of the image to be detected is not limited: original images of various resolutions yield merged images of the same size after the sub-image block extraction and merging steps, so the device applies to sharpness detection of images of different resolutions. In addition, no scaling of the image is needed during detection, because the content of the merged image is the information of multiple sub-image blocks scattered across the image to be detected.
  • The sharpness value of the merged image better represents the sharpness value of the image to be detected, and sharpness detection on the merged image realizes accurate detection of the sharpness of the image to be detected. Since the merged image represents the sharpness of the overall image rather than of a partial image, the accuracy of the sharpness value is not affected by the selection of the image region; therefore the sharpness of a partially blurred image, such as an image with a blurred background, can be detected accurately.
  • An embodiment of the present invention also provides a computer-readable storage medium storing a program which, when executed, implements the steps of the image sharpness detection method.
  • When the program in the storage medium is executed to realize the steps of the image sharpness detection method, sub-image blocks of a specified size are first extracted from the image to be detected and merged before sharpness detection; the merged image, rather than the image to be detected, is then input into the sharpness detection model, and the detected sharpness value of the merged image is used as the sharpness value of the image to be detected.
  • The advantage of this is that, while the input of the sharpness detection model (the merged image) keeps a uniform size, the original resolution of the image to be detected is not limited: original images of various resolutions yield merged images of the same size after the sub-image block extraction and merging steps.
  • The sharpness value of the merged image better represents the sharpness value of the image to be detected, and sharpness detection on the image merged from multiple sub-image blocks realizes accurate detection of the sharpness of the image to be detected. Since the merged image represents the sharpness of the overall image rather than of a partial image, the accuracy of the sharpness value is not affected by the selection of the image region; therefore the sharpness of a partially blurred image, such as an image with a blurred background, can be detected accurately.
  • FIG. 1 is a flowchart of an image sharpness detection method according to an embodiment of the present invention;
  • FIG. 2 is a flowchart of the training of a sharpness detection model according to an embodiment of the present invention;
  • FIG. 3 is a schematic diagram of extracting and merging sub-image blocks from an image according to an embodiment of the present invention;
  • FIGS. 4 and 5 are schematic diagrams of a high-definition image and its merged image according to an embodiment of the present invention;
  • FIGS. 6 and 7 are schematic diagrams of a slightly blurred image and its merged image according to an embodiment of the present invention;
  • FIGS. 8 and 9 are schematic diagrams of a severely blurred image and its merged image according to an embodiment of the present invention;
  • FIG. 10 is a schematic diagram of establishing a coordinate system in an image to be detected according to an embodiment of the present invention;
  • FIG. 11 is a schematic structural diagram of an image sharpness detection system according to an embodiment of the present invention;
  • FIG. 12 is a schematic structural diagram of an image sharpness detection device according to an embodiment of the present invention;
  • FIG. 13 is a schematic structural diagram of a computer storage medium according to an embodiment of the present invention.
  • Embodiments of the present invention provide an image sharpness detection method that can be applied to images of various resolutions and improves detection accuracy.
  • The image sharpness detection method includes the following steps:
  • S110: Extract multiple sub-image blocks of preset size from the image to be detected;
  • S120: Merge the sub-image blocks to obtain a merged image;
  • S130: Input the merged image into the trained sharpness detection model, and obtain the sharpness value output by the sharpness detection model as the sharpness value of the image to be detected.
  • Before sharpness detection, the present invention first extracts sub-image blocks of a specified size from the image and merges them through steps S110 and S120, then inputs the merged image into the trained sharpness detection model through step S130, and uses the detected sharpness value of the merged image as the sharpness value of the image to be detected.
  • The advantage of this is that, while ensuring a uniform size for the input of the sharpness detection model (the merged image), the original resolution of the image to be detected is not limited. After original images of different resolutions go through the first two steps of sub-image block extraction and merging, merged images of the same size are obtained, so the method applies to sharpness detection of images with different resolutions.
  • When the method of the present invention is used for image sharpness detection, there is no need to scale the image. Because the content of the merged image is the information of multiple sub-image blocks scattered across the image to be detected, compared with the prior art that selects only a partial image for detection, the sharpness value of the merged image better represents the sharpness value of the image to be detected, which improves detection accuracy. Since the merged image represents the sharpness of the overall image rather than of a partial image, the accuracy of the sharpness value is not affected by the selection of the image region; therefore the sharpness of partially blurred images, such as images with a blurred background, can be detected accurately. Furthermore, since the merged image is smaller than the original image, it also reduces the burden of feature extraction in the sharpness detection model, reduces the system overhead of the image sharpness detection method, and improves the efficiency of sharpness detection.
  • The image sharpness detection method further includes the following steps to train the sharpness detection model:
  • S210: Collect multiple training images and the sharpness value label of each training image;
  • S220: Extract a plurality of sub-image blocks of preset size from each training image, with the same number and size as the sub-image blocks extracted from the image to be detected when the sharpness detection model is used for detection;
  • S230: Merge the sub-image blocks of each training image to obtain a merged image;
  • S240: Add the merged image corresponding to each training image and the corresponding sharpness value label to a training set, and train the sharpness detection model using the training set.
  • All the obtained merged images can be divided into two parts: one part is added to the training set for training the sharpness detection model, and the other is added to the test set for testing the sharpness detection model.
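The split into training and test sets mentioned above can be sketched as follows (the 80/20 ratio and the seed are illustrative choices, not from the text):

```python
import random

def split_dataset(samples, train_ratio=0.8, seed=0):
    """Randomly split (merged_image, sharpness_label) pairs into a
    training set and a test set."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]
```

Shuffling before splitting keeps the two sets statistically similar across cameras and scenes.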
  • the sharpness detection model may be a deep learning model.
  • Deep learning originated from the research of artificial neural networks and is a new field in the research of machine learning theory. It mimics the processing and analysis mechanism of the human brain by constructing a deep neural network close to the analysis and learning of the human brain, and forms a more abstract high-level feature representation through layer-by-layer learning.
  • Training a deep learning model requires a large amount of data. Here the training data are the merged images: the merged images in the training set are used as training samples, and each picture requires a label whose content is the sharpness value of the image.
  • the sharpness detection model may adopt a convolutional neural network (Convolutional Neural Networks, CNN) model.
  • The convolutional neural network is an extension of the traditional neural network, developed from biologists' research on the cat's visual cortex.
  • Convolutional neural network models generally include an input layer, feature extraction layers, and a fully connected layer. The parameters of the feature extraction layers are learned from training data, which avoids manual feature extraction, and weight sharing within the same feature map greatly reduces the number of network parameters. The image can be used directly as the input of the network, avoiding the complicated feature extraction and data reconstruction of traditional recognition algorithms.
  • Convolutional neural networks have good fault tolerance, parallel processing and self-learning capabilities, and have good robustness and computational efficiency in processing two-dimensional image problems.
  • Convolutional neural networks have been used in pattern classification, object detection, and object recognition.
  • Convolutional neural networks can also be divided into multiple types, for example the VGG, ResNet, and LeNet networks, all of which can be applied to the image sharpness detection of the present invention.
  • Step S110, extracting a plurality of sub-image blocks of a preset size from the image to be detected, includes extracting M × N sub-image blocks from the image to be detected; the sub-image blocks are arranged in N columns along the first direction and M rows along the second direction, and each sub-image block has length w along the first direction and length h along the second direction. That is, the sub-image blocks are arranged as an M × N matrix in the image to be detected.
  • FIG. 3 is a schematic diagram of extracting and merging sub-image blocks from the image to be detected in a specific example: image F110 is the image to be detected, each rectangular block arranged in a matrix in F110 is a sub-image block, and image F120 is the merged image obtained by combining the sub-image blocks.
  • The M × N sub-image blocks of preset size are preferably evenly distributed in the image to be detected, so that representative image areas that better reflect the overall quality of the image to be detected can be extracted for detection, improving the accuracy of image sharpness detection.
  • In step S120, merging the sub-image blocks includes merging them along the second direction to obtain a merged image; the merged image has length w along the first direction and length M × N × h along the second direction.
  • the first direction refers to the horizontal direction in FIG. 3, and the second direction refers to the vertical direction in FIG. 3.
  • The length of the image F110 to be detected along the first direction, i.e. the width of the image, is W_0, and its length along the second direction, i.e. the height, is H_0.
  • 20 sub-image blocks are extracted from the image F110 to be detected, and the 20 sub-image blocks are arranged in a matrix of 4 rows and 5 columns, that is, the value of M is 4 and the value of N is 5.
  • After merging, a merged image F120 is obtained.
  • the merged image F120 has a width of 64 and a height of 20.
  • the values of M, N, w, and h are all examples, and in practical applications, other values can be selected, which all fall within the protection scope of the present invention.
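The size relationship between the sub-image blocks and the merged image can be checked with a few lines. Note that the specific h below is our inference from the stated 64 × 20 merged size with M = 4 and N = 5; it is not given explicitly in the text:

```python
def merged_size(M, N, w, h):
    """The merged image is w wide and M*N*h tall (blocks stacked vertically)."""
    return w, M * N * h

# With M = 4, N = 5 and a 64 x 20 merged image, each block must be 64 wide and
# 20 / (4 * 5) = 1 pixel tall -- an inference, not stated in the text.
assert merged_size(4, 5, 64, 1) == (64, 20)
```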
  • FIG. 4 shows a high-definition image F210, and FIG. 5 shows the merged image F220 obtained after sub-image block extraction and merging with F210 as the original image.
  • FIG. 6 shows a slightly blurred image F310, and FIG. 7 shows the merged image F320 obtained after sub-image block extraction and merging with F310 as the original image.
  • FIG. 8 shows a severely blurred image F410, and FIG. 9 shows the merged image F420 obtained after sub-image block extraction and merging with F410 as the original image. Comparing FIGS. 5, 7, and 9, it can be seen that the merged image reflects the overall quality of the original image; by detecting the sharpness value of the merged image, the sharpness value of the original image can be obtained more accurately.
  • The specific implementation of step S110 is described below, taking the example shown in FIG. 10.
  • The spacing between two adjacent sub-image blocks along the first direction is the same value S_w, and the spacing between two adjacent sub-image blocks along the second direction is the same value S_h.
  • S111: Determine the starting position coordinates (O_w, O_h) of the first sub-image block along the first direction and the second direction, i.e., in the example of FIG. 10, the starting position coordinates of the first sub-image block in the upper left corner of the image. The unit of the starting position coordinates is the pixel, and each coordinate value is an integer greater than or equal to 0. In this embodiment, the first pixel in the upper left corner of each sub-image block is taken as its starting position; the upper left corner of the image to be detected is taken as the origin of the coordinate system, with the horizontal direction as the x-axis and the vertical direction as the y-axis;
  • In some embodiments, the following formula may be used to determine the starting position coordinates of the first sub-image block:
  • C_1 and C_2 are preset scale coefficients, and the division in the formula is integer division, i.e. only the integer part of the result is kept and the decimal part is discarded.
  • W_0 is the length of the image to be detected in the first direction;
  • H_0 is the length of the image to be detected in the second direction.
  • The spacing S_w between two adjacent sub-image blocks along the first direction is the horizontal distance between the upper-left pixel of the previous sub-image block and the upper-left pixel of the next sub-image block.
  • The spacing S_h between two adjacent sub-image blocks along the second direction is the vertical distance between the upper-left pixel of the previous sub-image block and the upper-left pixel of the next sub-image block.
  • The distance between the first sub-image block in the horizontal direction and the left edge is O_w, and the distance between the last sub-image block in the horizontal direction (in this example, the fifth) and the right edge is also O_w; the distance between the first sub-image block in the vertical direction and the top edge is O_h, and the distance between the last sub-image block in the vertical direction (in this example, the fourth) and the bottom edge is also O_h.
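The symmetric block layout described above can be sketched as follows. This is a minimal sketch with 0-based pixel coordinates: since the text's formula for O_w and O_h (with the scale coefficients C_1 and C_2) is not reproduced here, the offsets are taken as inputs, and the strides S_w and S_h are derived from the equal-margin condition:

```python
def block_starts(W0, H0, M, N, w, h, Ow, Oh):
    """Upper-left coordinates of each sub-image block, assuming the symmetric
    layout of FIG. 10: margin Ow on both left and right, Oh on top and bottom,
    and equal strides between adjacent blocks (requires M > 1 and N > 1)."""
    Sw = (W0 - 2 * Ow - w) // (N - 1)   # horizontal stride between upper-left corners
    Sh = (H0 - 2 * Oh - h) // (M - 1)   # vertical stride between upper-left corners
    xs = [Ow + j * Sw for j in range(N)]
    ys = [Oh + i * Sh for i in range(M)]
    return xs, ys
```

With integer division the right/bottom margin may differ from O_w/O_h by a few pixels; the patent's own C_1/C_2 formula presumably handles this rounding.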
  • Step S120, merging the sub-image blocks to obtain a merged image, includes calculating the pixel value of each pixel (x′, y′) in the merged image, x′ ∈ (1, w), y′ ∈ (1, M × N × h), using the following steps:
  • [·] denotes taking the integer part of a division;
  • % denotes the remainder of a division;
  • The resulting merged image is obtained by concatenating the sub-image blocks B_ij vertically, first from top to bottom within a column and then from left to right across columns. That is, the topmost sub-image block in the merged image is the block in the first row and first column, followed in order by the blocks in the second row, first column; the third row, first column; the fourth row, first column; and so on.
  • I(P_j + x′, P_i + y′ % h) represents the pixel value of the point (P_j + x′, P_i + y′ % h) in the image I to be detected, with P_j + x′ ∈ (1, W_0) and P_i + y′ % h ∈ (1, H_0).
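The per-pixel mapping just described can be written out with 0-based indices (the text uses 1-based coordinates; Px and Py stand for the block start coordinates P_j and P_i). The sketch can be cross-checked against direct vertical stacking of the blocks:

```python
import numpy as np

def merge_by_formula(img, M, N, w, h, Px, Py):
    """Build the merged image pixel by pixel: block index k = [y'/h],
    block row i = k % M, block column j = [k/M]; the source pixel is
    (row Py[i] + y' % h, column Px[j] + x')."""
    merged = np.empty((M * N * h, w), dtype=img.dtype)
    for yp in range(M * N * h):
        k = yp // h            # which block, counting down column 1 first
        i, j = k % M, k // M   # row and column of block B_ij
        for xp in range(w):
            merged[yp, xp] = img[Py[i] + yp % h, Px[j] + xp]
    return merged
```

Running this against a plain `np.concatenate` of the blocks in the same order yields identical output, confirming the index arithmetic.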
  • the sharpness detection model includes an input layer, a feature extraction layer, and a fully connected layer.
  • The number of feature extraction layers can be set to one, in which case the merged image is input into the feature extraction layer as a whole; the feature extraction layer performs feature extraction, and its output is fed into the fully connected layer to obtain the sharpness value output by the fully connected layer.
  • the sharpness detection model may be a convolutional neural network model, and the feature extraction layer may include a convolution layer and a pooling layer.
  • the input layer of the convolutional neural network can process input data, standardize the input data, and process it into a data format that can be processed by the convolutional layer.
  • the function of the convolutional layer is to perform feature extraction on the input data. It includes multiple convolution kernels.
  • Each element of the convolution kernel corresponds to a weight coefficient and a bias.
  • When the convolution kernel is working, it scans the input features regularly, performing element-wise multiplication and summation of the input features within the receptive field and adding the bias.
  • the pooling layer is used to perform feature selection and information filtering on the feature map output by the convolution layer after the feature extraction of the convolution layer.
  • the pooling layer contains a preset pooling function, whose function is to replace the result of a single point in the feature map with the feature map statistics of its neighboring regions.
  • the fully connected layer non-linearly combines the features extracted by the convolutional layer and the pooling layer to obtain the output.
  • The convolutional layer may adopt a one-dimensional convolution kernel that performs only horizontal convolution on the merged image, with no vertical convolution; this avoids mixing rows that come from different sub-image blocks stacked in the merged image. Therefore, compared with convolutional neural network models that adopt two-dimensional or three-dimensional convolution kernels, the sharpness detection model of the present invention can extract features from the input image more quickly.
  • the number of feature extraction layers in the sharpness detection model can also be set to M×N, with all M×N feature extraction layers having the same parameters.
  • the output of the input layer is connected to the M×N feature extraction layers, and the outputs of the M×N feature extraction layers are connected to the input of the fully connected layer.
  • step S130, inputting the merged image into the trained sharpness detection model, includes inputting the merged image into the sharpness detection model; the input layer splits the merged image into M×N sub-image blocks, each of size w×h as above, each sub-image block is input into one of the feature extraction layers, and the feature maps extracted from the M×N sub-image blocks are input into the fully connected layer to obtain the sharpness value it outputs.
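The split performed by the input layer can be sketched as follows, assuming the vertical stacking described above (the function name is illustrative):

```python
import numpy as np

def split_merged(merged, M, N, w, h):
    """Split a vertically stacked merged image of shape (M*N*h, w)
    back into M*N sub-image blocks of shape (h, w) each.

    Because the merged image is a vertical concatenation, every run of h
    consecutive rows is exactly one sub-image block, so a reshape suffices.
    """
    assert merged.shape == (M * N * h, w)
    return merged.reshape(M * N, h, w)

M, N, w, h = 4, 5, 64, 2
merged = np.random.rand(M * N * h, w)
blocks = split_merged(merged, M, N, w, h)
print(blocks.shape)        # (20, 2, 64)
# The k-th block is exactly rows k*h .. (k+1)*h of the merged image.
assert np.array_equal(blocks[3], merged[3 * h:4 * h])
```

Each of the `M*N` slices can then be fed to its own feature extraction layer.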
  • the input combined image received by the sharpness detection model may be a three-channel RGB image, so the input image is a three-channel combined image with a total number of pixels of M ⁇ N ⁇ w ⁇ h.
  • the model first splits the merged image, obtaining 3×M×N sub-image blocks of w×h pixels each.
  • merging the sub-image blocks in step S120 and then splitting them inside the model facilitates the image input of the sharpness detection model.
  • inputting one three-channel merged image of M×N×w×h pixels into the sharpness detection model is more efficient to transmit, and more convenient for the model to process, than directly inputting 3×M×N sub-image blocks of w×h pixels.
  • the preprocessing of the training images can also adopt the specific implementations of steps S110 and S120 described above: each training image is preprocessed into M×N sub-image blocks of w×h pixels, and the M×N sub-image blocks are then merged into a merged image with a total of M×N×w×h pixels.
  • step S210, after acquiring the plurality of training images and the sharpness value of each, may also include randomly cropping the images for data augmentation, further expanding the number of training images; a cropped image I_c has width W_c and height H_c.
  • H_c is a positive integer greater than 1 and less than the height H_0 of the original image I_0.
  • W_c is a positive integer greater than 1 and less than the width W_0 of the original image I_0.
  • a training image may be randomly cropped to obtain a plurality of cropped images I_c; a cropped image I_c obtained by random cropping has the same sharpness value as the original image I_0.
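A minimal sketch of the random-crop augmentation described above; the crop sizes below are illustrative, and the label is simply inherited from the original image:

```python
import random
import numpy as np

def random_crop(image, Wc, Hc, rng=random):
    """Randomly crop an H0 x W0 image to Hc x Wc (1 < Wc < W0, 1 < Hc < H0).

    The crop keeps the sharpness value label of the original image, so one
    labelled image yields many labelled training samples.
    """
    H0, W0 = image.shape[:2]
    assert 1 < Wc < W0 and 1 < Hc < H0
    x = rng.randrange(W0 - Wc + 1)   # random left edge
    y = rng.randrange(H0 - Hc + 1)   # random top edge
    return image[y:y + Hc, x:x + Wc]

img = np.zeros((480, 640), dtype=np.uint8)
crops = [random_crop(img, Wc=320, Hc=240) for _ in range(4)]
assert all(c.shape == (240, 320) for c in crops)
```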
  • step S220: extracting M×N sub-image blocks of preset size from each training image.
  • each sub-image block has width w and height h, both fixed values.
  • w is a positive integer greater than 1 and less than W_c.
  • h is a positive integer greater than 1 and less than H_c.
  • all sub-image blocks have the same size; the spacing between two horizontally adjacent sub-image blocks is a fixed S_w, and the spacing between two vertically adjacent sub-image blocks is a fixed S_h.
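The fixed spacings can be computed from the image size and the grid parameters. The stride formula below is an assumption consistent with the symmetric-margin layout described elsewhere in this specification (the first and last blocks sit at distances O_w and O_h from the edges); it is a sketch, not the patent's exact formula:

```python
def block_grid(W0, H0, M, N, w, h, Ow, Oh):
    """Top-left corners of the M x N sub-image blocks on a uniform grid.

    Assumes a margin of Ow on the left/right and Oh on the top/bottom, so the
    strides Sw, Sh (integer division) spread the N columns and M rows evenly:
    Ow + (N-1)*Sw + w <= W0 - Ow, and likewise vertically.
    """
    Sw = (W0 - 2 * Ow - w) // (N - 1)
    Sh = (H0 - 2 * Oh - h) // (M - 1)
    # starts[i][j] = (P_j, P_i) for the block in row i, column j (0-based here)
    starts = [[(Ow + j * Sw, Oh + i * Sh) for j in range(N)]
              for i in range(M)]
    return Sw, Sh, starts

# Values from the running example: a 640x480 image, 4x5 grid of 64x1 strips.
Sw, Sh, starts = block_grid(W0=640, H0=480, M=4, N=5, w=64, h=1, Ow=32, Oh=16)
assert starts[-1][-1][0] + 64 <= 640 - 32   # last column still fits the margin
```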
  • the specific sub-image block segmentation method can be performed using the above steps S111 to S114.
  • step S230: merging the sub-image blocks of each training image to obtain the merged image corresponding to each training image, which specifically includes merging all the sub-image blocks corresponding to one cropped image in the vertical direction,
  • obtaining a merged image I′ of height M×h; the sharpness value of the merged image I′ is the same as that of the original image I_0.
  • each cropped image I_c corresponds to one merged image I′.
  • step S240: adding the merged image corresponding to each training image and the corresponding sharpness value label to the training set, and training the sharpness detection model with the training set, which includes constructing a regression model based on a convolutional neural network.
  • the convolutional neural network may be a deep learning network such as LeNet, Vgg, or ResNet, but the present invention is not limited thereto.
  • the software for training the model can be Tensorflow, Pytorch, Caffe, etc., and the required hardware is a computer. The merged images I′ and sharpness value labels in the training set are input into the constructed convolutional neural network, which is trained to convergence to obtain the sharpness detection model.
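As a framework-free illustration of "train a regression model to convergence" (a real implementation would be a CNN in one of the frameworks named above; the scalar gradient feature, synthetic labels, and learning rate below are illustrative assumptions, not the patent's model):

```python
import numpy as np

rng = np.random.default_rng(0)

def feature(merged):
    # Mean absolute horizontal difference: a crude sharpness proxy that is
    # larger for high-contrast (sharp) content and smaller for flat (blurred).
    return np.abs(np.diff(merged, axis=1)).mean()

# Synthetic training set: scaling the image contrast down mimics blur.
scales = (1.0, 0.6, 0.3)
X = np.array([feature(rng.random((20, 64)) * s) for s in scales])
y = np.array([100.0, 60.0, 30.0])      # sharpness value labels

wgt, b = 0.0, 0.0
for _ in range(5000):                  # plain least-squares gradient descent
    pred = wgt * X + b
    wgt -= 0.5 * 2 * np.mean((pred - y) * X)
    b   -= 0.5 * 2 * np.mean(pred - y)
```

After training, `wgt * X + b` closely matches the labels on this toy set; a CNN regressor replaces the hand-picked feature with learned ones but follows the same fit-to-labels loop.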
  • when the sharpness detection model includes one feature extraction layer,
  • the merged image I′ in the training set is input into the feature extraction layer, the extracted features are input into the fully connected layer to obtain the predicted sharpness value, which is compared with the labeled sharpness value.
  • when the sharpness detection model includes M×N feature extraction layers, the merged image I′ in the training set is split into M×N sub-image blocks, which are input into the M×N feature extraction layers respectively; the features of the M×N sub-image blocks are then input into the fully connected layer to obtain the predicted sharpness value.
  • an embodiment of the present invention also provides an image sharpness detection system, which is applied to the image sharpness detection method, and the system includes:
  • the sub-image block segmentation module M100 is used to extract a plurality of sub-image blocks of preset size from the image to be detected;
  • the sub-image block merging module M200 is used to merge the sub-image blocks to obtain a merged image
  • the sharpness detection module M300 is configured to input the combined image into the trained sharpness detection model, and obtain the sharpness value output by the sharpness detection model as the sharpness value of the image to be detected.
  • the present invention first uses the sub-image block segmentation module M100 and the sub-image block merging module M200 to extract sub-image blocks of a specified size from the image and merge them before sharpness detection, and then uses the sharpness detection module M300 to input the merged image into the trained sharpness detection model, taking the detected sharpness value of the merged image as the sharpness value of the image to be detected.
  • the advantage of this is that, while the input image of the sharpness detection model (the merged image) has a unified size, the original resolution of the image to be detected is not limited: original images to be detected of different resolutions all yield merged images of the same size after the first two steps of extraction and merging, so the system is applicable to sharpness detection of images of different resolutions.
  • with the method of the present invention, no image scaling is needed. Because the content of the merged image is information from multiple sub-image blocks scattered across the image to be detected, the sharpness value of the merged image represents the sharpness value of the image to be detected better than prior art that selects only a partial image for sharpness detection, which improves the accuracy of image sharpness detection.
  • because the merged image represents the sharpness value of the whole image to be detected rather than of a partial image, the accuracy of the sharpness value is not affected by the choice of image region for a partially blurred image; the sharpness of partially blurred images, such as images with a blurred background, can therefore be detected accurately.
  • furthermore, since the merged image is smaller than the original image, the burden of feature extraction in the sharpness detection model is reduced, the load on the image sharpness detection system is lowered, and the efficiency of image sharpness detection is improved.
  • each module in the image sharpness detection system of the present invention may adopt the implementation of the corresponding step in the image sharpness detection method described above.
  • for example, the sub-image block segmentation module M100 may adopt the implementation of step S110, the sub-image block merging module M200 may adopt the implementation of step S120, and the sharpness detection module M300 may adopt the implementation of step S130; details are not repeated here.
  • the image sharpness detection system may further include a model training module, which collects training images, processes them to obtain a training set, and uses the training set to train the sharpness detection model.
  • specifically, the model training module may train the sharpness detection model using the procedure of steps S210 to S240 described above.
  • an embodiment of the present invention also provides an image sharpness detection device, including a processor and a memory storing executable instructions of the processor, wherein the processor is configured to perform the steps of the image sharpness detection method by executing the executable instructions.
  • the electronic device 600 according to this embodiment of the present invention will be described below with reference to FIG. 12.
  • the electronic device 600 shown in FIG. 12 is only an example, and should not bring any limitation to the function and application scope of the embodiment of the present invention.
  • the electronic device 600 is represented in the form of a general-purpose computing device.
  • the components of the electronic device 600 may include but are not limited to: at least one processing unit 610, at least one storage unit 620, a bus 630 connecting different system components (including the storage unit 620 and the processing unit 610), a display unit 640, and the like.
  • the storage unit stores program code which can be executed by the processing unit 610, so that the processing unit 610 performs the steps according to the various exemplary embodiments of the present invention described in the method sections above in this specification.
  • the processing unit 610 may perform the steps shown in FIG. 1.
  • the storage unit 620 may include a readable medium in the form of a volatile storage unit, such as a random access storage unit (RAM) 6201 and/or a cache storage unit 6202, and may further include a read-only storage unit (ROM) 6203.
  • the storage unit 620 may also include a program/utility 6204 having a set of (at least one) program modules 6205, including but not limited to: an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
  • the bus 630 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, a graphics acceleration port, a processing unit, or a local bus using any of a variety of bus structures.
  • the electronic device 600 may also communicate with one or more external devices 700 (such as keyboards, pointing devices, Bluetooth devices, etc.), with one or more devices that enable a user to interact with the electronic device 600, and/or with any device (such as a router or modem) that enables the electronic device 600 to communicate with one or more other computing devices. Such communication can take place through an input/output (I/O) interface 650.
  • the electronic device 600 may also communicate with one or more networks (for example, a local area network (LAN), a wide area network (WAN), and/or a public network, such as the Internet) through the network adapter 660.
  • the network adapter 660 can communicate with other modules of the electronic device 600 through the bus 630.
  • when the processor executes the executable instructions in the memory to perform the steps of the image sharpness detection method, sub-image blocks of a specified size are first extracted from the image to be detected and merged before sharpness detection; the merged image, rather than the image to be detected, is then input into the sharpness detection model, and the detected sharpness value of the merged image is taken as the sharpness value of the image to be detected.
  • the advantage of this is that, while the input image of the sharpness detection model (the merged image) has a unified size, the original resolution of the image to be detected is not limited: original images to be detected of various resolutions all yield merged images of the same size after the first two steps of sub-image block extraction and merging, so the device is applicable to sharpness detection of images of different resolutions.
  • in addition, with the present invention no image scaling is needed during detection; since the content of the merged image is information from multiple sub-image blocks scattered across the image to be detected, the sharpness value of the merged image better represents the sharpness value of the image to be detected, and detecting the sharpness of the image merged from multiple sub-image blocks achieves accurate sharpness detection of the image to be detected.
  • since the merged image represents the sharpness value of the whole image to be detected rather than of a partial image, the accuracy of the sharpness value is not affected by the choice of image region even for partially blurred images; sharpness can therefore be detected accurately for partially blurred images, such as images with a blurred background.
  • since the merged image is smaller than the original image, the burden of feature extraction in the sharpness detection model is reduced, the processor load of the image sharpness detection device is lowered, and the efficiency of image sharpness detection is improved.
  • an embodiment of the present invention also provides a computer-readable storage medium for storing a program which, when executed, implements the steps of the image sharpness detection method.
  • various aspects of the present invention may also be implemented in the form of a program product comprising program code.
  • when the program product runs on a terminal device, the program code causes the terminal device to execute the steps according to the various exemplary embodiments of the present invention described in the method sections above in this specification.
  • a program product 800 for implementing the above method according to an embodiment of the present invention is described; it may employ a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer. However, the program product of the present invention is not limited to this.
  • the readable storage medium can be any tangible medium that contains or stores a program, and the program can be used by or combined with an instruction execution system, device, or device.
  • the program product can use any combination of one or more readable media.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • the readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying readable program code.
  • such a propagated data signal can take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing.
  • a readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate, or transmit the program for use by or in combination with an instruction execution system, apparatus, or device.
  • the program code contained on a readable medium can be transmitted by any suitable medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination of the above.
  • the program code for performing the operations of the present invention can be written in any combination of one or more programming languages.
  • the programming languages include object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar.
  • the program code can execute entirely on the user's computing device, partly on the user's device, as an independent software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
  • the remote computing device can be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computing device (for example, through the Internet using an Internet service provider).
  • when the program in the storage medium is executed to implement the steps of the image sharpness detection method, sub-image blocks of a specified size are first extracted from the image to be detected and merged before sharpness detection; the merged image, rather than the image to be detected, is then input into the sharpness detection model, and the detected sharpness value of the merged image is taken as the sharpness value of the image to be detected.
  • the advantage of this is that, while the input image of the sharpness detection model (the merged image) has a unified size, the original resolution of the image to be detected is not limited: original images to be detected of various resolutions all yield merged images of the same size after the first two steps of sub-image block extraction and merging, so the medium is applicable to sharpness detection of images of different resolutions.
  • no image scaling is needed during detection; the sharpness value of the merged image better represents the sharpness value of the image to be detected, and detecting the sharpness of the image merged from multiple sub-image blocks achieves accurate sharpness detection of the image to be detected.
  • because the merged image represents the sharpness value of the whole image to be detected rather than of a partial image, the accuracy of the sharpness value is not affected by the choice of image region for a partially blurred image; the sharpness of partially blurred images, such as images with a blurred background, can therefore be detected accurately.
  • since the merged image is smaller than the original image, the burden of feature extraction in the sharpness detection model is reduced, the load on the system executing the computer-readable storage medium is lowered, and the efficiency of image sharpness detection is improved.


Abstract

An image sharpness detection method, system, device, and storage medium. The method comprises: first extracting a plurality of sub-image blocks of preset size from an image to be detected (S110); merging the sub-image blocks to obtain a merged image (S120); and then inputting the merged image into a trained sharpness detection model to obtain the sharpness value output by the sharpness detection model as the sharpness value of the image to be detected (S130). Since sub-image blocks of a specified size are extracted from the image and merged before sharpness detection, the original resolution of the image to be detected is not limited, so the method is applicable to sharpness detection of images of different resolutions; moreover, no image scaling is needed during detection, and detecting the sharpness of the image merged from multiple sub-image blocks achieves accurate sharpness detection of the image to be detected.

Description

Image sharpness detection method, system, device, and storage medium
This application claims priority from the earlier Chinese patent application filed with the China National Intellectual Property Administration with application number CN202010053986.4 and filing date 2020.01.17, the content of which is fully incorporated into this application by reference.
Technical field
The present invention relates to the technical field of image processing, and in particular to an image sharpness detection method, system, device, and storage medium.
Background art
As the main carrier of information, images contain a large amount of useful information. In the acquisition, compression, and transmission of images, abnormal interfering factors such as noise, blur, color cast, and data loss are unavoidable; these cause degradation (distortion) of image quality and hence loss of the information the images carry.
Image quality assessment can be divided into subjective and objective methods. In subjective assessment, observers score image quality, generally expressed as the Mean Opinion Score (MOS) or the Differential Mean Opinion Score (DMOS), i.e., the difference between human scores for the undistorted and distorted images; however, subjective assessment is labor-intensive and time-consuming, and is inconvenient to use. In objective assessment, a computer calculates image quality indices according to an algorithm; depending on whether a reference image is needed, objective methods can be divided into full-reference (FR), reduced-reference (RR), and no-reference (NR) methods.
Image sharpness measures the richness of texture detail in an image and the degree to which the image attains the resolution it can represent. It can serve as an important index of image quality and corresponds well to subjective human perception; low sharpness manifests as image blur.
In practical applications, sharpness degradation may come from transmission and compression; the degree of degradation can then be assessed by comparison with the image before compression and transmission. In other cases, however, the degradation caused by focus errors must be assessed; the image at its source, i.e., at the camera, is then already distorted, no undistorted image is available for reference, and only no-reference image quality assessment can be used.
No-reference image quality assessment evaluates the target image directly, without any information about the original image, and is currently the most widely used approach in practice. Existing no-reference methods generally rely on handcrafted feature extraction; such methods achieve good results for a single camera or on public image quality datasets such as LIVE and TID2008/TID2013, but their performance in practice is unsatisfactory. Methods based on handcrafted features mainly suffer from small model capacity, an inability to account for the diversity of cameras and the complexity of real scenes, and poor generalization to real scenarios.
In addition, among existing deep-learning-based image quality assessment methods, inputting the whole image into the deep learning model makes assessment very slow, while taking multiple sub-image blocks and averaging cannot judge images with a blurred background and is inaccurate. An image with a blurred background is generally shot with a large-aperture camera focused on the foreground, so that the foreground is sharp and the background blurred in order to emphasize the foreground; if multiple sub-image blocks are taken, some of them are blurred, and averaging incorrectly drags down the score of the whole image. If the image is scaled to a certain size (for example, a resolution of 224x224) before being input into the deep learning model, the result is inaccurate, because scaling has already lost some sharpness information and the sharpness of images whose resolution exceeds the scaled resolution cannot be assessed correctly.
Summary of the invention
In view of the problems in the prior art, the object of the present invention is to provide an image sharpness detection method, system, device, and storage medium suitable for accurate sharpness detection of different images.
An embodiment of the present invention provides an image sharpness detection method, comprising the following steps:
extracting a plurality of sub-image blocks of preset size from an image to be detected;
merging the sub-image blocks to obtain a merged image;
inputting the merged image into a trained sharpness detection model, and obtaining the sharpness value output by the sharpness detection model as the sharpness value of the image to be detected.
Optionally, the step of extracting a plurality of sub-image blocks of preset size from the image to be detected comprises extracting M×N sub-image blocks from the image to be detected, the sub-image blocks being arranged in N columns along a first direction and M rows along a second direction, each sub-image block having length w along the first direction and length h along the second direction.
Optionally, among the M×N sub-image blocks of preset size, the spacing between two adjacent sub-image blocks along the first direction is the same value S_w, and the spacing between two adjacent sub-image blocks along the second direction is the same value S_h.
Optionally, merging the sub-image blocks comprises merging the sub-image blocks along the second direction to obtain a merged image whose length along the first direction is w and whose length along the second direction is M×N×h.
Optionally, extracting the M×N sub-image blocks from the image to be detected comprises the following steps:
determining the starting position coordinates (O_w, O_h) of the sub-image block that is 1st along the first direction and 1st along the second direction;
calculating the spacing S_w between two adjacent sub-image blocks along the first direction and the spacing S_h between two adjacent sub-image blocks along the second direction;
determining the starting coordinates (P_j, P_i) of the sub-image block that is j-th along the first direction and i-th along the second direction, j∈(1,N), i∈(1,M);
in the image to be detected, extracting, from the starting position of each sub-image block, a region of length w along the first direction and length h along the second direction as the corresponding sub-image block;
merging the sub-image blocks to obtain a merged image comprises calculating the pixel value of each pixel (x′, y′) of the merged image, x′∈(1,w), y′∈(1,M×N×h), by the following steps:
calculating, for each pixel (x′, y′) of the merged image, the index j along the first direction and the index i along the second direction of the corresponding sub-image block;
calculating the pixel value I′(x′, y′) of each pixel (x′, y′) of the merged image.
Optionally, the sharpness detection model comprises an input layer, feature extraction layers, and a fully connected layer; there are M×N feature extraction layers, and the outputs of the feature extraction layers are connected to the input of the fully connected layer;
inputting the merged image into the trained sharpness detection model comprises inputting the merged image into the sharpness detection model, where the input layer splits the merged image into M×N sub-image blocks, each sub-image block is input into one of the feature extraction layers, and the sharpness value output by the fully connected layer is obtained.
Optionally, the method further comprises training the sharpness detection model by the following steps:
collecting a plurality of training images and a sharpness value label for each training image;
extracting a plurality of sub-image blocks of preset size from each training image;
merging the sub-image blocks of each training image to obtain a merged image corresponding to each training image;
adding the merged image corresponding to each training image and the corresponding sharpness value label to a training set, and training the sharpness detection model with the training set.
With the image sharpness detection method of the present invention, sub-image blocks of a specified size are first extracted from the image to be detected and merged before sharpness detection, the merged image rather than the image to be detected is then input into the sharpness detection model, and the detected sharpness value of the merged image is taken as the sharpness value of the image to be detected. The advantage of this is that, while the input image of the sharpness detection model (the merged image) has a unified size, the original resolution of the image to be detected is not limited: original images to be detected of various resolutions all yield merged images of the same size after the first two steps of sub-image block extraction and merging, so the method is applicable to sharpness detection of images of different resolutions. In addition, with the present invention no image scaling is needed during detection; since the content of the merged image is information from multiple sub-image blocks scattered across the image to be detected, the sharpness value of the merged image represents the sharpness value of the image to be detected better than prior art that selects only a partial image for sharpness detection, and detecting the sharpness of the image merged from multiple sub-image blocks achieves accurate sharpness detection of the image to be detected. Since the merged image represents the sharpness value of the whole image to be detected rather than of a partial image, the accuracy of the sharpness value is not affected by the choice of image region even for partially blurred images; sharpness can therefore be detected accurately for partially blurred images, such as images with a blurred background.
An embodiment of the present invention further provides an image sharpness detection system, applied to the image sharpness detection method, the system comprising:
a sub-image block segmentation module, configured to extract a plurality of sub-image blocks of preset size from an image to be detected;
a sub-image block merging module, configured to merge the sub-image blocks to obtain a merged image;
a sharpness detection module, configured to input the merged image into a trained sharpness detection model and obtain the sharpness value output by the sharpness detection model as the sharpness value of the image to be detected.
With the image sharpness detection system of the present invention, the sub-image block segmentation module extracts sub-image blocks of a specified size from the image to be detected, the sub-image block merging module merges them, and the sharpness detection module inputs the merged image rather than the image to be detected into the sharpness detection model, taking the detected sharpness value of the merged image as the sharpness value of the image to be detected. The advantage of this is that, while the input image of the sharpness detection model (the merged image) has a unified size, the original resolution of the image to be detected is not limited: original images to be detected of various resolutions all yield merged images of the same size after the first two steps of sub-image block extraction and merging, so the system is applicable to sharpness detection of images of different resolutions. In addition, no image scaling is needed during detection; since the content of the merged image is information from multiple sub-image blocks scattered across the image to be detected, the sharpness value of the merged image represents the sharpness value of the image to be detected better than prior art that selects only a partial image, and detecting the sharpness of the image merged from multiple sub-image blocks achieves accurate sharpness detection of the image to be detected. Since the merged image represents the sharpness value of the whole image to be detected rather than of a partial image, the accuracy of the sharpness value is not affected by the choice of image region even for partially blurred images; sharpness can therefore be detected accurately for partially blurred images, such as images with a blurred background.
An embodiment of the present invention further provides an image sharpness detection device, comprising:
a processor;
a memory storing executable instructions of the processor;
wherein the processor is configured to perform the steps of the image sharpness detection method by executing the executable instructions.
With the image sharpness detection device of the present invention, when its processor executes the executable instructions in the memory to perform the steps of the image sharpness detection method, sub-image blocks of a specified size are first extracted from the image to be detected and merged before sharpness detection, the merged image rather than the image to be detected is then input into the sharpness detection model, and the detected sharpness value of the merged image is taken as the sharpness value of the image to be detected. The advantage of this is that, while the input image of the sharpness detection model (the merged image) has a unified size, the original resolution of the image to be detected is not limited: original images to be detected of various resolutions all yield merged images of the same size after the first two steps of sub-image block extraction and merging, so the device is applicable to sharpness detection of images of different resolutions. In addition, no image scaling is needed during detection; since the content of the merged image is information from multiple sub-image blocks scattered across the image to be detected, the sharpness value of the merged image represents the sharpness value of the image to be detected better than prior art that selects only a partial image, and detecting the sharpness of the image merged from multiple sub-image blocks achieves accurate sharpness detection of the image to be detected. Since the merged image represents the sharpness value of the whole image to be detected rather than of a partial image, the accuracy of the sharpness value is not affected by the choice of image region even for partially blurred images; sharpness can therefore be detected accurately for partially blurred images, such as images with a blurred background.
An embodiment of the present invention further provides a computer-readable storage medium for storing a program, where the program, when executed, implements the steps of the image sharpness detection method.
With the computer-readable storage medium of the present invention, when the program in the storage medium is executed to implement the steps of the image sharpness detection method, sub-image blocks of a specified size are first extracted from the image to be detected and merged before sharpness detection, the merged image rather than the image to be detected is then input into the sharpness detection model, and the detected sharpness value of the merged image is taken as the sharpness value of the image to be detected. The advantage of this is that, while the input image of the sharpness detection model (the merged image) has a unified size, the original resolution of the image to be detected is not limited: original images to be detected of various resolutions all yield merged images of the same size after the first two steps of sub-image block extraction and merging, so the medium is applicable to sharpness detection of images of different resolutions. In addition, no image scaling is needed during detection; since the content of the merged image is information from multiple sub-image blocks scattered across the image to be detected, the sharpness value of the merged image represents the sharpness value of the image to be detected better than prior art that selects only a partial image, and detecting the sharpness of the image merged from multiple sub-image blocks achieves accurate sharpness detection of the image to be detected. Since the merged image represents the sharpness value of the whole image to be detected rather than of a partial image, the accuracy of the sharpness value is not affected by the choice of image region even for partially blurred images; sharpness can therefore be detected accurately for partially blurred images, such as images with a blurred background.
Brief description of the drawings
Other features, objects, and advantages of the present invention will become more apparent from the detailed description of non-limiting embodiments made with reference to the following drawings.
Fig. 1 is a flowchart of an image sharpness detection method according to an embodiment of the present invention;
Fig. 2 is a flowchart of training the sharpness detection model according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of extracting sub-image blocks from an image and merging them according to an embodiment of the present invention;
Figs. 4 and 5 are schematic diagrams of a high-definition image and its merged image according to an embodiment of the present invention;
Figs. 6 and 7 are schematic diagrams of a slightly blurred image and its merged image according to an embodiment of the present invention;
Figs. 8 and 9 are schematic diagrams of a severely blurred image and its merged image according to an embodiment of the present invention;
Fig. 10 is a schematic diagram of establishing a coordinate system in the image to be detected according to an embodiment of the present invention;
Fig. 11 is a schematic structural diagram of an image sharpness detection system according to an embodiment of the present invention;
Fig. 12 is a schematic structural diagram of an image sharpness detection device according to an embodiment of the present invention;
Fig. 13 is a schematic structural diagram of a computer storage medium according to an embodiment of the present invention.
Detailed description of the embodiments
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the example embodiments can be implemented in many forms and should not be construed as limited to those set forth herein; rather, these embodiments are provided so that the present invention is thorough and complete and fully conveys the concept of the example embodiments to those skilled in the art. The same reference numerals in the figures denote the same or similar structures, and repeated description of them is omitted.
To solve the technical problems of the prior art, an embodiment of the present invention provides an image sharpness detection method that is applicable to sharpness detection of images of various resolutions and can improve detection accuracy.
As shown in Fig. 1, the image sharpness detection method comprises the following steps:
S110: extracting a plurality of sub-image blocks of preset size from an image to be detected;
S120: merging the sub-image blocks to obtain a merged image;
S130: inputting the merged image into a trained sharpness detection model, and obtaining the sharpness value output by the sharpness detection model as the sharpness value of the image to be detected.
Through steps S110 and S120, the present invention first extracts sub-image blocks of a specified size from the image and merges them before sharpness detection, and then, through step S130, inputs the merged image into the trained sharpness detection model, taking the detected sharpness value of the merged image as the sharpness value of the image to be detected. The advantage of this is that, while the input image of the sharpness detection model (the merged image) has a unified size, the original resolution of the image to be detected is not limited: original images to be detected of various resolutions all yield merged images of the same size after the first two steps of sub-image block extraction and merging, so the method is applicable to sharpness detection of images of different resolutions. Moreover, when the method of the present invention is used for sharpness detection, no image scaling is needed; since the content of the merged image is information from multiple sub-image blocks scattered across the image to be detected, the sharpness value of the merged image represents the sharpness value of the image to be detected better than prior art that selects only a partial image, which improves the accuracy of sharpness detection. Since the merged image represents the sharpness value of the whole image to be detected rather than of a partial image, the accuracy of the sharpness value is not affected by the choice of image region for partially blurred images; sharpness can therefore be detected accurately for partially blurred images, such as images with a blurred background. Furthermore, since the merged image is smaller than the original image, the burden of feature extraction in the sharpness detection model is reduced, the load on the system running the image sharpness detection method is lowered, and the efficiency of sharpness detection is improved.
As shown in Fig. 2, in this embodiment the image sharpness detection method further comprises training the sharpness detection model by the following steps:
S210: collecting a plurality of training images and a sharpness value label for each training image;
first, a relatively large batch of images is collected, some of which are blurred, and each image is labeled with a sharpness value according to its sharpness; for example, 2000 images including at least 500 blurred images may be collected; the specific number of images can be chosen as needed, and a blurred image may be defined as an image whose sharpness value is below a preset threshold;
S220: extracting a plurality of sub-image blocks of preset size from each training image; the number and size of the sub-image blocks here are the same as those extracted from the image to be detected when the sharpness detection model performs detection;
S230: merging the sub-image blocks of each training image to obtain a merged image corresponding to each training image;
S240: adding the merged image corresponding to each training image and the corresponding sharpness value label to the training set, and training the sharpness detection model with the training set.
In practice, all the obtained merged images may be divided into two parts: one part is added to the training set for training the sharpness detection model, and the other part is added to the test set for testing the sharpness detection model.
In this embodiment, the sharpness detection model may be a deep learning model. Deep learning originated from research on artificial neural networks and is a young field of machine learning theory. By constructing deep neural networks that approximate how the human brain analyzes and learns, it imitates the processing and analysis mechanisms of the brain, forming more abstract high-level feature representations from low-level features through layer-by-layer learning. Training a deep learning model requires a large amount of data; when applied to the image sharpness detection of the present invention, the training data are the merged images: the merged images in the training set serve as training samples, and each image needs a label whose content is the image's sharpness value.
Further, in this embodiment the sharpness detection model may adopt a convolutional neural network (CNN) model. Convolutional neural networks are an extension of traditional neural networks, developed by biologists from studies of the cat visual cortex. A CNN model generally comprises an input layer, feature extraction layers, and a fully connected layer; the parameters of the feature extraction layers are learned from training data, avoiding handcrafted feature extraction, and weight sharing within a feature map greatly reduces the number of network parameters. Images can be fed directly into the network, avoiding the complex feature extraction and data reconstruction of traditional recognition algorithms. CNNs have good fault tolerance, parallel processing capability, and self-learning capability, with good robustness and computational efficiency on two-dimensional image problems; their generalization ability is significantly better than that of other methods, and they have been applied to pattern classification, object detection, object recognition, and so on. There are also many kinds of CNNs, such as Vgg, ResNet, and LeNet networks, all of which can be applied to the image sharpness detection of the present invention.
In this embodiment, step S110, extracting a plurality of sub-image blocks of preset size from the image to be detected, comprises extracting M×N sub-image blocks from the image to be detected, the sub-image blocks being arranged in N columns along a first direction and M rows along a second direction, each sub-image block having length w along the first direction and length h along the second direction; that is, the sub-image blocks are arranged in the image to be detected in the form of an M×N matrix.
Fig. 3 is a schematic diagram of extracting sub-image blocks from an image to be detected and merging them in a specific example. Image F110 is a schematic image to be detected; the rectangular blocks arranged in a matrix in image F110 are schematic sub-image blocks; image F120 is the schematic merged image of the sub-image blocks. In this example, the M×N sub-image blocks of preset size are preferably uniformly distributed over the image to be detected, so that more representative image regions that better reflect the overall quality of the image to be detected are extracted for detection, improving the accuracy of image sharpness detection.
In this embodiment, step S120, merging the sub-image blocks, comprises merging the sub-image blocks along the second direction to obtain a merged image whose length along the first direction is w and whose length along the second direction is M×N×h.
Taking the example of Fig. 3, the first direction is the horizontal direction in Fig. 3 and the second direction is the vertical direction. The length of the image to be detected F110 along the first direction, i.e. its width, is W_0, and its length along the second direction, i.e. its height, is H_0. In this example, 20 sub-image blocks are extracted from the image F110 and arranged in a matrix of 4 rows and 5 columns, i.e. M = 4 and N = 5. Each extracted sub-image block is 64 pixels wide and 1 pixel high, i.e. w = 64 and h = 1. After the 20 sub-image blocks are merged in the vertical direction, a merged image F120 of width 64 and height 20 is obtained. The values of M, N, w, and h here are examples; other values may be chosen in practice, all within the scope of the present invention.
Fig. 4 shows a high-definition image F210, and Fig. 5 shows the merged image F220 obtained by extracting and merging sub-image blocks from F210 as the original image. Fig. 6 shows a slightly blurred image F310, and Fig. 7 shows its merged image F320. Fig. 8 shows a severely blurred image F410, and Fig. 9 shows its merged image F420. Comparing Figs. 5, 7, and 9, it can be seen that the merged image reflects the overall quality of the original image fairly faithfully; by detecting the sharpness value of the merged image, the sharpness value of the original image can be obtained fairly accurately.
The specific implementation of step S110 is described below using the example shown in Fig. 10. In this embodiment, among the M×N sub-image blocks of preset size, the spacing between two adjacent sub-image blocks along the first direction is the same value S_w, and the spacing between two adjacent sub-image blocks along the second direction is the same value S_h. Specifically, in step S110, the M×N sub-image blocks are extracted from the image to be detected by the following steps:
S111: determining the starting position coordinates (O_w, O_h) of the sub-image block that is 1st along the first direction and 1st along the second direction, i.e., in the example of Fig. 10, the starting position coordinates of the first sub-image block at the upper-left corner of the image; the unit of these coordinates is the pixel, and the coordinate values are integers greater than or equal to 0. In this embodiment, the first pixel at the upper-left corner of each sub-image block is taken as its starting position; the upper-left corner of the image to be detected may be taken as the origin of the coordinate system, with the horizontal direction as the x axis and the vertical direction as the y axis.
Specifically, the starting position coordinates of the first sub-image block may be determined by the following formulas:
O_w = W_0 / C_1
O_h = H_0 / C_2
Here, C_1 and C_2 are preset proportional coefficients, and the division in the formulas is integer division, i.e., only the integer part of the result is kept and the fractional part is discarded.
S112: calculating the spacing S_w between two horizontally adjacent sub-image blocks and the spacing S_h between two vertically adjacent sub-image blocks according to the following formulas:
S_w = (W_0 − 2×O_w − w) / (N − 1)
S_h = (H_0 − 2×O_h − h) / (M − 1)
where W_0 is the length of the image to be detected along the first direction and H_0 is its length along the second direction.
As shown in Fig. 10, the spacing S_w along the first direction between two adjacent sub-image blocks is the horizontal distance between the upper-left pixel of the former block and the upper-left pixel of the latter block, and the spacing S_h along the second direction is the vertical distance between the upper-left pixel of the former block and the upper-left pixel of the latter block. In this embodiment, the distance between the first sub-image block in the horizontal direction and the left edge is O_w, and the distance between the last sub-image block in the horizontal direction (the fifth in this example) and the right edge is also O_w; the distance between the first sub-image block in the vertical direction and the top edge is O_h, and the distance between the last sub-image block in the vertical direction (the fourth in this example) and the bottom edge is also O_h.
S113: determining the starting coordinates (P_j, P_i) of the sub-image block B_ij that is j-th along the first direction and i-th along the second direction, j∈(1,N), i∈(1,M), according to the following formulas:
P_j = O_w + (j−1)×S_w
P_i = O_h + (i−1)×S_h
S114: in the image to be detected, extracting, from the starting position of each sub-image block, the region of length w along the first direction and length h along the second direction as the corresponding sub-image block.
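Steps S113 and S114 can be sketched as follows; the column-major block order chosen here matches the merge order described for step S120 below, and the parameter values are illustrative:

```python
import numpy as np

def extract_blocks(I, M, N, w, h, Ow, Oh, Sw, Sh):
    """Steps S113-S114: cut the M x N sub-image blocks out of image I.

    The block in column j (first direction) and row i (second direction),
    1-based, starts at (Pj, Pi) = (Ow + (j-1)*Sw, Oh + (i-1)*Sh); the block
    itself is the w x h region from that corner. Blocks are returned column
    by column, top to bottom within each column.
    """
    blocks = []
    for j in range(1, N + 1):
        for i in range(1, M + 1):
            Pj = Ow + (j - 1) * Sw
            Pi = Oh + (i - 1) * Sh
            blocks.append(I[Pi:Pi + h, Pj:Pj + w])   # rows are the 2nd direction
    return blocks

I = np.arange(100 * 120).reshape(100, 120)           # toy 120x100 image
blocks = extract_blocks(I, M=4, N=5, w=16, h=8, Ow=4, Oh=2, Sw=24, Sh=28)
assert len(blocks) == 20 and all(b.shape == (8, 16) for b in blocks)
# First block starts at the grid origin (Ow, Oh):
assert np.array_equal(blocks[0], I[2:10, 4:20])
```

Stacking the returned list vertically (e.g. `np.vstack(blocks)`) yields a merged image of width w and height M×N×h.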
In this embodiment, in step S120, merging the sub-image blocks to obtain the merged image comprises calculating the pixel value of each pixel (x′, y′) of the merged image, x′∈(1,w), y′∈(1,M×N×h), by the following steps:
S121: calculating, for each pixel (x′, y′) of the merged image I′, the index j along the first direction and the index i along the second direction of the corresponding sub-image block according to the following formulas:
i = [y′/h] % M
j = [y′/(h×M)]
where [] denotes taking the integer part and % denotes the remainder of the division.
The merged image thus obtained is formed by merging the sub-image blocks B_ij in the vertical direction, first from top to bottom and then from left to right. That is, the topmost sub-image block in the merged image is the block in row 1, column 1; going downward come the block in row 2, column 1, the block in row 3, column 1, the block in row 4, column 1, the block in row 1, column 2, the block in row 2, column 2, the block in row 3, column 2, the block in row 4, column 2, the block in row 1, column 3, and so on.
S122: calculating the pixel value I′(x′, y′) of each pixel (x′, y′) of the merged image I′ according to the following formula:
I′(x′, y′) = I(P_j + x′, P_i + y′ % h)
where I(P_j + x′, P_i + y′ % h) denotes the pixel value of the point (P_j + x′, P_i + y′ % h) in the image to be detected I, with P_j + x′∈(1, W_0) and P_i + y′ % h∈(1, H_0).
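The mapping in steps S121 and S122 can be checked numerically with a 0-based restatement (indices shifted from the 1-based formulas above; all parameter values are illustrative):

```python
import numpy as np

# 0-based restatement of the mapping: for a merged-image pixel (x, y), the
# source block index is k = y // h, with row i = k % M and column j = k // M
# (column-major stacking order), and the source pixel is
# I[Pi + y % h, Pj + x] where Pj = Ow + j*Sw and Pi = Oh + i*Sh.
M, N, w, h = 3, 2, 4, 2
Ow, Oh, Sw, Sh = 1, 1, 5, 3
I = np.arange(20 * 20).reshape(20, 20)

blocks = [I[Oh + i * Sh: Oh + i * Sh + h, Ow + j * Sw: Ow + j * Sw + w]
          for j in range(N) for i in range(M)]      # column-major order
merged = np.vstack(blocks)                           # shape (M*N*h, w)

for y in range(M * N * h):
    for x in range(w):
        k = y // h
        i, j = k % M, k // M
        assert merged[y, x] == I[Oh + i * Sh + y % h, Ow + j * Sw + x]
```

The exhaustive assertion confirms that vertical stacking and the per-pixel index formulas describe the same merged image.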
In this embodiment, the sharpness detection model comprises an input layer, a feature extraction layer, and a fully connected layer. The feature extraction layer may be set to one, in which case the merged image is input into the feature extraction layer as a whole, the feature extraction layer extracts features and passes them to the fully connected layer, and the sharpness value output by the fully connected layer is obtained. The sharpness detection model may be a convolutional neural network model, and the feature extraction layer may comprise a convolutional layer and a pooling layer. The input layer of the CNN can process the input data, standardizing it into a data format the convolutional layer can handle. The function of the convolutional layer is to perform feature extraction on the input data; it contains multiple convolution kernels, and each element of a kernel corresponds to a weight coefficient and a bias. When a convolution kernel operates, it scans the input features at regular strides, performing element-wise multiplication and summation over the receptive field and adding the bias. The pooling layer performs feature selection and information filtering on the feature maps output by the convolutional layer after feature extraction; it contains a preset pooling function whose role is to replace the result at a single point of the feature map with a statistic of its neighborhood. The fully connected layer nonlinearly combines the features extracted by the convolutional and pooling layers to produce the output.
In this embodiment, since the M×N sub-image blocks are merged in the vertical direction before the merged image is input into the CNN model, the convolutional layer may use one-dimensional convolution kernels, convolving the merged image only in the horizontal direction and not in the vertical direction. Compared with CNN models using two-dimensional or three-dimensional kernels, the sharpness detection model of the present invention can therefore extract features from the input image more quickly.
In another implementation, the number of feature extraction layers in the sharpness detection model may also be set to M×N, with all M×N feature extraction layers having the same parameters; the output of the input layer is connected to the M×N feature extraction layers, and their outputs are connected to the input of the fully connected layer.
Step S130, inputting the merged image into the trained sharpness detection model, then comprises inputting the merged image into the model; the input layer splits the merged image into M×N sub-image blocks of size w×h each, each sub-image block is input into one feature extraction layer, and the feature maps extracted from the M×N sub-image blocks are input into the fully connected layer to obtain the sharpness value it outputs.
In practice, the merged image received by the sharpness detection model may be a three-channel RGB image, so the input is a three-channel merged image with a total of M×N×w×h pixels. The model first splits the merged image, obtaining 3×M×N sub-image blocks of w×h pixels each. In this embodiment, merging the sub-image blocks in step S120 and then splitting them inside the model facilitates the image input of the sharpness detection model: inputting one three-channel M×N×w×h merged image is more efficient to transmit, and more convenient for the model to process, than directly inputting 3×M×N sub-image blocks of w×h pixels.
在该实施例中,对所述训练用图像的预处理也可以采用如上所述步骤S110和步骤S120的具体实施方式,将各个训练用图像预处理为M×N个像素数w×h的子图像块之后,再将M×N个子图像块合并为总像素数为M×N×w×h的合并图像。
所述步骤S210中:采集多个训练用图像和各个训练用图像的清晰度值标记之后,还可以包括对图像进行随机裁剪进行数据增广,进一步扩大训练用图像的数量,裁剪后的图像I c的宽度为W c、高度为H c。H c为大于1小于原图像I 0高度H 0的正整数,W c为大于1小于原图像I 0宽度W 0的正整数。
一张训练用图像可以随机裁剪得到多个裁剪图像I c,从一张图像中随机裁剪得到的裁剪图像I c具有与原图像I 0相同的清晰度值。
所述步骤S220:从每个训练用图像中提取M×N个预设大小的子图像块。每个子图像块的宽度为w,高度为h。w和h均为固定值。w为大于1小于W c的正整数,h为大于1小于H c的正整数。所有子图像块大小相同,并且在水平方向上相邻两个子图像块的间距均为固定的S w,在竖直方向上相邻两个子图像块的间距均为固定的S h。具体子图像块的分割方式可以采用如上步骤S111~S114的流程进行。
Step S230: merge the sub-image blocks of each training image to obtain the merged image corresponding to each training image. Specifically, all sub-image blocks corresponding to one cropped image are stacked vertically to obtain a merged image I′ of height M×N×h; the merged image I′ has the same sharpness value as the original image I0.
If a training image was previously cropped multiple times, yielding several cropped images Ic, then each cropped image Ic corresponds to one merged image I′.
Step S240: add the merged image and the sharpness value label corresponding to each training image to a training set, and train the sharpness detection model with the training set. This comprises building a convolutional-neural-network regression model; the network may be a deep learning architecture such as LeNet, VGG or ResNet, although the present invention is not limited to these. The software used to train the model may be TensorFlow, PyTorch, Caffe or the like, and the hardware required is a computer. The merged images I′ and the sharpness value labels in the training set are fed into the constructed convolutional neural network, which is trained until convergence to obtain the sharpness detection model.
When the sharpness detection model contains a single feature extraction layer, each merged image I′ in the training set is fed into the feature extraction layer, the extracted features are passed to the fully connected layer, and the predicted sharpness value obtained is compared with the labeled sharpness value. When the sharpness detection model contains M×N feature extraction layers, each merged image I′ in the training set is split into M×N sub-image blocks that are fed into the M×N feature extraction layers respectively; the features of the M×N sub-image blocks are then passed to the fully connected layer to obtain the predicted sharpness value.
As shown in Fig. 11, an embodiment of the present invention further provides an image sharpness detection system, applied to the image sharpness detection method described above, the system comprising:
a sub-image block partitioning module M100, configured to extract a plurality of sub-image blocks of a preset size from an image to be detected;
a sub-image block merging module M200, configured to merge the sub-image blocks to obtain a merged image;
a sharpness detection module M300, configured to feed the merged image into a trained sharpness detection model and take the sharpness value output by the sharpness detection model as the sharpness value of the image to be detected.
The present invention first uses the sub-image block partitioning module M100 and the sub-image block merging module M200 to extract sub-image blocks of a specified size from the image and merge them before sharpness detection; the sharpness detection module M300 then feeds the merged image into the trained sharpness detection model and takes the detected sharpness value of the merged image as that of the image to be detected. The benefit is that, while the model's input, the merged image, has a uniform size, no constraint is placed on the original resolution of the image to be detected: original images of any resolution yield merged images of identical size after the sub-image block extraction and merging steps, so the method applies to sharpness detection of images of different resolutions. Moreover, no image scaling is needed when detecting sharpness with the method of the present invention. Because the merged image carries information from multiple sub-image blocks scattered across the image to be detected, its sharpness value represents that of the whole image better than prior-art approaches that select only a local region for sharpness detection, improving detection accuracy. And since the merged image reflects the sharpness of the whole image to be detected rather than of a local region, the choice of image region cannot distort the sharpness value for partially blurred images, so partially blurred images, for example images with background bokeh, can be assessed accurately. Further, because the merged image is smaller than the original image, the feature extraction burden on the sharpness detection model is reduced, the overhead of the image sharpness detection system is lowered, and the efficiency of image sharpness detection is improved.
The functions of the modules of the image sharpness detection system of the present invention may be implemented as the corresponding steps of the image sharpness detection method described above. For example, the sub-image block partitioning module M100 may implement step S110, the sub-image block merging module M200 may implement step S120, and the sharpness detection module M300 may implement step S130; details are not repeated here.
In this embodiment, the image sharpness detection system may further comprise a model training module configured to collect training images, process them to obtain a training set, and train the sharpness detection model with the training set. Specifically, the model training module may train the sharpness detection model following steps S210–S240 above.
An embodiment of the present invention further provides an image sharpness detection device, comprising a processor and a memory storing instructions executable by the processor, wherein the processor is configured to perform the steps of the image sharpness detection method by executing the executable instructions.
As those skilled in the art will appreciate, aspects of the present invention may be implemented as a system, a method or a program product. Accordingly, aspects of the invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software, which may collectively be referred to herein as a "circuit", "module" or "system".
An electronic device 600 according to such an embodiment of the present invention is described below with reference to Fig. 12. The electronic device 600 shown in Fig. 12 is merely an example and imposes no limitation on the functions or scope of use of embodiments of the invention.
As shown in Fig. 12, the electronic device 600 takes the form of a general-purpose computing device. Its components may include, but are not limited to, at least one processing unit 610, at least one storage unit 620, a bus 630 connecting the different system components (including the storage unit 620 and the processing unit 610), and a display unit 640.
The storage unit stores program code executable by the processing unit 610, causing the processing unit 610 to perform the steps of the various exemplary embodiments of the invention described in the image sharpness detection method section of this specification. For example, the processing unit 610 may perform the steps shown in Fig. 1.
The storage unit 620 may include readable media in the form of volatile storage units, such as a random access memory (RAM) 6201 and/or a cache 6202, and may further include a read-only memory (ROM) 6203.
The storage unit 620 may also include a program/utility 6204 having a set of (at least one) program modules 6205, such program modules 6205 including but not limited to an operating system, one or more application programs, other program modules and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
The bus 630 may represent one or more of several types of bus structures, including a storage-unit bus or storage-unit controller, a peripheral bus, an accelerated graphics port, and a processing-unit or local bus using any of a variety of bus architectures.
The electronic device 600 may also communicate with one or more external devices 700 (e.g. a keyboard, a pointing device or a Bluetooth device), with one or more devices that enable a user to interact with the electronic device 600, and/or with any device (e.g. a router or a modem) that enables the electronic device 600 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 650. The electronic device 600 may further communicate with one or more networks (e.g. a local area network (LAN), a wide area network (WAN) and/or a public network such as the Internet) via a network adapter 660, and the network adapter 660 may communicate with the other modules of the electronic device 600 via the bus 630. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the electronic device 600, including but not limited to microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives and data backup storage systems.
With the image sharpness detection device of the present invention, when its processor executes the executable instructions in the memory to perform the steps of the image sharpness detection method, sub-image blocks of a specified size are first extracted from the image to be detected and merged before sharpness detection, and the merged image, rather than the image to be detected, is fed into the sharpness detection model, whose detected sharpness value for the merged image is taken as the sharpness value of the image to be detected. The benefit is that, while the model's input, the merged image, has a uniform size, no constraint is placed on the original resolution of the image to be detected: original images of any resolution yield merged images of identical size after the sub-image block extraction and merging steps, so the method applies to sharpness detection of images of different resolutions. Furthermore, no image scaling is needed. Because the merged image carries information from multiple sub-image blocks scattered across the image to be detected, its sharpness value represents that of the whole image better than prior-art approaches that select only a local region, so detecting the sharpness of the image merged from multiple sub-image blocks achieves accurate sharpness detection of the image to be detected. And since the merged image reflects the sharpness of the whole image to be detected rather than of a local region, the choice of image region cannot distort the sharpness value for partially blurred images, so partially blurred images, for example images with background bokeh, can be assessed accurately.
Further, because the merged image is smaller than the original image, the feature extraction burden on the sharpness detection model is reduced, the processor overhead of the image sharpness detection device is lowered, and the efficiency of image sharpness detection is improved.
An embodiment of the present invention further provides a computer-readable storage medium storing a program which, when executed, implements the steps of the image sharpness detection method. In some possible embodiments, aspects of the invention may also be implemented as a program product comprising program code which, when the program product runs on a terminal device, causes the terminal device to perform the steps of the various exemplary embodiments of the invention described in the image sharpness detection method section of this specification.
Referring to Fig. 13, a program product 800 for implementing the above method according to an embodiment of the present invention is described. It may take the form of a portable compact disc read-only memory (CD-ROM) containing program code and may run on a terminal device such as a personal computer. The program product of the invention is not limited to this, however; in this document, a readable storage medium may be any tangible medium containing or storing a program that can be used by, or in combination with, an instruction execution system, apparatus or device.
The program product may employ any combination of one or more readable media. A readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example but not limited to, an electric, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination thereof. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
The computer-readable storage medium may include a data signal propagated in baseband or as part of a carrier wave, carrying readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof. A readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate or transmit a program for use by, or in combination with, an instruction execution system, apparatus or device. Program code contained on a readable medium may be transmitted by any appropriate medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination of the above.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++ as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a standalone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server. Where a remote computing device is involved, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
With the computer-readable storage medium of the present invention, when the program stored on the medium is executed to implement the steps of the image sharpness detection method, sub-image blocks of a specified size are first extracted from the image to be detected and merged before sharpness detection, and the merged image, rather than the image to be detected, is fed into the sharpness detection model, whose detected sharpness value for the merged image is taken as the sharpness value of the image to be detected. The benefit is that, while the model's input, the merged image, has a uniform size, no constraint is placed on the original resolution of the image to be detected: original images of any resolution yield merged images of identical size after the sub-image block extraction and merging steps, so the method applies to sharpness detection of images of different resolutions. Furthermore, no image scaling is needed. Because the merged image carries information from multiple sub-image blocks scattered across the image to be detected, its sharpness value represents that of the whole image better than prior-art approaches that select only a local region, so detecting the sharpness of the image merged from multiple sub-image blocks achieves accurate sharpness detection of the image to be detected. And since the merged image reflects the sharpness of the whole image to be detected rather than of a local region, the choice of image region cannot distort the sharpness value for partially blurred images, so partially blurred images, for example images with background bokeh, can be assessed accurately.
Further, because the merged image is smaller than the original image, the feature extraction burden on the sharpness detection model is reduced, the overhead of the execution system when the computer-readable storage medium is executed is lowered, and the efficiency of image sharpness detection is improved.
The above is a further detailed description of the present invention in conjunction with specific preferred embodiments, and the specific implementation of the invention should not be considered limited to these descriptions. A person of ordinary skill in the art to which the invention pertains may make several simple deductions or substitutions without departing from the concept of the invention, all of which shall be deemed to fall within the protection scope of the invention.

Claims (10)

  1. An image sharpness detection method, characterized by comprising the following steps:
    extracting a plurality of sub-image blocks of a preset size from an image to be detected;
    merging the sub-image blocks to obtain a merged image;
    feeding the merged image into a trained sharpness detection model, and taking the sharpness value output by the sharpness detection model as the sharpness value of the image to be detected.
  2. The image sharpness detection method according to claim 1, characterized in that the step of extracting a plurality of sub-image blocks of a preset size from an image to be detected comprises extracting M×N sub-image blocks from the image to be detected, the sub-image blocks being arranged in N columns along a first direction and M rows along a second direction, each sub-image block having a length w along the first direction and a length h along the second direction.
  3. The image sharpness detection method according to claim 2, characterized in that, among the M×N sub-image blocks of a preset size, adjacent sub-image blocks along the first direction are all separated by one same spacing Sw, and adjacent sub-image blocks along the second direction are all separated by one same spacing Sh.
  4. The image sharpness detection method according to claim 3, characterized in that merging the sub-image blocks comprises merging the sub-image blocks along the second direction to obtain a merged image whose length along the first direction is w and whose length along the second direction is M×N×h.
  5. The image sharpness detection method according to claim 4, characterized in that extracting the M×N sub-image blocks from the image to be detected comprises the following steps:
    determining the start position coordinates (Ow, Oh) of the sub-image block that is first along the first direction and first along the second direction;
    calculating the spacing Sw between adjacent sub-image blocks along the first direction and the spacing Sh between adjacent sub-image blocks along the second direction;
    determining the start coordinates (Pj, Pi) of the sub-image block that is j-th along the first direction and i-th along the second direction, j∈(1,N), i∈(1,M);
    extracting from the image to be detected, starting at the start position of each sub-image block, a region of length w along the first direction and length h along the second direction as the corresponding sub-image block;
    and merging the sub-image blocks to obtain a merged image comprises computing the pixel value of each pixel (x′, y′) of the merged image, x′∈(1,w), y′∈(1,M×N×h), by the following steps:
    computing the index j along the first direction and the index i along the second direction of the sub-image block corresponding to each pixel (x′, y′) of the merged image;
    computing the pixel value I′(x′, y′) of each pixel (x′, y′) of the merged image.
  6. The image sharpness detection method according to claim 5, characterized in that the sharpness detection model comprises an input layer, M×N feature extraction layers and a fully connected layer, the outputs of the feature extraction layers being connected to the input of the fully connected layer;
    feeding the merged image into the trained sharpness detection model comprises inputting the merged image into the sharpness detection model, the input layer splitting the merged image into M×N sub-image blocks, each sub-image block being fed into one of the feature extraction layers, and the fully connected layer outputting the sharpness value.
  7. The image sharpness detection method according to claim 1, characterized in that the method further comprises training the sharpness detection model by the following steps:
    collecting a plurality of training images and a sharpness value label for each training image;
    extracting a plurality of sub-image blocks of a preset size from each training image;
    merging the sub-image blocks of each training image to obtain a merged image corresponding to each training image;
    adding the merged image and the sharpness value label corresponding to each training image to a training set, and training the sharpness detection model with the training set.
  8. An image sharpness detection system, characterized by being applied to the image sharpness detection method according to any one of claims 1 to 7, the system comprising:
    a sub-image block partitioning module, configured to extract a plurality of sub-image blocks of a preset size from an image to be detected;
    a sub-image block merging module, configured to merge the sub-image blocks to obtain a merged image;
    a sharpness detection module, configured to feed the merged image into a trained sharpness detection model and take the sharpness value output by the sharpness detection model as the sharpness value of the image to be detected.
  9. An image sharpness detection device, characterized by comprising:
    a processor;
    a memory storing instructions executable by the processor;
    wherein the processor is configured to perform, by executing the executable instructions, the steps of the image sharpness detection method according to any one of claims 1 to 7.
  10. A computer-readable storage medium for storing a program, characterized in that the program, when executed, implements the steps of the image sharpness detection method according to any one of claims 1 to 7.
PCT/CN2020/121508 2020-01-17 2020-10-16 Image sharpness detection method, system, device and storage medium WO2021143233A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010053986.4A CN111311543B (zh) 2020-01-17 2020-01-17 Image sharpness detection method, system, device and storage medium
CN202010053986.4 2020-01-17

Publications (1)

Publication Number Publication Date
WO2021143233A1 true WO2021143233A1 (zh) 2021-07-22

Family

ID=71148320

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/121508 WO2021143233A1 (zh) 2020-01-17 2020-10-16 图像清晰度检测方法、系统、设备及存储介质

Country Status (2)

Country Link
CN (1) CN111311543B (zh)
WO (1) WO2021143233A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115102892A (zh) * 2022-05-18 2022-09-23 慧之安信息技术股份有限公司 Simulation test method based on the GA/T 1400 protocol

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111311543B (zh) * 2020-01-17 2022-09-02 苏州科达科技股份有限公司 Image sharpness detection method, system, device and storage medium
CN112135048B (zh) * 2020-09-23 2022-02-15 创新奇智(西安)科技有限公司 Automatic focusing method and device for a target object
CN112367518B (zh) * 2020-10-30 2021-07-13 福州大学 Image quality evaluation method for UAV inspection images of power transmission lines
CN112541435B (zh) * 2020-12-14 2023-03-28 贝壳技术有限公司 Image processing method, device and storage medium
CN113392241B (zh) * 2021-06-29 2023-02-03 中海油田服务股份有限公司 Method, device, medium and electronic apparatus for recognizing the sharpness of well-logging images
CN113486821B (zh) * 2021-07-12 2023-07-04 西安电子科技大学 No-reference video quality assessment method based on a temporal-domain pyramid
CN113627314A (zh) 2021-08-05 2021-11-09 Oppo广东移动通信有限公司 Face image blur detection method and device, storage medium and electronic apparatus

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101867805A (zh) * 2009-04-20 2010-10-20 上海安维尔信息科技有限公司 Method for improving motion detection efficiency using an alert grid
CN102013017A (zh) * 2010-11-26 2011-04-13 华中科技大学 Coarse scene classification method for high-resolution remote sensing images
CN103793918A (zh) * 2014-03-07 2014-05-14 深圳市辰卓科技有限公司 Image sharpness detection method and device
CN105809704A (zh) * 2016-03-30 2016-07-27 北京小米移动软件有限公司 Method and device for recognizing image sharpness
CN109948625A (zh) * 2019-03-07 2019-06-28 上海汽车集团股份有限公司 Text image sharpness evaluation method and system, and computer-readable storage medium
US20190378247A1 (en) * 2018-06-07 2019-12-12 Beijing Kuangshi Technology Co., Ltd. Image processing method, electronic device and non-transitory computer-readable recording medium
CN110572579A (zh) * 2019-09-30 2019-12-13 联想(北京)有限公司 Image processing method, device and electronic apparatus
CN111311543A (zh) 2020-01-17 2020-06-19 苏州科达科技股份有限公司 Image sharpness detection method, system, device and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110689524B (zh) * 2019-09-04 2022-04-22 华南理工大学 No-reference online image sharpness evaluation method and system



Also Published As

Publication number Publication date
CN111311543B (zh) 2022-09-02
CN111311543A (zh) 2020-06-19


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20913666; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20913666; Country of ref document: EP; Kind code of ref document: A1)