CN116012323A - Image definition calculation method, image processing model training method and device - Google Patents

Image definition calculation method, image processing model training method and device

Info

Publication number
CN116012323A
Authority
CN
China
Prior art keywords
image
edge
hypotenuse
sharpness
training
Prior art date
Legal status
Pending
Application number
CN202211680193.0A
Other languages
Chinese (zh)
Inventor
张湾湾
敦婧瑜
王亚运
薛佳乐
李轶锟
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202211680193.0A
Publication of CN116012323A
Legal status: Pending

Abstract

The invention relates to the technical field of image processing, and in particular discloses an image definition (sharpness) calculation method, an image processing model training method, and an image processing model training device. The image sharpness calculation method includes the following steps: performing edge extraction on a target image to obtain an edge image; selecting an edge region based on the edge image; straightening edge lines in the edge region image in the target image to obtain a hypotenuse (slanted-edge) image; and calculating the sharpness of the target image based on the hypotenuse image. By this method, the problem of poor universality of sharpness grading when calculating image sharpness across different scenes can be solved.

Description

Image definition calculation method, image processing model training method and device
Technical Field
The present invention relates to the field of image processing technologies, and in particular to an image sharpness calculation method, an image processing model training method, and an image processing model training device.
Background
Currently, in some training methods for image processing models, image sharpness is used as auxiliary training data. When image sharpness is used as auxiliary training data, the sharpness of the images may be graded to determine the sharpness interval in which each image of the training set falls. At present, image sharpness is generally characterized by edge gradient information and its feature values; however, if the image scene changes, the magnitude of the sharpness determined in this way can change greatly. As a result, multiple images from the same scene can share one sharpness grading scale, while images from different scenes cannot use the same grading scale. In addition, the training data of an image processing model in practice often come from many different scenes, so when the image sharpness determined by current calculation methods is used as auxiliary training data, considerable manpower and time must be spent grading the sharpness of each scene separately.
Disclosure of Invention
The invention mainly solves the technical problem of providing an image sharpness calculation method, an image processing model training method, and an image processing model training device, which can address the poor universality of sharpness grading when the image sharpness determined by current calculation methods is used as auxiliary training data for an image processing model.
In order to solve the technical problems, the invention adopts a technical scheme that: provided is an image sharpness calculation method, including: performing edge extraction on the target image to obtain an edge image; selecting an edge region based on the edge image; straightening edge lines in an edge area image in the target image to obtain a hypotenuse image; based on the hypotenuse image, the sharpness of the target image is calculated.
In one embodiment, the straightening process for edge lines in the edge region image in the target image includes: moving the pixel points on the edge line onto the circumscribed line of the edge line, so as to straighten the edge line into the circumscribed line.
In one embodiment, there is only one edge line in the edge region; the length of the edge line in the edge region is greater than or equal to a first threshold; the distance between the midpoint of the edge line in the edge region and the preset boundary of the edge region is greater than or equal to a second threshold.
In one embodiment, the edge lines in the edge region are strong edge lines.
In one embodiment, calculating sharpness of the target image based on the hypotenuse image includes: rotating the hypotenuse image to enable the angle of the straightened edge line in the rotated hypotenuse image to be within a preset range; the sharpness of the target image is calculated based on the rotated hypotenuse image.
In one embodiment, calculating the sharpness of the target image based on the rotated hypotenuse image includes: cropping the rotated hypotenuse image to obtain a knife-edge image, wherein the midpoint of the straightened edge line is located at the centroid of the knife-edge image; the distance between the centroid and the boundary of the knife-edge image is greater than or equal to a third threshold; and the minimum distance between a preset intersection point and the vertices of the knife-edge image is greater than or equal to a fourth threshold, the preset intersection point being the intersection of the straightened edge line and the boundary.
In one embodiment, the number of edge regions selected based on the edge image is at least two; calculating the sharpness of the target image based on the hypotenuse image includes: calculating the sharpness of the hypotenuse image corresponding to each edge region; and fusing the sharpness of the hypotenuse images corresponding to all edge regions to obtain the sharpness of the target image.
In order to solve the technical problems, the invention adopts another technical scheme that: provided is a training method of an image processing model, comprising the following steps: determining the credibility of each image of the training image set;
processing each image by using an image processing model to obtain a processing result of each image; calculating a loss of each image based on a processing result of each image; weighting the losses of at least part of the images in the training image set to obtain total losses, wherein the weight of each image is positively correlated with the credibility of each image; the image processing model is trained based on the total loss.
In one embodiment, the credibility of each image is determined by the sharpness of each image.
In one embodiment, determining the credibility of each image of the training image set includes: calculating the sharpness of each image using the image sharpness calculation method of any one of the above.
In an embodiment, the weight of each image is a weight corresponding to a definition interval in which the definition of each image is located.
In order to solve the technical problems, the invention adopts another technical scheme that: there is provided a computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the image sharpness calculation method of any of the above when executing the computer program; or a training method of an image processing model as in any of the above.
In order to solve the technical problems, the invention adopts another technical scheme: a computer readable storage medium is provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the image sharpness calculation method of any one of the above, or the training method of an image processing model of any one of the above.
The beneficial effects of the invention are as follows: different from the prior art, the method obtains an edge image by performing edge extraction on the target image; selects an edge region based on the edge image; straightens edge lines in the edge region image in the target image to obtain a hypotenuse image; and calculates the sharpness of the target image based on the hypotenuse image. As a result, when the scene changes, the fluctuation of the computed image sharpness is small. By this method, the problem of poor universality of sharpness grading when the image sharpness determined by current calculation methods is used as auxiliary training data for an image processing model can be solved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. Wherein:
FIG. 1 is a flowchart of an embodiment of a method for calculating image sharpness according to the present invention;
FIG. 2 is a schematic diagram of another embodiment of the image sharpness calculation method of the present invention;
FIG. 3 is a flow chart of an embodiment of an image processing model training method according to the present invention;
FIG. 4 is a schematic diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the specific embodiments described herein are for purposes of illustration only and are not limiting. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present application are shown in the drawings. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
Referring to fig. 1, fig. 1 is a flowchart illustrating an embodiment of a method for calculating image sharpness according to the present invention, the method includes:
step 101: and extracting the edge of the target image to obtain an edge image.
Optionally, edge extraction may be performed on the target image with an edge detection algorithm to obtain an edge image. The edge detection algorithm may be a Sobel operator, a Prewitt operator, a Canny operator, or the like; the type of algorithm is not limited here.
Alternatively, an image gradient method may be used: compute the gradient of each pixel in the image, build a gradient histogram, and separate edges from smooth regions with a threshold. Edge extraction may also be performed on the target image in other ways, as long as an edge image of the target image can be obtained.
Before edge extraction, the target image may also be filtered to remove noise and reduce unnecessary interference. For example, a 3×3 Gaussian filter may be applied to the target image. Filtering the target image to remove noise reduces the influence of noise on the edge extraction and improves the edge detection accuracy.
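A minimal sketch of this preprocessing and edge-extraction step is shown below (assuming OpenCV is available; the 3×3 kernel follows the example above, while the Canny thresholds are illustrative values, not values from the patent):

```python
import cv2

def extract_edge_image(target_gray):
    """Step 101 sketch: denoise the grayscale target image, then extract an edge image."""
    smoothed = cv2.GaussianBlur(target_gray, (3, 3), 0)  # 3x3 Gaussian filter to suppress noise
    edges = cv2.Canny(smoothed, 50, 150)                 # Canny operator; Sobel or Prewitt would also work
    return edges
```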
The target image may be acquired before edge extraction is performed on it. In one implementation, the target image may be acquired from surveillance footage, so that edge extraction is performed on the acquired target image in step 101.
The acquired image may be a gray-scale image, or may be a color image such as an RGB image or a YUV image.
If the acquired image is a gray scale image, the acquired image may be directly used as the target image.
When the acquired image is an RGB image, it needs to be converted to grayscale, and the grayscale image is used as the target image, so that edge extraction can be performed on it in step 101. The grayscale conversion may use the component method, the maximum-value method, the average method, the weighted-average method, or the like. Referring to FIG. 2, FIG. 2 illustrates an embodiment of the image sharpness calculation method of the present invention. Taking a vehicle image as an example: for the acquired RGB vehicle image (a), grayscale processing is performed to obtain a grayscale image (b) of the vehicle image; edge extraction is then performed on the grayscale image (b) to obtain an edge image (c) of the target image.
When the acquired image is a YUV image, the Y component of the YUV image may be taken as the target image.
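The following sketch illustrates the conversions described above; an RGB channel order and the weighted-average coefficients (0.299/0.587/0.114, the common BT.601 choice) are assumptions, since the text only names the candidate methods:

```python
import numpy as np

def to_target_image(img, color_space="RGB"):
    """Convert an acquired image to the grayscale target image used in step 101."""
    if color_space == "GRAY":
        return img                                    # already grayscale: use it directly
    if color_space == "RGB":
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        # weighted-average method (BT.601 weights); component/maximum/average methods are alternatives
        return (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)
    if color_space == "YUV":
        return img[..., 0]                            # take the Y (luminance) component
    raise ValueError("unsupported color space")
```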
Step 102: and selecting an edge area based on the edge image.
After the edge image is acquired, region selection can be performed on the edge image.
In a possible implementation manner, the edge image may be subjected to region division to obtain a plurality of candidate regions; an appropriate edge region is then selected from the plurality of candidate regions.
In another possible embodiment, the regions of the edge image where edge lines are present are cropped out, and a suitable edge region is selected from the cropped regions.
Optionally, a suitable edge region means that the selected edge region meets a first preset condition. The first preset condition may specifically be as follows, although it is not limited thereto.
For example, as shown in FIG. 2 (e), there is only one edge line in the selected edge region. The length of the edge line in the edge region is greater than or equal to a first threshold, and/or the distance between the midpoint of the edge line in the edge region and a preset boundary of the edge region is greater than or equal to a second threshold.
The preset boundary may be any boundary of the edge region. Alternatively, the preset boundary may be determined based on the position of the edge line in the edge region. For example, if the absolute value of the angle between the tangent line at the midpoint of the edge line and the horizontal is greater than a preset angle, the preset boundary is the upper and/or lower boundary of the edge region; otherwise, it is the left and/or right boundary. The preset angle may be set according to the actual situation and is not limited here; for example, it may be 30° or 40°.
In an embodiment, the first threshold and the second threshold may be fixed thresholds set in advance. For example, the fixed threshold may be empirically set by the user.
In another embodiment, the first threshold and the second threshold may be set according to practical situations, which is not limited herein. For example, the first threshold may be 100 pixels and the second threshold may be 35 pixels.
Furthermore, the edge lines in the edge region may be strong edge lines. Compared with a weak edge line, the difference between the pixel values on the two sides of a strong edge line is relatively large, so selecting strong edge lines allows the sharpness of the image to be calculated accurately.
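A possible check of the first preset condition is sketched below; the representation of an edge line as an ordered array of pixel coordinates, the tangent estimation, and the helper name are assumptions, while T1 = 100 pixels, T2 = 35 pixels, and the 30° preset angle follow the example values given above:

```python
import numpy as np

def satisfies_first_condition(edge_lines, region_h, region_w,
                              t1=100, t2=35, preset_angle=30.0):
    """Check one candidate region against the first preset condition."""
    if len(edge_lines) != 1:                      # exactly one edge line in the region
        return False
    line = np.asarray(edge_lines[0])              # (N, 2) ordered (row, col) points of the edge line
    if len(line) < t1:                            # edge length (number of points) >= first threshold T1
        return False
    mid = line[len(line) // 2]
    # estimate the direction of the tangent near the midpoint
    p0 = line[max(0, len(line) // 2 - 5)]
    p1 = line[min(len(line) - 1, len(line) // 2 + 5)]
    angle = np.degrees(np.arctan2(abs(float(p1[0] - p0[0])),
                                  abs(float(p1[1] - p0[1])) + 1e-9))
    if angle > preset_angle:                      # steep edge: preset boundary is top/bottom
        dist = min(mid[0], region_h - 1 - mid[0])
    else:                                         # flat edge: preset boundary is left/right
        dist = min(mid[1], region_w - 1 - mid[1])
    return dist >= t2                             # midpoint-to-boundary distance >= second threshold T2
```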
Step 103: and straightening edge lines in an edge area image in the target image to obtain a hypotenuse image.
After the selected edge area is determined based on the steps, edge lines in the edge area image in the target image can be straightened to obtain a hypotenuse image, so that the definition of the target image can be calculated based on the hypotenuse image.
In one possible embodiment, the edge line may be straightened by a translation process. For example, the pixel points on the edge line may be moved onto the circumscribed line of the edge line, so as to straighten the edge line into the circumscribed line. The circumscribed line of the edge line may refer to the tangent line at a preset point on the edge line, where the preset point may be the midpoint of the edge line, a point adjacent to the midpoint, or an end point of the edge line; this is not limited here.
Specifically, taking the preset boundary as the upper and/or lower boundary of the edge region as an example: the slope and intercept of the tangent line at the midpoint of the edge line in the edge region image are calculated. With the midpoint of the edge line as a reference, the pixels on the edge line are traversed; for each pixel, the target coordinate on the tangent line is computed from the line equation, the horizontal distance d between the target coordinate and the pixel's initial coordinate is obtained, and the pixel is shifted horizontally by d. After this move, the pixel points of the edge line lie on the circumscribed straight line of the original edge line, yielding the hypotenuse image, so that the sharpness of the target image can later be calculated from it.
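A sketch of this row-wise translation is given below, assuming a roughly vertical edge described by one detected edge column per row and a midpoint tangent line of the form col = k·row + b (np.roll is used for brevity; a faithful implementation would instead fill the vacated pixels with the neighbouring value, as described later in this section):

```python
import numpy as np

def straighten_edge(region, edge_cols, k, b):
    """Straighten a roughly vertical edge by translating each row horizontally.

    region    : 2D grayscale patch containing the edge
    edge_cols : detected edge column in each row (one edge point per row assumed)
    k, b      : slope and intercept of the tangent at the edge midpoint, as col = k * row + b
    """
    out = np.empty_like(region)
    for r in range(region.shape[0]):
        target_col = k * r + b                     # where the edge point of this row should end up
        d = int(round(target_col - edge_cols[r]))  # horizontal distance d for this row
        out[r] = np.roll(region[r], d)             # translate the whole row by d pixels
    return out                                     # hypotenuse (slanted-edge) image
```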
Taking the vehicle image as an example again: after the grayscale image (b) of the vehicle image is obtained and an edge region is selected based on the edge image (c), the edge line in the edge region image (e) of the grayscale image (b) is straightened. For the specific method, refer to step 103; it is not repeated here.
As shown in FIG. 2 (e), since the edge image (c) is obtained from the grayscale image (b) of the vehicle image, the edge region selected from the edge image (c) can be mapped back into the grayscale image (b); the edge region image in the grayscale image of the vehicle image can therefore be determined from the selected edge region.
It should be noted that, as shown in FIG. 2 (e)-(f), after the edge line in the edge region image of the grayscale image is straightened, the pixel points on the edge line move onto the circumscribed line of the original edge line, and the surrounding pixel values move with them. The pixel values of the pixels between the straightened edge line and the original edge line are set to a preset pixel value, which equals the value of the neighboring pixel on the side of the original edge line facing away from the straightened edge line.
Step 104: based on the hypotenuse image, the sharpness of the target image is calculated.
After the hypotenuse image is obtained in the above step, the sharpness of the target image can be calculated from it.
Wherein the number of obtained hypotenuse images is not limited, for example the number of obtained hypotenuse images may be one or more.
When the number of the obtained hypotenuse images is one, the sharpness of the hypotenuse image is directly calculated, and the sharpness of the hypotenuse image is taken as the sharpness of the target image.
When the number of the obtained hypotenuse images is multiple, firstly calculating the definition of the hypotenuse image corresponding to each edge area; and fusing the definition of the hypotenuse images corresponding to all the edge areas, so as to calculate the definition of the target image.
In one possible approach, the sharpness of the hypotenuse image may be calculated with the MTF50 algorithm. In the related art, when the MTF algorithm is used to calculate the sharpness of a camera image, the region of interest is determined from the edge center-point coordinates of the black test block in a professional test chart, the constraint values of the region of interest, and the known slope of each edge of the region; a region of interest at a certain angle to the black test block is then obtained with a linear equation, and this region must lie within a preset angular range to satisfy the requirements the MTF algorithm places on the region of interest. In the present application, the hypotenuse image is obtained through the above steps, so when the MTF algorithm is subsequently used to calculate image sharpness, the sharpness MTF value can be calculated on ordinary images without relying on a professional test chart, which gives good scene universality.
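As an illustration of how an MTF50 value could be obtained from such a knife-edge image, here is a simplified sketch (it averages rows directly instead of performing the full slanted-edge super-sampling, so it illustrates the idea rather than reproducing the patent's exact procedure):

```python
import numpy as np

def mtf50(knife_edge_img):
    """Estimate MTF50 from a knife-edge image whose straightened edge is near vertical."""
    esf = knife_edge_img.astype(np.float64).mean(axis=0)  # edge spread function across the edge
    lsf = np.diff(esf)                                     # line spread function
    lsf = lsf * np.hanning(len(lsf))                       # window to reduce truncation ripple
    mtf = np.abs(np.fft.rfft(lsf))
    mtf = mtf / mtf[0] if mtf[0] != 0 else mtf             # normalise so that MTF(0) = 1
    freqs = np.fft.rfftfreq(len(lsf), d=1.0)               # spatial frequency in cycles per pixel
    below = np.where(mtf <= 0.5)[0]                        # first frequency where contrast falls to 50%
    return freqs[below[0]] if len(below) else freqs[-1]
```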
In another possible approach, the sharpness of the hypotenuse image may be calculated with an SFR algorithm. The related art relies on a professional test chart when calculating the sharpness of a camera image with the SFR algorithm. Because the hypotenuse image is obtained through the above steps, the sharpness SFR value can subsequently be calculated on ordinary images without relying on a professional test chart, which gives good scene universality.
In another implementation, the hypotenuse image may be input into a trained convolutional neural network model to obtain a sharpness score for the hypotenuse image, thereby obtaining the sharpness of the target image.
In addition, before the sharpness of the hypotenuse image is calculated, image normalization may be performed so that sharpness is computed at the same resolution. This makes the method suitable for images captured by different devices in different scenes, allows image sharpness to be evaluated against the same standard, and broadens the industrial applicability and evaluation accuracy of the sharpness calculation. The normalization method includes, but is not limited to, proportional scaling, or mapping the pixels onto a blank file of the same size, so as to normalize the image size.
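A small sketch of the two normalization options mentioned above, for a single-channel image (the target resolution is an illustrative assumption):

```python
import cv2
import numpy as np

def normalize_size(gray_img, target_hw=(1080, 1920)):
    """Scale the image proportionally and map it onto a blank canvas of unified size."""
    th, tw = target_hw
    scale = min(th / gray_img.shape[0], tw / gray_img.shape[1])   # equal-proportion scaling factor
    resized = cv2.resize(gray_img, (int(gray_img.shape[1] * scale),
                                    int(gray_img.shape[0] * scale)))
    canvas = np.zeros((th, tw), dtype=gray_img.dtype)             # blank file of the unified size
    canvas[:resized.shape[0], :resized.shape[1]] = resized        # map the pixels onto it
    return canvas
```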
Optionally, before the sharpness calculation, it may also be determined whether the hypotenuse image obtained through the above steps satisfies a second preset condition; if not, the hypotenuse image is processed so that the processed hypotenuse image satisfies the second preset condition, thereby meeting the requirements that algorithms such as MTF/SFR place on the image used for sharpness calculation.
For example, it may be judged whether the angle of the straightened edge line in the hypotenuse image falls within a preset range (for example, [60°, 120°] or [75°, 115°]); if so, the sharpness of the target image can be calculated directly from the hypotenuse image. It should be noted that the preset range does not include exactly 90°. If the angle of the straightened edge line is not within the preset range, the hypotenuse image needs to be rotated so that the angle of the straightened edge line in the rotated image falls within the preset range, and the sharpness of the target image is then calculated from the rotated hypotenuse image.
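A sketch of this rotation step, assuming the current edge angle is already known; the target angle and the OpenCV usage are illustrative choices, not prescribed by the patent:

```python
import cv2

def rotate_edge_into_range(hyp_img, edge_angle_deg, lo=60.0, hi=120.0, target=75.0):
    """Rotate the hypotenuse image so the straightened edge angle lies in [lo, hi] but not at 90 deg."""
    if lo <= edge_angle_deg <= hi and edge_angle_deg != 90.0:
        return hyp_img                                   # already within the preset range
    h, w = hyp_img.shape[:2]
    # rotate about the image centre by the difference to an acceptable target angle
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), edge_angle_deg - target, 1.0)
    return cv2.warpAffine(hyp_img, m, (w, h))
```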
In addition, the second preset condition may include not only the above angle condition, but also a midpoint-position condition and/or an intersection-position condition for the straightened edge line in the hypotenuse image. When the midpoint-position condition and/or the intersection-position condition is not met, the rotated hypotenuse image can be cropped to obtain a knife-edge image for which these conditions are met. Meeting the midpoint-position and/or intersection-position conditions may specifically mean: the midpoint of the straightened edge line is located at the centroid of the knife-edge image; the distance between the centroid and the boundary of the knife-edge image is greater than or equal to a third threshold; and the minimum distance between a preset intersection point and the vertices of the knife-edge image is greater than or equal to a fourth threshold, the preset intersection point being the intersection of the straightened edge line and the boundary.
In an embodiment, the third threshold and the fourth threshold may be fixed thresholds set in advance. For example, the fixed threshold may be empirically set by the user.
In another embodiment, the third threshold and the fourth threshold may be set according to actual situations, and are not limited herein. For example, the third threshold may be 20 pixels and the fourth threshold may be 5 pixels.
In the present embodiment, an edge image is obtained by performing edge extraction on the target image; an edge region is selected based on the edge image; edge lines in the edge region image in the target image are straightened to obtain a hypotenuse image; and the sharpness of the target image is calculated based on the hypotenuse image. When the image scene changes, the sharpness determined in this way fluctuates only slightly, so when it is used as auxiliary training data, sharpness grading does not need to be carried out separately for each scene at extra manpower and time cost, and the grading is unified across scenes. Compared with the original camera-image MTF/SFR calculation methods, this image sharpness calculation method does not depend on a professional test chart and can calculate the sharpness MTF value on ordinary images, so the sharpness grading has good scene universality. In this way, the present application can solve the problem of poor universality in image sharpness calculation.
Referring to fig. 3, fig. 3 is a flowchart illustrating an embodiment of an image processing model training method according to the present application.
Step 301: the credibility of each image of the training image set is determined.
In an embodiment, the image sharpness calculation method of any one of the embodiments may be used to calculate the sharpness of each image in the training image set, and the sharpness of each image in the training image set is used as the credibility of each image in the training image set. Of course, the sharpness of each image may be determined by other methods. In this embodiment, in the training process, the credibility of the training image is represented by using the definition of the training image in the training image set, and the model is guided to train based on the definition of the training image, so that the image processing model pays more attention to the data with high definition, and the influence of the data with low definition on the image processing model is properly reduced, so that the reliability of the model after training is higher.
In another embodiment, the source credibility of each image in the training image set may be used as the credibility of each image in the training image set.
Alternatively, when the credibility of each image in the training image set is obtained by using the credibility of the source, the credibility of each image may be determined according to the source of each image. For example, an image obtained by direct photographing is more reliable than an image downloaded on the internet.
In yet another embodiment, quality parameters such as sharpness, contrast, or data-source reliability may be determined for each image in the training image set; a quality-parameter grading standard is then determined based on the distribution of the quality parameters over all images in the training image set; and, based on this grading standard, the credibility label corresponding to the quality-parameter interval in which each image falls is used as the credibility of that image.
Alternatively, the credibility label may take values in the range [0, 1]. In other embodiments, the credibility label may also take values in the range [0, 2].
The credibility label is positively correlated with the quality parameter of the training image. For example, the reliability label corresponding to the quality parameter interval of (1,29) is 0.1, and the reliability label corresponding to the quality parameter interval of (30, 60) is 0.2.
For example, the sharpness of each image in the training image set may be determined; the sharpness grading standard for the current training images is then determined from the current sharpness distribution of the whole training image set; and, based on this grading standard, the credibility label corresponding to the sharpness interval in which each image falls is used as the credibility of that image.
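One way to derive such labels from the sharpness distribution is sketched below; quantile-based interval boundaries and the specific label values are assumptions, since the text only requires the label to grow with the quality parameter:

```python
import numpy as np

def credibility_labels(sharpness_values, num_intervals=5):
    """Map each image's sharpness to a credibility label in (0, 1] via distribution-based intervals."""
    values = np.asarray(sharpness_values, dtype=np.float64)
    # interval boundaries taken from quantiles of the whole training set
    edges = np.quantile(values, np.linspace(0.0, 1.0, num_intervals + 1)[1:-1])
    bins = np.digitize(values, edges)            # index of the sharpness interval for each image
    return (bins + 1) / num_intervals            # higher sharpness interval -> larger label
```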
Step 302: and processing each image by using the image processing model to obtain a processing result of each image.
Specifically, each training image in the training image set is input into an image processing model, so that the image processing is carried out on each training image through the image processing model, and a processing result of each training image is obtained. The image processing model can be applied to image processing such as object classification, image detection, image recognition, image segmentation and the like.
Furthermore, an image processing model may be built prior to step 302. The image processing model may be any type of neural network model capable of performing image processing, for example, the image processing model may be a Deep Neural Network (DNN) model, such as may include a Convolutional Neural Network (CNN) model, a Recurrent Neural Network (RNN), and so on.
After the network structure of the image processing model is built, network parameters can be initialized, so that the image processing model after the network parameters are initialized can be trained through the training method of the image processing model.
Step 303: the loss of each image is calculated based on the processing result of each image.
Specifically, after the processing result of each image is obtained through the above steps, the loss of each image may be calculated based on that processing result.
Illustratively, the penalty of each image may be calculated by the task tag and the processing result of each image.
The loss of each image can be calculated by a variance loss function, a square difference loss function, an L2 distance and other loss functions.
Step 304: the loss of at least part of the images in the training image set is weighted to obtain the total loss.
Specifically, the loss of at least some of the images in the training image set is weighted to yield a total loss. Wherein the weighting means may be an average weighting.
Alternatively, the calculation formula of the total loss is as follows:
loss_overall = (1/N) · Σ_{i=1}^{N} w_i · loss_i

where loss_overall is the total loss, w_i is the weight of the i-th image, loss_i is the loss of the i-th image, and N is the total number of images.
Wherein the weight of each image is positively correlated with the confidence level of each image, i.e., the higher the confidence level the higher the weight of the image in the loss function. Otherwise, the weight of the image with low reliability in the loss function is low, so that the influence of the data with low reliability on the image processing model is reduced, and the reliability of the finally obtained image processing model is higher.
In the case where the credibility label of each image is used as the credibility of each image in step 301, the credibility label of each image may be directly used as the weight of each image.
In other embodiments, the confidence level of each image may be processed linearly or non-linearly to obtain the weight of each image.
Step 305: the image processing model is trained based on the total loss.
Alternatively, the image processing model may be back-propagated based on the total loss to iteratively update the neural network parameters in the image processing model, thereby completing the training of the model.
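A minimal PyTorch-style sketch of one such training step with credibility-weighted loss is shown below (the classification loss and the function names are assumptions; any per-image loss fits the scheme):

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, images, task_labels, credibility):
    """One credibility-weighted training step.

    credibility: float tensor of per-image credibility labels, used as loss weights.
    """
    optimizer.zero_grad()
    outputs = model(images)
    per_image_loss = F.cross_entropy(outputs, task_labels, reduction="none")  # loss_i per image
    weights = credibility / credibility.numel()   # weight w_i, positively correlated with credibility
    total_loss = (weights * per_image_loss).sum() # weighted total loss
    total_loss.backward()                         # back-propagate the total loss
    optimizer.step()                              # update the network parameters
    return total_loss.item()
```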
Preferably, as described in step 301 above, the sharpness of each image in the training image set may be calculated with the above image sharpness calculation method, and the sharpness of each image is taken as its credibility; the loss of each image can then be computed, the losses of at least some images in the training image set are weighted to obtain the total loss, and the image processing model is trained based on the total loss. Using the sharpness of the input images to characterize their credibility guides the training of the image processing model, so that the model pays more attention to high-sharpness data and the influence of low-sharpness data is appropriately reduced, which makes the trained image processing model more reliable.
Optionally, after the total loss is calculated, whether the current image processing model meets the training end condition may be determined based on the total loss; if not, the parameters of the image processing model continue to be optimized based on steps 302, 303, 304, and 305, and parameter updating stops once the image processing model meets the training end condition, at which point the trained image processing model is obtained.
Alternatively, the training end condition may be: the total loss is less than the loss threshold; or the number of parameter updates is greater than a number threshold, etc.
In this embodiment, the credibility of each image in the training image set is determined; processing each image by using an image processing model to obtain a processing result of each image; calculating a loss of each image based on a processing result of each image; weighting the losses of at least part of the images in the training image set to obtain total losses, wherein the weight of each image is positively correlated with the credibility of each image; the image processing model is trained based on the total loss. By utilizing the image credibility assistance, the image processing model is more focused on data with high credibility, so that the influence of data with low credibility on the image processing model is properly reduced, and the reliability of the model after training is higher.
To better illustrate the image processing model training method of the present application, the following specific embodiment is provided as an example:
s1: a training image set is acquired.
S2: and selecting a single image from the training image set as a target image to obtain the image containing the tough edges.
S21: and carrying out gray processing on the single image selected in the training image set to obtain a gray image (namely a target image).
S22: performing edge extraction on the target image to obtain an edge image; selecting N edge areas meeting a first preset condition based on the edge image, wherein the first preset condition is as follows:
a) The length of the edge line needs to be greater than a defined threshold T1 (T1 ≥ 100 pixels).
b) There is one and only one strong edge in the edge region, and the strong edge is located in the central region of the edge region.
c) The distance between the midpoint of the strong edge and the reference boundary of the edge region is greater than a defined threshold T2 (T2 ≥ 35 pixels); if the absolute value of the angle between the tangent line at the midpoint of the strong edge and the horizontal is greater than 30°, the reference boundary is the upper and lower boundaries, otherwise the reference boundary is the left and right boundaries.
S23: and straightening edge lines in an edge area image in the target image to obtain a hypotenuse image.
The slope k and intercept b of the tangent line at the midpoint of the strong edge in the edge region are calculated, and the edge is straightened by translation to obtain the hypotenuse image. Taking the reference boundary as the upper and lower boundaries as an example, the specific implementation is as follows: with the midpoint of the edge as the reference, for each row the target coordinate is computed from the line equation, the horizontal distance d between the target coordinate and the edge pixel of that row is calculated, and all pixels of that row are finally translated by d pixels as a whole, so that the moved edge point lies on the circumscribed straight line.
S24: and obtaining the image containing the tough edges based on the hypotenuse image.
Rotating the hypotenuse image to enable the angle of the hypotenuse to be in a [60 DEG, 120 DEG ] interval, and then intercepting a toughness-containing image meeting a second preset condition in the rotated hypotenuse image, wherein the second preset condition is that:
a) The midpoint of the edge line after the straightening treatment is positioned at the centroid of the image containing the tough edge.
b) The distance between the centroid and the boundary of the image containing the border is not less than a threshold T3 (T3. Gtoreq.20 pixel).
c) The minimum distance between the intersection point of the edge line subjected to straightening treatment and the boundary of the image containing the tough edge and the nearest boundary is not smaller than a threshold value T4 (T4 is not smaller than 5 pixels).
S3: the sharpness of the individual images is determined.
For a single image, multiple knife-edge images are extracted, the MTF50/SFR value of each knife-edge image is calculated, and the sharpness values of all knife-edge images are fused to obtain the sharpness of the single image.
S4: and calculating the definition of all images in the training image set, dividing a plurality of definition intervals by combining the definition distribution condition of all images in the training image set, and determining a credibility label (i.e. credibility) for each definition interval.
And calculating the definition of all images in the training image set, dividing M definition intervals by combining the overall definition distribution condition of the training image set, and determining a credibility label for each definition interval. The reliability label corresponding to the low-definition interval is small, whereas the reliability label corresponding to the high-definition interval is large, and the value range of the reliability label can be 0-1.
S5: the credibility of the single image in the training image set is determined.
The credibility label corresponding to the definition interval where each image in the training image set is located can be used as the credibility (also referred to as credibility label) of each image.
The credibility of each image may then be taken as the weighting coefficient of that image's loss. The credibility labels of all training images in the training image set are determined by the method of S2-S5.
S6: and guiding training of the image processing model according to the task label and the credibility label.
Highly credible data carries stronger certainty and a lower tolerance for error, so its proportion in the loss should be increased; conversely, data with low credibility should have its weight in the loss reduced appropriately. Using the credibility label as the weighting coefficient of the original loss achieves this and improves the reliability of the CNN model; the formula is as follows:
loss_overall = (1/N) · Σ_{i=1}^{N} w_i · loss_i, where the credibility label of the i-th image is taken as its weight w_i.
it should be noted that the credibility tag is only used to adjust the weight of the original loss, and does not participate in the calculation of the task tag.
Referring to FIG. 4, FIG. 4 is a schematic structural diagram of an embodiment of a computer device according to the present invention. The computer device includes a processor 50 and a memory 51 coupled to each other, which cooperate to implement the image sharpness calculation method or the image processing model training method described in any of the above embodiments. The memory 51 also stores at least one computer program 52 that runs on the processor 50, and the processor 50 implements the steps of any of the above method embodiments when executing the computer program 52.
The processor 50 may be a Central Processing Unit (CPU); the processor 50 may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 51 may be a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card), or the like. Further, the memory 51 may also include both an internal storage unit and an external storage device. The memory 51 is used to store an operating system, application programs, boot loader (BootLoader), data, and other programs, etc., such as program codes of computer programs, etc. The memory 51 may also be used to temporarily store data that has been output or is to be output.
The embodiments of the present application also provide a computer readable storage medium storing a computer program, where the computer program when executed by a processor implements steps of the foregoing method embodiments.
The present embodiments provide a computer program product which, when run on a terminal device, causes the terminal device to perform steps that enable the respective method embodiments described above to be implemented.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application implements all or part of the flow in the methods of the above embodiments, and may be implemented by a computer program to instruct related hardware, where the computer program may be stored in a computer readable storage medium, where the computer program may implement the steps of the above embodiments of the methods when executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, executable files or in some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a terminal device, a recording medium, a computer Memory, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), an electrical carrier signal, a telecommunication signal, and a software distribution medium. Such as a U-disk, removable hard disk, magnetic or optical disk, etc.
The foregoing description is only of embodiments of the present application, and is not intended to limit the scope of the patent application, and all equivalent structures or equivalent processes using the descriptions and the drawings of the present application or direct or indirect application in other related technical fields are included in the scope of the present application.

Claims (13)

1. An image sharpness calculation method, comprising:
performing edge extraction on the target image to obtain an edge image;
selecting an edge area based on the edge image;
straightening edge lines in the edge region image in the target image to obtain a hypotenuse image;
and calculating the definition of the target image based on the hypotenuse image.
2. The image sharpness calculation method according to claim 1, wherein the straightening process of edge lines in the edge area image in the target image includes:
and moving the pixel points on the edge line to the circumscribed line of the edge line so as to straighten the edge line into the circumscribed line.
3. The method for calculating the sharpness of an image according to claim 1, wherein,
only one edge line in the edge region;
the length of the edge line in the edge area is greater than or equal to a first threshold;
and the distance between the midpoint of the edge line in the edge area and the preset boundary of the edge area is greater than or equal to a second threshold.
4. The method for calculating the sharpness of an image according to claim 3, wherein,
the edge lines in the edge region are strong edge lines.
5. The image sharpness calculation method according to claim 1, wherein the calculating the sharpness of the target image based on the hypotenuse image includes:
rotating the hypotenuse image to enable the angle of the straightened edge line in the rotated hypotenuse image to be within a preset range;
and calculating the definition of the target image based on the rotated hypotenuse image.
6. The image sharpness calculation method according to claim 5, wherein the calculating the sharpness of the target image based on the rotated hypotenuse image includes:
cropping the rotated hypotenuse image to obtain a knife-edge image;
the midpoint of the straightened edge line is located at the centroid of the knife-edge image;
the distance between the centroid and the boundary of the knife-edge image is greater than or equal to a third threshold;
the minimum distance between a preset intersection point and the vertex of the knife-edge image is greater than or equal to a fourth threshold, and the preset intersection point is the intersection point of the straightened edge line and the boundary.
7. The image sharpness calculation method according to claim 1, wherein the number of the edge regions selected based on the edge image is at least two;
calculating the sharpness of the target image based on the hypotenuse image, comprising:
calculating the definition of the hypotenuse image corresponding to each edge area;
and fusing the definition of the hypotenuse images corresponding to all the edge areas to obtain the definition of the target image.
8. A method of training an image processing model, the method comprising:
determining the credibility of each image of the training image set;
processing each image by using the image processing model to obtain a processing result of each image;
calculating the loss of each image based on the processing result of each image;
weighting the loss of at least part of the images in the training image set to obtain total loss, wherein the weight of each image is positively correlated with the credibility of each image;
training the image processing model based on the total loss.
9. The method of claim 8, wherein the credibility of each image is determined by the sharpness of each image.
10. The method of claim 9, wherein determining the confidence level of each image of the training image set comprises:
the sharpness of each image is calculated based on the method of any of claims 1-7.
11. The method of claim 10, wherein the weight of each image is a weight corresponding to a sharpness interval in which sharpness of each image is located.
12. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the image sharpness calculation method according to any of claims 1 to 7 when executing the computer program; or a training method of an image processing model according to any one of claims 8 to 11.
13. A computer readable storage medium in which a computer program is stored, wherein the computer program, when executed by a processor, implements the image sharpness calculation method according to any of claims 1 to 7; or a training method of an image processing model according to any one of claims 8 to 11.
CN202211680193.0A 2022-12-21 2022-12-21 Image definition calculation method, image processing model training method and device Pending CN116012323A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211680193.0A CN116012323A (en) 2022-12-21 2022-12-21 Image definition calculation method, image processing model training method and device

Publications (1)

Publication Number Publication Date
CN116012323A true CN116012323A (en) 2023-04-25

Family

ID=86029175

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211680193.0A Pending CN116012323A (en) 2022-12-21 2022-12-21 Image definition calculation method, image processing model training method and device

Country Status (1)

Country Link
CN (1) CN116012323A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination