CN113052794A - Image definition recognition method based on edge features - Google Patents
- Publication number
- CN113052794A (application CN202110020428.2A)
- Authority
- CN
- China
- Prior art keywords: gray, image, edge, target image, value
- Prior art date: 2021-01-07
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis; G06T7/0002—Inspection of images, e.g. flaw detection
- G06T5/00—Image enhancement or restoration; G06T5/70—Denoising; Smoothing
- G06T5/90—Dynamic range modification of images or parts thereof; G06T5/92—based on global image properties
- G06T7/10—Segmentation; Edge detection; G06T7/13—Edge detection
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an image sharpness recognition method based on edge features, relating to the technical field of image processing. A target image is preprocessed, the preprocessing comprising grayscale transformation and denoising of the target image; the gray differences of the processed target image are obtained and the edge segments of the target image are determined; all gray edge gradient values of the whole image are then summed and averaged over the whole image to obtain an image sharpness identification value. The method has properties such as unimodality and consistency, the degree of sharpness and the evaluation result satisfy the unbiasedness condition, blurred images with different contents can be evaluated, and the visual quality of the image is well reflected.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to an image sharpness recognition method based on edge features.
Background
Digital image processing is a technology that converts image signals into digital signals and processes them with a computer. It is widely applied in many fields such as scientific inspection, industrial and agricultural production, and aerospace, and has become an attractive subject with broad prospects. As applications of image processing increase, so do the requirements for image quality; however, during shooting, factors such as relative motion between the target object and the camera or the surrounding environment cause the captured image to be blurred, and the degree of blur has a great influence on subsequent image processing. It is therefore necessary to evaluate the degree of sharpness of an image before image processing.
The evaluation of image sharpness is divided into subjective evaluation and objective evaluation. In subjective evaluation, an evaluator scores the image blur according to evaluation rules and an evaluation scale, but manual observation is easily affected by personal subjective factors, making the assessment of image quality unreliable, and prolonged observation easily causes visual fatigue, so the evaluation work proceeds slowly and cannot meet the demands of the rapid development of modern enterprises. Objective evaluation uses an algorithm to assess image sharpness automatically, but current algorithms have high computational complexity and a limited range of application.
Chinese patent CN111754491A discloses a method and a device for determining picture sharpness for search, in which a target detection model detects and identifies the objects contained in the picture to be assessed to obtain at least one main object, and the images of these main objects are extracted from the picture. Edge detection is performed on each main-object image, and its sharpness is calculated from the edge-detection result; the sharpness of the whole picture is then obtained from the sharpness of each main-object image. By identifying and extracting the main-object images and calculating their sharpness, this scheme avoids the influence of a blurred or sharp background on the sharpness of the whole picture and thereby improves the accuracy of the sharpness judgment, but it is computationally expensive and complex.
An effective solution to the problems in the related art has not been proposed yet.
Disclosure of Invention
In view of the problems in the related art, the invention provides an image sharpness recognition method based on edge features to overcome the above technical problems in the existing related art.
The technical scheme of the invention is realized as follows:
An image sharpness recognition method based on edge features comprises the following steps:
Step S1, preprocessing the target image, wherein the preprocessing comprises grayscale transformation and denoising of the target image;
Step S2, acquiring the gray differences of the processed target image and determining the edge segments of the target image;
and Step S3, summing all the gray edge gradient values of the whole image and averaging them over the whole image to obtain an image sharpness identification value.
Further, the grayscale transformation of the target image is expressed as:
gray = 0.299·R + 0.587·G + 0.114·B
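For example, under this weighting an illustrative pixel with R = 200, G = 150, B = 100 maps to gray = 0.299·200 + 0.587·150 + 0.114·100 = 59.8 + 88.05 + 11.4 ≈ 159.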
Further, the step of obtaining the gray differences of the processed target image comprises the following steps:
scanning the gray value of each pixel, the adjacent gray difference being expressed as:
Δgray = |g(i, j+1) - g(i, j)|, (0 ≤ i ≤ m, 0 ≤ j ≤ n-1)
where g(i, j) represents the gray value at coordinates (i, j);
obtaining the gray difference Δgray between two adjacent points; wherever Δgray ≥ Tgray, the position is marked as an image edge segment.
Further, the step of summing all the gray edge slope values of the whole image comprises the following steps:
selecting three points whose gray differences are larger than the gray threshold;
calculating the slopes of the three line segments respectively;
obtaining their average value, the slope values being respectively expressed as:
ΔGj+2 = |g(i, j+2) - g(i, j+1)|
ΔGj+1 = |g(i, j+1) - g(i, j)|
where ΔG(i, j) denotes a gray edge gradient value calculated after a gray edge has been determined;
all gray edge slope values of the whole target image are then summed.
Further, the method also comprises the following steps:
obtaining the sum sumH of the gray edge gradient values in the vertical direction of the target image;
averaging the gray edge slope values over the entire image to obtain the sharpness identification value d.
the invention has the beneficial effects that:
the invention relates to an image definition recognition method based on edge characteristics, which comprises the steps of preprocessing a target image, acquiring a gray level difference value of the processed target image, and determining an edge section of the target image; and summing all the gray scale edge gradient values of the whole image, and averaging the gray scale edge gradient values on the whole image to obtain an image definition identification value. The method has the advantages of unimodal performance, consistency and the like, the definition degree and the evaluation result meet the unbiased condition, the blurred images with different contents can be evaluated, and the visual effect of the image quality is well reflected.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart illustrating an image sharpness identifying method based on edge features according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present invention.
According to the embodiment of the invention, an image definition identification method based on edge features is provided.
As shown in fig. 1, an image sharpness recognition method based on edge features according to an embodiment of the present invention includes the following steps:
Step S1, preprocessing the target image, wherein the preprocessing comprises grayscale transformation and denoising of the target image;
Step S2, acquiring the gray differences of the processed target image and determining the edge segments of the target image;
and Step S3, summing all the gray edge gradient values of the whole image and averaging them over the whole image to obtain an image sharpness identification value.
The grayscale transformation of the target image is expressed as:
gray = 0.299·R + 0.587·G + 0.114·B
Reducing the three RGB channels of the image to a single gray channel greatly reduces the data volume of subsequent image processing. In addition to grayscale conversion, the image must also be denoised to avoid erroneous results caused by noise interference.
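As an illustration of this preprocessing step, the minimal sketch below converts an RGB array to a single gray channel with the weights above and applies a simple 3×3 mean filter. The patent does not specify which denoising filter is used, so the mean filter and the function names are assumptions.

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Weighted grayscale conversion: gray = 0.299*R + 0.587*G + 0.114*B."""
    rgb = rgb.astype(np.float64)
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def denoise_mean3x3(gray: np.ndarray) -> np.ndarray:
    """3x3 mean filter, used here as a stand-in for the unspecified denoising step."""
    padded = np.pad(gray, 1, mode="edge")
    out = np.zeros(gray.shape, dtype=np.float64)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += padded[1 + di:1 + di + gray.shape[0], 1 + dj:1 + dj + gray.shape[1]]
    return out / 9.0
```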
The step of obtaining the gray differences of the processed target image comprises the following steps:
scanning the gray value of each pixel, the adjacent gray difference being expressed as:
Δgray = |g(i, j+1) - g(i, j)|, (0 ≤ i ≤ m, 0 ≤ j ≤ n-1)
where g(i, j) represents the gray value at coordinates (i, j);
obtaining the gray difference Δgray between two adjacent points; wherever Δgray ≥ Tgray, the position is marked as an image edge segment.
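A minimal sketch of this scan, assuming a row-wise (horizontal) pass and a caller-supplied threshold t_gray (the patent does not give a concrete threshold value; the function name is illustrative):

```python
import numpy as np

def adjacent_gray_diff(gray: np.ndarray, t_gray: float) -> tuple[np.ndarray, np.ndarray]:
    """Row-wise adjacent gray differences and the edge mask.

    delta[i, j] = |g(i, j+1) - g(i, j)|; positions where delta >= t_gray are
    treated as belonging to an image edge segment.
    """
    delta = np.abs(np.diff(gray.astype(np.float64), axis=1))  # shape (m, n-1)
    return delta, delta >= t_gray
```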
The step of summing all the gray edge gradient values of the whole image comprises the following steps:
selecting three points whose gray differences are larger than the gray threshold;
calculating the slopes of the three line segments respectively;
obtaining their average value, the slope values being respectively expressed as:
ΔGj+2 = |g(i, j+2) - g(i, j+1)|
ΔGj+1 = |g(i, j+1) - g(i, j)|
where ΔG(i, j) denotes a gray edge gradient value calculated after a gray edge has been determined;
all gray edge slope values of the whole target image are then summed.
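The two surviving slope expressions suggest consecutive absolute differences around each detected edge start. The sketch below assumes the third slope is the next consecutive difference |g(i, j+3) - g(i, j+2)| and that the scan proceeds along rows; both are assumptions where the original formulas are not reproduced in the text.

```python
import numpy as np

def edge_slope_sum(gray: np.ndarray, t_gray: float) -> tuple[float, int]:
    """Sum the per-edge average slope values over a row-wise scan.

    At each position (i, j) where |g(i, j+1) - g(i, j)| >= t_gray, the three
    consecutive absolute differences are averaged and accumulated.
    Returns (sum of the averages, number of detected edge positions).
    """
    g = gray.astype(np.float64)
    m, n = g.shape
    total, count = 0.0, 0
    for i in range(m):
        for j in range(n - 3):
            d1 = abs(g[i, j + 1] - g[i, j])
            if d1 >= t_gray:                         # gray difference exceeds the threshold
                d2 = abs(g[i, j + 2] - g[i, j + 1])
                d3 = abs(g[i, j + 3] - g[i, j + 2])  # assumed third slope term
                total += (d1 + d2 + d3) / 3.0        # average slope of the three segments
                count += 1
    return total, count
```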
The method further comprises the following steps:
obtaining the sum sumH of the gray edge gradient values in the vertical direction of the target image;
averaging the gray edge slope values over the entire image to obtain the sharpness identification value d.
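The formula for d is not reproduced in the text, so the sketch below, built on the edge_slope_sum sketch above, assumes one plausible reading: sum the edge-slope averages from a horizontal and a vertical scan and divide by the number of detected edge positions (dividing by the pixel count m·n is another possible reading).

```python
import numpy as np

def sharpness_d(gray: np.ndarray, t_gray: float) -> float:
    """Sharpness identification value d from horizontal and vertical scans (assumed form)."""
    total_h, count_h = edge_slope_sum(gray, t_gray)    # scan along rows
    total_v, count_v = edge_slope_sum(gray.T, t_gray)  # scan along columns via the transpose
    edges = count_h + count_v
    return (total_h + total_v) / edges if edges else 0.0
```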
By means of the above technical solution, the target image is preprocessed, the gray differences of the processed target image are obtained, and the edge segments of the target image are determined; all gray edge gradient values of the whole image are then summed and averaged over the image to obtain an image sharpness identification value. The method has properties such as unimodality and consistency, the degree of sharpness and the evaluation result satisfy the unbiasedness condition, blurred images with different contents can be evaluated, and the visual quality of the image is well reflected.
In addition, a new gradient calculation method is proposed based on the fact that the edge gray values of a sharp image change abruptly while those of a blurred image change gently. The gray values of consecutive pixels are regarded as line segments, and whether a segment is flat is judged by its slope: three points whose gray differences are larger than the gray threshold are selected, the slopes of the three segments are calculated respectively, and finally their average is taken. The larger the average gray edge gradient, the more abrupt the gray change and the sharper the image; conversely, the smaller the average gray edge gradient, the gentler the gray change and the more blurred the image.
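A small usage check of the sketches above, under the same assumptions, on two synthetic strips: with these inputs the scan finds one edge per row in the hard step and two per row in the smeared step, and the per-edge slope average is larger for the sharper strip.

```python
import numpy as np

# 8x8 test strips: a hard step edge vs. the same edge spread over several pixels.
sharp = np.tile(np.array([0, 0, 0, 0, 255, 255, 255, 255], dtype=np.float64), (8, 1))
blurred = np.tile(np.array([0, 0, 0, 32, 128, 224, 255, 255], dtype=np.float64), (8, 1))

print(sharpness_d(sharp, t_gray=80))    # 85.0   - abrupt gray change, large edge-slope average
print(sharpness_d(blurred, t_gray=80))  # ~58.3  - gentler change, smaller average
```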
In summary, according to the above technical solution of the present invention, the target image is preprocessed, the gray differences of the processed target image are obtained, and the edge segments of the target image are determined; all gray edge gradient values of the whole image are then summed and averaged over the image to obtain an image sharpness identification value. The method has properties such as unimodality and consistency, the degree of sharpness and the evaluation result satisfy the unbiasedness condition, blurred images with different contents can be evaluated, and the visual quality of the image is well reflected.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (5)
1. An image definition recognition method based on edge features is characterized by comprising the following steps:
preprocessing a target image, wherein gray level transformation and denoising processing are carried out on the target image;
acquiring a gray difference value of the processed target image, and determining an edge segment of the target image;
and summing all the gray scale edge gradient values of the whole image, and averaging the gray scale edge gradient values on the whole image to obtain an image definition identification value.
2. An image sharpness recognition method according to claim 1, wherein the step of performing a gray-scale transformation on the target image is represented as:
gray = 0.299·R + 0.587·G + 0.114·B
3. The method for recognizing image sharpness based on edge features according to claim 1, wherein the step of obtaining the gray differences of the processed target image comprises the steps of:
scanning the gray value of each pixel, the adjacent gray difference being expressed as:
Δgray = |g(i, j+1) - g(i, j)|, (0 ≤ i ≤ m, 0 ≤ j ≤ n-1)
where g(i, j) represents the gray value at coordinates (i, j); and
obtaining the gray difference Δgray between two adjacent points, wherein a position with Δgray ≥ Tgray is marked as an image edge segment.
4. An image sharpness identification method according to claim 1, wherein the step of summing all gray edge gradient values of the whole image comprises the following steps:
selecting three points whose gray differences are larger than the gray threshold;
calculating the slopes of the three line segments respectively;
obtaining their average value, the slope values being respectively expressed as:
ΔGj+2 = |g(i, j+2) - g(i, j+1)|
ΔGj+1 = |g(i, j+1) - g(i, j)|
where ΔG(i, j) denotes a gray edge gradient value calculated after a gray edge has been determined; and
summing all gray edge slope values of the whole target image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110020428.2A CN113052794A (en) | 2021-01-07 | 2021-01-07 | Image definition recognition method based on edge features |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110020428.2A CN113052794A (en) | 2021-01-07 | 2021-01-07 | Image definition recognition method based on edge features |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113052794A true CN113052794A (en) | 2021-06-29 |
Family
ID=76508303
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110020428.2A Pending CN113052794A (en) | 2021-01-07 | 2021-01-07 | Image definition recognition method based on edge features |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113052794A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115734069A (en) * | 2022-11-16 | 2023-03-03 | 安徽宝信信息科技有限公司 | Automatic tracking and image capturing system |
CN116674134A (en) * | 2023-08-03 | 2023-09-01 | 绵阳华远同创科技有限公司 | Automatic casting processing method and system for resin words |
CN116674134B (en) * | 2023-08-03 | 2023-10-20 | 绵阳华远同创科技有限公司 | Automatic casting processing method and system for resin words |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114937055B (en) | Image self-adaptive segmentation method and system based on artificial intelligence | |
CN112819772B (en) | High-precision rapid pattern detection and recognition method | |
WO2021129569A1 (en) | Human action recognition method | |
CN105279772B (en) | A kind of trackability method of discrimination of infrared sequence image | |
CN111161222B (en) | Printing roller defect detection method based on visual saliency | |
CN115294099B (en) | Method and system for detecting hairline defect in steel plate rolling process | |
WO2017193414A1 (en) | Image corner detection method based on turning radius | |
CN115841434A (en) | Infrared image enhancement method for gas concentration analysis | |
CN113052794A (en) | Image definition recognition method based on edge features | |
CN112085651B (en) | Automatic shock wave detection and tracking algorithm based on image self-adaptive threshold and feature extraction | |
CN115311289A (en) | Method for detecting oil stain defects of plain-color cloth | |
CN116051820A (en) | Single target detection method based on multiple templates | |
CN111429372A (en) | Method for enhancing edge detection effect of low-contrast image | |
CN113298769A (en) | FPC flexible flat cable appearance defect detection method, system and medium | |
CN114332079A (en) | Plastic lunch box crack detection method, device and medium based on image processing | |
CN109671084B (en) | Method for measuring shape of workpiece | |
CN113781523B (en) | Football detection tracking method and device, electronic equipment and storage medium | |
CN114998186A (en) | Image processing-based method and system for detecting surface scab defect of copper starting sheet | |
CN116935496B (en) | Electronic cigarette smoke visual detection method | |
CN110186929A (en) | A kind of real-time product defect localization method | |
CN114067122B (en) | Two-stage binarization image processing method | |
CN113643290B (en) | Straw counting method and device based on image processing and storage medium | |
CN115601301A (en) | Fish phenotype characteristic measuring method, system, electronic device and storage medium | |
CN112085683A (en) | Depth map reliability detection method in significance detection | |
CN114994072A (en) | Magnetic bar end surface defect detection method based on machine vision |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20210629 |