CN117252783A — Sharpness calculation method, apparatus and device for defocus-blurred images

Publication number: CN117252783A (granted as CN117252783B)
Application number: CN202311463142.7A
Authority: CN; original language: Chinese (zh)
Inventors: 吕明珠; 李珂
Applicant and assignee: Shanghai Weijing Technology Co., Ltd.
Legal status: Granted, Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 — Image acquisition modality
    • G06T 2207/10004 — Still image; Photographic image

Abstract

The application discloses a sharpness calculation method, apparatus and device for defocus-blurred images. The method comprises the following steps: dividing an image into a plurality of first areas; calculating the luminance sum of each first area; dividing a plurality of second areas from all the first areas, wherein each second area comprises at least part of the first areas; sorting the luminance sums of the first areas contained in each second area; calculating the sharpness value of each second area according to the sorting result; and adding the sharpness values of all the second areas to obtain the image sharpness value of the image. The apparatus comprises: a first image dividing module, a first calculation module, a second image dividing module, a luminance-sum sorting module, a first sharpness calculation module and a second sharpness calculation module. The method makes sharpness calculation for defocus-blurred pictures simple and accurate, and effectively avoids misjudgment of the sharpness value caused by a high overall brightness mean when the image is blurred.

Description

Sharpness calculation method, apparatus and device for defocus-blurred images
Technical Field
The application relates to the technical field of image processing, and in particular to a sharpness calculation method for defocus-blurred images.
Background
Existing image sharpness evaluation methods are essentially edge-detection methods, operating in the frequency domain, the spatial domain and so on; their core idea is to judge image sharpness from the sharpness of object edges in the imaged scene. The principle is that the closer the lens is to the in-focus position, the sharper the edges and the sharper the image. However, at defocus positions far from the in-focus position the image is extremely blurred and contains no clear object texture, so methods that rely on edge detection often fail to give an accurate sharpness judgment, which in turn makes autofocus impossible. Edge-detection-based sharpness evaluation at far-defocus positions often exhibits the following problem: the more blurred the image, the higher the computed sharpness value. The reason is that the more blurred the image, the higher its overall brightness mean, and this brightness slightly inflates the edge-detection response. Using the luminance variance of individual pixels also cannot give an accurate sharpness evaluation in extremely blurred scenes, because the neighbourhood pixel values of a single pixel are nearly identical, and the variance method is markedly disturbed by noise.
Disclosure of Invention
In view of this, the present application provides a sharpness calculation method, apparatus and device for defocus-blurred images, so as to solve the above technical problem of inaccurate image sharpness judgment.
Specifically, the technical scheme of the application is as follows:
a definition computing method of an out-of-focus blurred image comprises the following steps:
dividing an image into a plurality of first areas;
calculating the brightness sum of each first area;
dividing a plurality of second areas from all the first areas, wherein each second area at least comprises part of the first areas;
sorting the sum of the brightness of the first areas contained in each second area;
calculating the definition value of each second area according to the sequencing result;
and adding the definition value of each second area to obtain the image definition value of the image.
In some implementations, calculating the luminance sum of each first region specifically includes:
calculating the luminance sum of each first region according to the height of the first region, the width of the first region and the gray values of the image.
In some implementations, calculating the sharpness value of each second region according to the sorting result specifically includes:
after sorting, obtaining the maximum, minimum, sub-maximum and sub-minimum of the luminance sums of the first regions contained in each second region;
in each second region, calculating a first luminance difference and a first luminance ratio from the maximum and the minimum, and a second luminance difference and a second luminance ratio from the sub-maximum and the sub-minimum.
In some implementations,
the first luminance difference is the maximum minus the minimum, and the second luminance difference is the sub-maximum minus the sub-minimum;
the first luminance ratio is the ratio of the maximum to the minimum, and the second luminance ratio is the ratio of the sub-maximum to the sub-minimum.
In some implementations, the sharpness value of each second region is obtained by multiplying together its first luminance difference, second luminance difference, first luminance ratio and second luminance ratio.
In some implementations, dividing a plurality of second regions from all the first regions specifically includes:
selecting, as central first regions of the second regions, first regions surrounded on all sides by adjacent first regions; and dividing the plurality of second regions from all the first regions according to the plurality of central first regions, wherein each second region comprises at least part of the first regions.
In some implementations, after adding the sharpness values of the second regions, the method further includes:
selecting a weighting value and multiplying it by the sum of the sharpness values of the second regions to obtain the image sharpness value.
In some implementations, the first regions are rectangular, and all first regions have the same height and the same width.
Based on the same technical conception, the application also provides a sharpness calculation apparatus for defocus-blurred images, comprising:
a first image dividing module for dividing an image into a plurality of first areas;
a first calculation module for calculating the luminance sum of each first area;
a second image dividing module for dividing a plurality of second areas from all the first areas, each second area comprising at least part of the first areas;
a luminance-sum sorting module for sorting the luminance sums of the first areas contained in each second area;
a first sharpness calculation module for calculating the sharpness value of each second area according to the sorting result;
and a second sharpness calculation module for adding the sharpness values of all the second areas to obtain the image sharpness value of the image.
Based on the same technical conception, the application also provides an image processing device comprising the above sharpness calculation apparatus for defocus-blurred images.
Compared with the prior art, the application has at least one of the following beneficial effects:
1. The image is divided into a plurality of first areas; the luminance sums of the first areas contained in each of a plurality of second areas divided from all the first areas are sorted; and the sharpness value is calculated from the sorting result. This effectively avoids the problem that a more blurred image is judged to have a higher sharpness value because of its higher overall brightness mean.
2. The sharpness value of the whole picture is computed with a weighting value, further improving the reliability of the sharpness calculation result.
3. The method does not rely on edge sharpness to judge the degree of image sharpness, solving the problem that an accurate sharpness value cannot be given at defocus positions far from the in-focus position.
Drawings
The above features, technical features, advantages and implementation of the present application will be further described in the following description of preferred embodiments in a clear and easily understood manner with reference to the accompanying drawings.
FIG. 1 is a flow chart of one embodiment of a method for sharpness calculation of an out-of-focus blurred image of the present application;
FIG. 2 is a schematic view of area division in one embodiment of the sharpness calculation method for defocus-blurred images of the present application;
FIG. 3 is a block diagram of one embodiment of a sharpness calculation apparatus for out-of-focus blur images of the present application.
Reference numerals illustrate: 1-first region, 2-second region, 3-central first region.
Detailed Description
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the following description will explain specific embodiments of the present application with reference to the accompanying drawings. It is obvious that the drawings in the following description are only examples of the present application, and that other drawings and other embodiments may be obtained from these drawings by those skilled in the art without undue effort.
For simplicity, each drawing schematically shows only the parts relevant to the application; they do not represent the actual structure of a product. In addition, to simplify the drawings for ease of understanding, where several components in a drawing have the same structure or function, only one of them is drawn or labeled. Herein, "a" covers not only the case of "only one" but also the case of "more than one".
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
In this context, it should be noted that unless explicitly stated or limited otherwise, the terms "mounted," "connected," and "coupled" are to be construed broadly: the connection may, for example, be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intermediate medium, or an internal communication between two elements. The specific meaning of these terms in this application will be understood by those of ordinary skill in the art according to the specific context.
In addition, in the description of the present application, the terms "first," "second," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
The method obtains statistical information by analysing image brightness: the more blurred the image, the closer the brightness of the brightest and darkest areas of the whole picture become; that is, as sharpness decreases, dark areas of the image become brighter and bright areas become darker. The sharpness of the picture can therefore be evaluated from the distribution of brightness variation across the whole picture.
The application discloses a sharpness calculation method for defocus-blurred images which, referring to FIG. 1 of the specification, comprises the following steps:
s100, dividing an image into a plurality of first areas;
specifically, when the image is acquired, the image is first divided into a plurality of first areas, and the shape of the image can be regular patterns such as a rectangle, a triangle, a pentagon and the like, or other irregular patterns. Each graph can be the same in size or different in size and is divided according to actual requirements. However, when dividing the image, it is necessary to ensure that the image is completely divided, thereby improving accuracy in the subsequent sharpness calculation and reducing occurrence of uncomputed regions.
S200, calculating the brightness sum of each first area;
specifically, after the image is divided into a plurality of first areas, image brightness analysis is carried out on the first areas one by one, and brightness sum of each first area is counted.
S300, dividing a plurality of second areas from all the first areas, wherein each second area at least comprises part of the first areas;
specifically, all the first areas are divided into a plurality of second areas, and each second area comprises a plurality of first areas. For example, in step S100, the picture is divided into 9×9 rectangular first areas, and then, based on 81 first areas, 3×3 rectangular second areas, that is, 9 second areas, each of which includes 9 first areas, are divided. Meanwhile, the second area is divided on the basis of the first areas, the area outline of the second area is formed by a plurality of first areas, and a new area is formed without splitting a certain first area.
S400, sorting the brightness sum of the first areas contained in each second area;
specifically, in step S200, the brightness and calculation of each first region have been completed, and the brightness and calculation of the first regions included in the second region are ordered, so that the brightness change in the image local region is further analyzed.
S500, calculating the definition value of each second area according to the sorting result;
specifically, the picture is firstly divided into a plurality of first areas, then the first areas are used as the basis for dividing a plurality of second areas, the brightness sum of the first areas in each second area is ordered, the calculation of the definition value of each second area is carried out on the basis, the definition value of the whole picture is calculated from a smaller angle, and the accuracy of the definition value calculation is improved.
S600, adding the sharpness values of all the second areas to obtain the image sharpness value of the image.
In one embodiment, the image is first divided into 32×32 first regions, each of the same size and rectangular in shape;
the luminance sum of each first region is then calculated, giving 1024 luminance-sum results. The brightness of an image is typically determined from the gray value of each pixel, which represents the brightness or gray level of a point in the image; the luminance sum is therefore obtained by accumulating the gray values over the area of the first region. A gray value is an integer between 0 (black) and 255 (white), with intermediate values representing different shades of gray.
In one implementation, calculating the luminance sum of each first region specifically includes:
calculating the luminance sum of each first region according to the height of the first region, the width of the first region and the gray values of the image.
Specifically, the luminance sum of each first region is calculated by Equation 1:

$$\mathrm{LumaSum}_{blk_i} = \sum_{y=1}^{Blk_H} \sum_{x=1}^{Blk_W} p_i(x, y) \tag{1}$$

where $\mathrm{LumaSum}_{blk_i}$ is the luminance sum of the first region, $p_i$ is the pixel gray value of the image, $Blk_H$ is the height of each first region, $Blk_W$ is the width of each first region, and $blk_i$ is the number of the first region.
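Equation 1 can be sketched in pure Python as follows (a minimal illustration; the function name and the list-of-lists image representation are assumptions, not part of the patent):

```python
def block_luma_sums(gray, blk_h, blk_w):
    """Sum the gray values inside each blk_h x blk_w first region
    (Equation 1).

    `gray` is a 2-D list of pixel gray values (0-255); the image height
    and width are assumed to be exact multiples of blk_h and blk_w.
    Returns a 2-D list of per-region luminance sums.
    """
    rows = len(gray) // blk_h
    cols = len(gray[0]) // blk_w
    sums = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # accumulate all pixels of the (r, c) first region
            for y in range(r * blk_h, (r + 1) * blk_h):
                for x in range(c * blk_w, (c + 1) * blk_w):
                    sums[r][c] += gray[y][x]
    return sums
```

Summing whole blocks rather than inspecting single pixels is what makes the method robust to noise, as the Background notes for the single-pixel variance approach.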
In one implementation, calculating the sharpness value of each second region according to the sorting result specifically includes:
after sorting, obtaining the maximum, minimum, sub-maximum and sub-minimum of the luminance sums of the first regions contained in each second region;
in each second region, calculating a first luminance difference and a first luminance ratio from the maximum and the minimum, and a second luminance difference and a second luminance ratio from the sub-maximum and the sub-minimum.
The first luminance difference is the maximum minus the minimum, and the second luminance difference is the sub-maximum minus the sub-minimum;
the first luminance ratio is the ratio of the maximum to the minimum, and the second luminance ratio is the ratio of the sub-maximum to the sub-minimum.
Specifically, the first luminance difference is calculated by Equation 2, and the second luminance difference by Equation 3:

$$Diff_1 = \mathrm{LumaSum}_{max} - \mathrm{LumaSum}_{min} \tag{2}$$

$$Diff_2 = \mathrm{LumaSum}_{submax} - \mathrm{LumaSum}_{submin} \tag{3}$$

where $\mathrm{LumaSum}_{max}$, $\mathrm{LumaSum}_{min}$, $\mathrm{LumaSum}_{submax}$ and $\mathrm{LumaSum}_{submin}$ are the maximum, minimum, sub-maximum and sub-minimum of the luminance sums of the first regions contained in the second region, and $Diff_1$ and $Diff_2$ are the first and second luminance differences.

The first luminance ratio is calculated by Equation 4, and the second luminance ratio by Equation 5:

$$Ratio_1 = \mathrm{LumaSum}_{max} / \mathrm{LumaSum}_{min} \tag{4}$$

$$Ratio_2 = \mathrm{LumaSum}_{submax} / \mathrm{LumaSum}_{submin} \tag{5}$$

where $Ratio_1$ and $Ratio_2$ are the first and second luminance ratios.
In one implementation, the sharpness value of each second area is obtained by multiplying together its first luminance difference, second luminance difference, first luminance ratio and second luminance ratio.
Specifically, the sharpness value of each second region is calculated by Equation 6:

$$FV_j = Diff_1 \times Diff_2 \times Ratio_1 \times Ratio_2 \tag{6}$$

After the sharpness value of each second area is obtained, step S600 adds the sharpness values of all the second areas to obtain the image sharpness value, calculated by Equation 7:

$$FV = \sum_{j} FV_j \tag{7}$$

where $FV$ is the image sharpness value of the image and $FV_j$ is the sharpness value of the $j$-th second region.
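Equations 2 through 6 for a single second region can be sketched as follows (illustrative names; the sketch assumes the minimum and sub-minimum luminance sums are positive so that the ratios are defined):

```python
def region_sharpness(luma_sums):
    """Sharpness value of one second region (Equations 2-6).

    `luma_sums` holds the luminance sums of the first regions contained
    in the second region. After sorting, the maximum, minimum,
    sub-maximum and sub-minimum are combined into two differences and
    two ratios, whose product is the region's sharpness value.
    """
    s = sorted(luma_sums)
    lo, sub_lo = s[0], s[1]      # minimum, sub-minimum
    hi, sub_hi = s[-1], s[-2]    # maximum, sub-maximum
    diff1 = hi - lo              # Equation 2
    diff2 = sub_hi - sub_lo      # Equation 3
    ratio1 = hi / lo             # Equation 4 (assumes lo > 0)
    ratio2 = sub_hi / sub_lo     # Equation 5 (assumes sub_lo > 0)
    return diff1 * diff2 * ratio1 * ratio2   # Equation 6
```

A perfectly uniform region yields 0, matching the observation that blur drives the brightest and darkest areas toward the same luminance.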
In one implementation, dividing a plurality of second regions from all the first regions specifically includes:
selecting, as central first regions of the second regions, first regions surrounded on all sides by adjacent first regions, and dividing the plurality of second regions from all the first regions according to the plurality of central first regions, wherein each second region comprises at least part of the first regions.
Specifically, after the image is divided into a plurality of first areas, for example into 9×9 rectangular first areas, 3×3 = 9 rectangular second areas are divided on the basis of the 81 first areas, each second area comprising 9 first areas. When selecting a central first region, the first region at position (2, 2) may be chosen, i.e. among the 81 rectangular first regions, the one second from the left and second from the top. Its neighbours are the first regions at positions (1, 2) on the left, (3, 2) on the right, (2, 1) above, (2, 3) below, and (1, 1), (3, 1), (1, 3) and (3, 3) at the four corners; these 8 adjacent first regions together with the central first region form one second region.
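The sliding selection of central first regions can be sketched as follows (names are illustrative; `radius=1` gives the 3×3 neighbourhood of the example above, while `radius=2` gives the 5×5 neighbourhood used in the later embodiment):

```python
def second_regions(sums, radius=1):
    """Build one second region around each eligible central first region.

    `sums` is the 2-D grid of first-region luminance sums. Every first
    region whose (2*radius+1) x (2*radius+1) neighbourhood lies fully
    inside the grid becomes a central first region; that neighbourhood
    (the centre plus its 8 neighbours when radius=1) forms one second
    region.
    """
    rows, cols = len(sums), len(sums[0])
    regions = []
    for r in range(radius, rows - radius):
        for c in range(radius, cols - radius):
            regions.append([sums[y][x]
                            for y in range(r - radius, r + radius + 1)
                            for x in range(c - radius, c + radius + 1)])
    return regions
```

Because the window slides one first region at a time, an N×N grid of first regions yields (N − 2·radius)² overlapping second regions, which is how 32×32 first regions produce 28×28 second regions at radius 2.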
In one implementation, after adding the sharpness values of the second regions, the method further includes:
selecting a weighting value and multiplying it by the sum of the sharpness values of the second regions to obtain the image sharpness value.
Specifically, the sharpness value of the whole picture is calculated by introducing a weighting value, as in Equation 8:

$$FV = \omega \times \sum_{j} FV_j \tag{8}$$

where $\omega$ is the introduced weighting value.
In one implementation of the sharpness calculation method for defocus-blurred images, the first areas are rectangular, and all first areas have the same height and the same width.
In one embodiment, referring to FIG. 2 of the specification, the image is divided into 32×32 rectangular first areas 1, each of the same height and width. The luminance sum of each of the 1024 first areas 1 is calculated according to Equation 1. A plurality of central first areas 3 are then selected from the 32×32 grid: in this embodiment, the first area 1 at position (3, 3) is taken as the first central first area 3, and the block of first areas 1 centred on it forms the first second area 2; the adjacent first area 1 at position (3, 4) is then taken as the centre of the next second area 2, and so on, sliding the centre one first area 1 at a time until 28×28 second areas 2 are formed.
In each second area 2, the luminance sums of the first areas 1 it contains are sorted; for example, in the first second area 2 the luminance sums of its 25 first areas 1 are sorted to obtain the maximum, minimum, sub-maximum and sub-minimum of the luminance sums. Sorting is performed in every second area 2 to obtain the sorting results.
The first and second luminance differences in each second area 2 are calculated according to Equations 2 and 3, and the first and second luminance ratios according to Equations 4 and 5. These four results are substituted into Equation 6 to calculate the sharpness value of each second area 2.
The sharpness values of the 28×28 second areas 2 are then summed to obtain the sharpness value of the whole picture. Because first areas at the edges of the picture are never established as central first areas 3, a relatively acceptable error exists there; a weighting value, which may be designed by the technician, can be introduced to improve the accuracy of the picture's sharpness value and reduce this error.
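The whole embodiment can be combined into one self-contained sketch (a hedged illustration, not the patent's reference implementation; the function name, the list-of-lists image format, and the default weighting value of 1.0 are assumptions):

```python
def image_sharpness(gray, blk_h, blk_w, radius=2, weight=1.0):
    """End-to-end sketch: Equation 1 block sums, sliding second regions,
    Equations 2-6 per region, and the weighted total of Equations 7-8.

    `gray` is a 2-D list of gray values whose dimensions are assumed to
    be exact multiples of blk_h and blk_w; radius=2 gives 5x5 second
    regions (25 first regions each), as in the 32x32 -> 28x28 embodiment.
    Assumes the luminance sums are positive so the ratios are defined.
    """
    rows, cols = len(gray) // blk_h, len(gray[0]) // blk_w
    # Equation 1: luminance sum of each first region
    sums = [[sum(gray[y][x]
                 for y in range(r * blk_h, (r + 1) * blk_h)
                 for x in range(c * blk_w, (c + 1) * blk_w))
             for c in range(cols)]
            for r in range(rows)]
    fv = 0.0
    for r in range(radius, rows - radius):
        for c in range(radius, cols - radius):
            # one second region: the sorted neighbourhood of sums
            region = sorted(sums[y][x]
                            for y in range(r - radius, r + radius + 1)
                            for x in range(c - radius, c + radius + 1))
            diff1 = region[-1] - region[0]           # Equation 2
            diff2 = region[-2] - region[1]           # Equation 3
            ratio1 = region[-1] / region[0]          # Equation 4
            ratio2 = region[-2] / region[1]          # Equation 5
            fv += diff1 * diff2 * ratio1 * ratio2    # Equations 6-7
    return weight * fv                               # Equation 8
```

With a 32×32 grid of first regions and radius 2, the centres range over 28×28 positions, matching the embodiment; the weighting value compensates for the uncovered border first regions.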
Based on the same technical conception, the application also provides a sharpness calculation apparatus for defocus-blurred images which, referring to FIG. 3 of the specification, comprises:
a first image dividing module 10 for dividing an image into a plurality of first areas;
Specifically, when the first image dividing module 10 acquires an image, it first divides the image into a plurality of first areas. The shape of the first areas may be a regular pattern such as a rectangle, triangle or pentagon, or an irregular pattern; the areas may be of equal or unequal size, divided according to actual requirements. When dividing the image, however, it is necessary to ensure that the image is divided completely, so as to improve the accuracy of the subsequent sharpness calculation and avoid leaving uncomputed regions.
A first calculation module 20 for calculating a luminance sum of each first region;
Specifically, after the image is divided into a plurality of first areas, the first calculation module 20 performs image brightness analysis on the first areas one by one and accumulates the luminance sum of each first area, calculated according to the height of the first region, the width of the first region and the gray values of the image.
A second image dividing module 30, configured to divide a plurality of second areas from all the first areas, where each second area includes at least a part of the first areas;
Specifically, the second image dividing module 30 divides all the first areas into a plurality of second areas, each second area comprising a plurality of first areas. For example, if the picture is divided into 9×9 rectangular first areas, then on the basis of the 81 first areas, 3×3 = 9 rectangular second areas are divided, each comprising 9 first areas. The second areas are divided on the basis of whole first areas: the outline of a second area is formed by a plurality of first areas, and no first area is split to form a new area. The second image dividing module 30 selects, as central first regions of the second regions, first regions surrounded on all sides by adjacent first regions, and divides the plurality of second regions from all the first regions according to the plurality of central first regions, each second region comprising at least part of the first regions.
A luminance-sum sorting module 40 for sorting the luminance sums of the respective first areas contained in each second area;
Specifically, the luminance-sum sorting module 40 sorts the luminance sums of the first regions contained in each second region and further analyses the brightness variation within local regions of the image, obtaining the maximum, minimum, sub-maximum and sub-minimum of the luminance sums in each second region.
A first sharpness calculation module 50 for calculating sharpness values in each of the second regions according to the sorting result;
Specifically, the first sharpness calculation module 50 calculates the sharpness value of each second area from the sorted luminance sums of the first areas it contains, computing the sharpness of the whole picture from these smaller units and improving the accuracy of the calculation. In each second region, a first luminance difference and a first luminance ratio are calculated from the maximum and the minimum, and a second luminance difference and a second luminance ratio from the sub-maximum and the sub-minimum: the first luminance difference is the maximum minus the minimum, the second luminance difference is the sub-maximum minus the sub-minimum, the first luminance ratio is the ratio of the maximum to the minimum, and the second luminance ratio is the ratio of the sub-maximum to the sub-minimum.
The second sharpness calculation module 60 is configured to add the sharpness values of all the second regions to obtain the image sharpness value of the image. A weighting value may further be introduced into the second sharpness calculation module 60 to reduce the error arising at the picture edges, where first areas are never established as central first areas, thereby improving the accuracy of the image sharpness value.
Based on the same technical conception, the application also provides an image processing device comprising the sharpness calculation apparatus for defocus-blurred images of the above embodiments.
It should be noted that the above embodiments can be freely combined as needed. The foregoing is merely a preferred embodiment of the present application; modifications and adaptations made by those skilled in the art without departing from the principles of the present application are intended to fall within its scope of protection.

Claims (10)

1. A sharpness calculation method for a defocus-blurred image, characterized by comprising the following steps:
dividing an image into a plurality of first areas;
calculating the luminance sum of each of the first areas;
dividing a plurality of second areas from all the first areas, wherein each of the second areas comprises at least a part of the first areas;
sorting the luminance sums of the respective first areas contained in each of the second areas;
calculating the sharpness value of each of the second areas according to the sorting result;
and adding the sharpness values of all the second areas to obtain the image sharpness value of the image.
2. The sharpness calculation method for a defocus-blurred image according to claim 1, characterized in that calculating the luminance sum of each of the first areas specifically comprises:
calculating the luminance sum of each of the first areas according to the height of the first area, the width of the first area and the gray values of the image.
3. The sharpness calculation method for a defocus-blurred image according to claim 1, characterized in that calculating the sharpness value of each of the second regions according to the sorting result specifically comprises:
after sorting, obtaining the maximum, minimum, sub-maximum and sub-minimum of the luminance sums of the first regions contained in each of the second regions;
in each of the second regions, calculating a first luminance difference and a first luminance ratio from the maximum and the minimum, and a second luminance difference and a second luminance ratio from the sub-maximum and the sub-minimum.
4. The sharpness calculation method for a defocus-blurred image according to claim 3, characterized in that
the first luminance difference is the maximum minus the minimum, and the second luminance difference is the sub-maximum minus the sub-minimum;
the first luminance ratio is the ratio of the maximum to the minimum, and the second luminance ratio is the ratio of the sub-maximum to the sub-minimum.
5. The sharpness calculation method for a defocus-blurred image according to claim 4, wherein the sharpness value of each second region is obtained by multiplying the first luminance difference, the second luminance difference, the first luminance ratio, and the second luminance ratio.
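Claims 3 to 5 together define how the four order statistics of a second region combine into its sharpness value. A minimal sketch, assuming an ascending sort and adding an epsilon guard (not in the claims) against a zero minimum:

```python
def second_region_score(luminance_sums) -> float:
    """Claims 3-5: score one second region from the sorted luminance sums
    of the first regions it contains. Requires at least two entries; the
    epsilon guard is an assumption, not part of the claims."""
    s = sorted(luminance_sums)
    mn, mn2 = s[0], s[1]            # minimum and second-smallest (claim 3)
    mx2, mx = s[-2], s[-1]          # second-largest and maximum (claim 3)
    eps = 1e-9
    diff1, diff2 = mx - mn, mx2 - mn2          # claim 4: differences
    ratio1 = mx / (mn + eps)                   # claim 4: ratios
    ratio2 = mx2 / (mn2 + eps)
    return diff1 * diff2 * ratio1 * ratio2     # claim 5: product
```

Using both the extreme pair and the second-extreme pair makes the score less sensitive to a single outlier region than a plain max-minus-min measure would be.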
6. The sharpness calculation method for a defocus-blurred image according to claim 1, wherein dividing a plurality of second regions from all the first regions specifically comprises:
selecting, among the first regions, the regions whose entire periphery is adjacent to other first regions as central first regions of the second regions; and dividing a plurality of second regions from all the first regions according to the plurality of central first regions, wherein each second region comprises at least some of the first regions.
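One plausible reading of claim 6 (an assumption, since the claim does not fix the neighbourhood) is that a first region is "central" when every surrounding grid position is occupied by another first region, i.e. it is an interior cell of the grid, and its second region is the 3x3 block around it:

```python
def central_region_indices(rows: int, cols: int):
    """Claim 6 sketch: grid indices of first regions whose entire periphery
    is adjacent to other first regions. On a rows x cols grid these are the
    interior cells; the 8-neighbour reading of 'periphery' is an assumption."""
    return [(r, c) for r in range(1, rows - 1) for c in range(1, cols - 1)]

def second_region_members(r: int, c: int):
    """The second region anchored at central first region (r, c): the
    central cell plus its eight neighbours (3x3 grouping, an assumption)."""
    return [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
```

Under this reading, adjacent second regions overlap, so each first region's luminance sum contributes to several second-region scores.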
7. The sharpness calculation method for a defocus-blurred image according to any one of claims 1 to 6, characterized by further comprising, after adding the sharpness values of the second regions:
selecting a weighting value and multiplying the weighting value by the sum of the sharpness values of the second regions to obtain the image sharpness value of the image.
8. The sharpness calculation method for a defocus-blurred image according to claim 7, wherein the first regions are rectangular, all the first regions have the same height, and all the first regions have the same width.
9. A sharpness calculation apparatus for a defocus-blurred image, characterized by comprising:
a first image division module for dividing an image into a plurality of first regions;
a first calculation module for calculating the luminance sum of each first region;
a second image division module for dividing a plurality of second regions from all the first regions, each second region comprising at least some of the first regions;
a luminance-sum sorting module for sorting the luminance sums of the first regions contained in each second region;
a first sharpness calculation module for calculating the sharpness value of each second region according to the sorting result; and
a second sharpness calculation module for adding the sharpness values of the second regions to obtain the image sharpness value of the image.
10. An image processing device, characterized by comprising the sharpness calculation apparatus for a defocus-blurred image according to claim 9.
CN202311463142.7A 2023-11-06 Definition computing method, device and equipment for defocus blurred image Active CN117252783B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311463142.7A CN117252783B (en) 2023-11-06 Definition computing method, device and equipment for defocus blurred image


Publications (2)

Publication Number Publication Date
CN117252783A true CN117252783A (en) 2023-12-19
CN117252783B CN117252783B (en) 2024-05-17


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102903073A (en) * 2012-10-09 2013-01-30 深圳市掌网立体时代视讯技术有限公司 Image definition calculating method and apparatus
WO2019210707A1 (en) * 2018-05-02 2019-11-07 杭州海康威视数字技术股份有限公司 Image sharpness evaluation method, device and electronic device
CN110458789A (en) * 2018-05-02 2019-11-15 杭州海康威视数字技术股份有限公司 A kind of image definition evaluating method, device and electronic equipment
CN111915523A (en) * 2020-08-04 2020-11-10 深圳蓝韵医学影像有限公司 Self-adaptive adjustment method and system for DR image brightness
CN115631171A (en) * 2022-10-28 2023-01-20 上海为旌科技有限公司 Picture definition evaluation method, system and storage medium

Similar Documents

Publication Publication Date Title
US7995108B2 (en) Image processing apparatus, image processing program, electronic camera, and image processing method for image analysis of magnification chromatic aberration
US8494256B2 (en) Image processing apparatus and method, learning apparatus and method, and program
US8295606B2 (en) Device and method for detecting shadow in image
CN109741356B (en) Sub-pixel edge detection method and system
US20100188584A1 (en) Depth calculating method for two dimensional video and apparatus thereof
US20020146167A1 (en) Region segmentation of color image
JP2005310123A (en) Apparatus for selecting image of specific scene, program therefor and recording medium with the program recorded thereon
US8086024B2 (en) Defect detection apparatus, defect detection method and computer program
US7602967B2 (en) Method of improving image quality
CN110443800B (en) Video image quality evaluation method
EP1933275A2 (en) Apparatus and method to improve clarity of image
US7386167B2 (en) Segmentation technique of an image
CN114820417A (en) Image anomaly detection method and device, terminal device and readable storage medium
CN110288560B (en) Image blur detection method and device
CN114820334A (en) Image restoration method and device, terminal equipment and readable storage medium
CN112037185A (en) Chromosome split phase image screening method and device and terminal equipment
CN110807406B (en) Foggy day detection method and device
US9235773B2 (en) Image processing device capable of determining types of images accurately
CN117252783B (en) Definition computing method, device and equipment for defocus blurred image
US20050078884A1 (en) Method and apparatus for interpolating a digital image
CN117252783A (en) Definition computing method, device and equipment for defocus blurred image
CN112017163A (en) Image blur degree detection method and device, electronic equipment and storage medium
CN111445435A (en) No-reference image quality evaluation method based on multi-block wavelet transform
CN116129195A (en) Image quality evaluation device, image quality evaluation method, electronic device, and storage medium
CN114820418A (en) Image exception handling method and device, terminal equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant