CN111784702B - Grading method for image segmentation quality - Google Patents

Grading method for image segmentation quality

Info

Publication number
CN111784702B
CN111784702B (application CN202010549866.3A)
Authority
CN
China
Prior art keywords
segmentation
result
pixel points
area
score
Prior art date
Legal status
Active
Application number
CN202010549866.3A
Other languages
Chinese (zh)
Other versions
CN111784702A (en)
Inventor
周品铎
杨欣悦
陆庭旺
史冉
Current Assignee
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN202010549866.3A priority Critical patent/CN111784702B/en
Publication of CN111784702A publication Critical patent/CN111784702A/en
Application granted granted Critical
Publication of CN111784702B publication Critical patent/CN111784702B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection

Abstract

The invention discloses a scoring method for image segmentation quality that automatically performs a series of visual evaluations on segmented images in a way that agrees with human subjective quality assessment. This both preserves the accuracy of the object segmentation evaluation algorithm and improves its efficiency, so that machine segmentation results can be assessed quickly and reliably. By adding objective scoring rules, the method assigns the machine segmentation result a score close to what a human would judge subjectively correct, and it is applicable to a wide range of image processing scenarios. At the same time, more appropriate functions and algorithms are adopted, addressing the case-by-case inconsistencies of previous methods.

Description

Grading method for image segmentation quality
Technical Field
The invention belongs to the field of image processing technology and particularly relates to a scoring method for image segmentation quality.
Background
Nowadays, image segmentation technology is applied in more and more scenarios and has become indispensable in everyday work and life. Different application scenarios have their own segmentation quality requirements. In some applications, such as image editing programs, object segmentation quality directly determines the visual quality the program can deliver, and hence whether users like it, which in turn directly affects the program's viability.
When an algorithm is used to segment an image, a recurring problem is that the segmentation result may be poor. The quality can be judged by eye, but such judgment is subjective, and the question is how to quantify the evaluation of a segmentation result. It is widely accepted that the most reliable way to assess object segmentation quality is subjective quality assessment; however, its feasibility is low because subjective evaluation is time-consuming and cumbersome. There is therefore a strong need for an index that automatically evaluates segmentation quality against the design target: after a picture has been segmented, the result should be evaluated by machine and assigned a score, so that the segmentation quality of the picture can be read off intuitively.
Objective object segmentation metrics generally fall into two categories: region-based metrics and edge-based metrics. For the former, two indicators are important. The Jaccard index, also known as the Jaccard similarity coefficient, compares similarity and diversity between finite sample sets and is interpreted here as a measure of similarity; the larger the Jaccard coefficient, the higher the similarity between the samples. The other indicator, the F1 score, is a statistical measure of the accuracy of a binary classification model; it considers both the precision and the recall of the classifier. The F1 score can be viewed as the harmonic mean of precision and recall, with a maximum of 1 and a minimum of 0; that is, it combines them with equal weight to assess the overall segmentation quality.
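By way of illustration only (the patent itself provides no code), the following Python sketch computes the Jaccard index and F1 score for a pair of boolean segmentation masks; the function name and the convention that True marks object pixels are assumptions made here:

    import numpy as np

    def jaccard_and_f1(machine: np.ndarray, reference: np.ndarray):
        """Jaccard index and F1 score for two boolean masks of equal shape (True = object)."""
        machine = machine.astype(bool)
        reference = reference.astype(bool)

        tp = np.logical_and(machine, reference).sum()    # correctly segmented object pixels
        fp = np.logical_and(machine, ~reference).sum()   # extra (false positive) pixels
        fn = np.logical_and(~machine, reference).sum()   # missing (false negative) pixels

        union = tp + fp + fn
        jaccard = tp / union if union else 1.0           # two empty masks count as identical
        f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
        return jaccard, f1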
For edge-based segmentation, several methods exist, such as point-based, line-based and edge-based detection. The Hausdorff distance is relevant here; it measures the degree of similarity between two sets of points. Csurka et al. introduce a tolerance band around the ground-truth boundary whose width is adapted to the image size; because errors within the tolerance band are negligible, its size can be adjusted. Finally, quality is evaluated with an F1 score computed on this edge-based segmentation metric.
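Likewise for the edge-based view, a minimal sketch of the symmetric Hausdorff distance between the object pixels of two masks, using SciPy's directed_hausdorff; extracting the point sets directly from the masks (rather than from traced boundaries) is a simplifying assumption, and both masks are assumed to be non-empty:

    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def hausdorff_distance(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
        """Symmetric Hausdorff distance between the object-pixel coordinates of two masks."""
        pts_a = np.argwhere(mask_a).astype(float)   # (row, col) coordinates of object pixels
        pts_b = np.argwhere(mask_b).astype(float)
        d_ab = directed_hausdorff(pts_a, pts_b)[0]
        d_ba = directed_hausdorff(pts_b, pts_a)[0]
        return max(d_ab, d_ba)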
Disclosure of Invention
The object of the invention is to provide a scoring method for image segmentation quality, solving the problem that conventional methods ignore certain special segmented images and therefore produce evaluations that do not agree with human subjective assessment.
The technical solution that achieves the purpose of the invention is as follows: a scoring method for image segmentation quality, comprising the following steps:
Step 1, segment the background and object of the image to be scored both manually and by machine to determine a manual segmentation result and a machine segmentation result, identify the white and black pixels in the manual segmentation result and the machine segmentation result, and obtain a base score P from the intersection and union of the pixels;
Step 2, determine the misclassified regions from the white pixels in the segmentation results, and obtain a bonus term point1 as the ratio of the number of misclassified pixels to the area of the ellipse circumscribing the misclassified regions;
Step 3, compare the intersection with the white pixels of the manual segmentation reference result and compute a penalty accordingly; two cases are distinguished, depending on whether the intersection is equal to or different from the number of white pixels in the reference result, and after comparing the machine segmentation results the penalty is computed and combined into a penalty term point2;
Step 4, compute the final total score, representing the quality of the machine segmentation result, from the base score, the bonus term and the penalty term.
Compared with the prior art, the invention has the following notable advantages:
(1) Considering that human subjective tolerance differs between excess and missing parts, a bonus term and a penalty term computed from the excess and missing parts are respectively added on top of the base score used in the prior art, i.e. the intersection-over-union ratio.
(2) For the bonus term, a circumscribed ellipse is constructed around the misclassified region, and the area of this ellipse represents the misclassified region, making the area estimate more accurate.
(3) For the penalty term, the influence of the missing part on the result is considered primarily, which better fits subjective perception.
(4) The method evaluates the target segmentation result in a way that matches human subjective perception while preserving the performance of the overall algorithm. Although traditional methods achieve reasonable results in research on evaluating segmented images, they ignore the differences between segmented images, and when the target segmentation result is incorrect their evaluation diverges from human subjective perception. By applying different evaluation criteria to different regions when processing the segmentation result, the present method handles the four possible conditions of a segmented image accurately, ensuring the accuracy of the evaluation result while avoiding new problems.
Drawings
Fig. 1 is a block diagram of a scoring method for image segmentation quality according to the present invention.
Fig. 2 is a flowchart of a scoring method for image segmentation quality according to the present invention.
FIG. 3 shows an example of scoring in an embodiment of the present invention, in which (a) is the original image, (b) is the reference result, and (c) is the machine segmentation result.
Detailed Description
When the segmentation result differs obviously from the original image, it is called an erroneous segmentation. Traditional methods can score non-erroneous segmentations accurately, but when evaluating erroneous segmentations they do not account for the error factors, so the evaluation score diverges from human subjective perception; that is, accurate scoring cannot be achieved.
With reference to fig. 1 and fig. 2, the method for scoring image segmentation quality according to the present invention includes the following steps:
Step 1, segment the background and object of the image to be scored both manually and by machine to determine a manual segmentation result and a machine segmentation result, identify the white and black pixels in the manual segmentation result and the machine segmentation result, and obtain a base score P from the intersection and union of the pixels, specifically as follows:
when the traditional method is used for evaluation, the evaluation result is inconsistent with the subjective feeling of human beings, namely when the target segmentation result is segmented incorrectly, the obtained evaluation result is different from the human feeling. Therefore, pixel point extraction calculation is adopted for the target segmentation result, different metering criteria are adopted for different error conditions of the segmentation result, and the error conditions are divided into FN and FP. FP is defined in the present invention as: the regions (pixel number) obtained by the machine segmentation and comparison with the reference result are the regions with wrong classification, and the regions originally belonging to the negative class are classified as the positive class. FN is defined in the present invention as: the regions (pixel number) lacking in the machine segmentation and comparison reference result are required to be obtained, namely the regions with classification errors, and the regions originally belonging to the positive class are classified into the negative class. Therefore, after judgment and comparison, different grading methods can be used for different segmentation results, and therefore an accurate evaluation result is obtained. An Imread function is introduced to traverse pixel points of a target segmentation result, and the condition of the target segmentation result is divided into three regions: a primary similarity region, an FN region and an FP region. The main similar area refers to a common area where a target segmentation result exists after pixel traversal, the FN area refers to an area where a part of pixels are lacked in a boundary area, and the FP area refers to an area where a part of pixels are added out from the boundary area. However, the following situations can exist when a general machine segments an image: the segmentation result is correct, the segmentation result has an FN region, the segmentation result has an FP region, and the segmentation result has both the FN region and the FP region, so that the method considers the situations comprehensively and assigns appropriate evaluation criteria to obtain an accurate evaluation result. Therefore, for the comparison of the main similar areas of the first part, the adopted method is a pixel point traversal method of the target segmentation result, namely, after pixel point traversal is carried out on the target segmentation, the intersection of the machine segmentation and the white pixel point of the reference result is obtained and defined as N; and simultaneously, the union of the machine segmentation and the white pixel point of the reference result can be obtained and defined as U. The evaluation result of the main similar area can be obtained through the ratio of the intersection to the union, so that the basic score P is obtained by the intersection ratio union: p is N/U. Since the machine segmentation has special differences, human vision can obviously sense the special differences of segmentation results, and since FN and FP are not considered, the primary similarity region method is not suitable for evaluating the segmentation condition of FN regions and FP regions, and therefore, the evaluation method required to be dealt with in the condition is continuously considered.
Step 2, determine the misclassified regions from the white pixels in the segmentation results, and obtain the bonus term point1 as the ratio of the number of misclassified pixels to the area of the ellipse circumscribing the misclassified regions, specifically as follows:
For the FN region, the main quantity is its pixel count, which we denote T; for the FP region, the main quantity is likewise its pixel count, denoted F. A larger value of T means a larger difference between the segmentation result and the reference result. The number of white pixels in the reference result can be obtained by traversal with the imread function and is recorded as whitet, so the FN value can be obtained as T = abs(N - whitet), i.e. the deviation between the intersection of the target segmentation with the reference result and the reference result itself. Likewise, the FP value can be calculated as F = abs(U - whitet), the deviation between the union of the target segmentation with the reference result and the reference result. Here we mainly aim to compensate for the low base score caused by the misclassified area: for a target segmentation containing an FN region, i.e. one lacking pixels, the base score obtained in step 1 is low. We define this compensation as the bonus method. Specifically, two quantities x and y are needed: the lengths of the horizontal and vertical extents of the part where the machine segmentation result is incorrect compared with the reference result. They are computed as follows: set a variable flag with an initial value of 0 and establish x and y coordinate axes on the image to be evaluated; traverse each pixel in turn and compare it with the reference result; if they are consistent, do nothing, and if not, store the pixel's coordinates in an array, incrementing flag by 1 whenever the x-coordinate is a value that has not appeared before; the value of flag at the end of the traversal is x, and y is obtained in the same way. An ellipse is then created with x and y as the lengths of its major and minor axes, representing the area of the misclassified FN and FP regions, giving the area S = π*x*y/4. The bonus term point1 is the ratio of the sum of the FP and FN pixel counts to this area: point1 = (F + T)/S.
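A minimal sketch of this bonus computation under the same boolean-mask assumption; taking x and y as the numbers of distinct columns and rows that contain any differing pixel is this sketch's reading of the flag-counting traversal described above, and the early return for identical masks is an added guard not stated in the patent:

    import numpy as np

    def bonus_point1(machine_mask: np.ndarray, reference_mask: np.ndarray) -> float:
        """Bonus term point1 = (F + T) / S, with S the area of the x-by-y circumscribed ellipse."""
        N = np.logical_and(machine_mask, reference_mask).sum()
        U = np.logical_or(machine_mask, reference_mask).sum()
        whitet = reference_mask.sum()                  # white pixels in the reference result
        T = abs(int(N) - int(whitet))                  # FN pixel count
        F = abs(int(U) - int(whitet))                  # FP pixel count

        diff = machine_mask != reference_mask          # all misclassified pixels
        if not diff.any():                             # identical masks: no error region (guard)
            return 0.0
        rows, cols = np.nonzero(diff)
        x = len(np.unique(cols))                       # horizontal extent of the error region
        y = len(np.unique(rows))                       # vertical extent of the error region
        S = np.pi * x * y / 4.0                        # area of the circumscribed ellipse
        return (F + T) / S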
Step 3, compare the intersection with the white pixels of the manual segmentation reference result and compute a penalty accordingly; two cases are distinguished, depending on whether the intersection is equal to or different from the number of white pixels in the reference result, and after comparing the machine segmentation results the penalty is computed and combined into the penalty term point2, specifically as follows:
for the FP region, unlike the FN region evaluation, this mainly considers the case where the deletion portion is subjectively less tolerant to deletion for subtraction processing. The FP evaluation method firstly judges whether an FN area exists or not, the intersection of white pixel points of machine segmentation and reference results is obtained and compared with the number of white pixel points of reference results, if the intersection is equal to the number of white pixel points of reference results, namely if N is whitet, the FN area is proved to be absent, if no classification error area FN exists, point2 is defined to be equal to the ratio of the point number of the FP area pixel point to the union of the pixel points: point2 ═ F/U. If N ≠ whitet, FN region is demonstrated, then point2 ≠ T/N. The point2 thus obtained is the subtraction value.
Step 4, compute the final total score, representing the quality of the machine segmentation result, from the base score, the bonus term and the penalty term, specifically as follows:
For the final evaluation score, the FP-related result is handled by addition and the FN-related result by subtraction, and the final result is obtained by adding and subtracting on the basis of the base score. Different target segmentation results are first judged to distinguish their types of error, different evaluation methods are applied to the different error conditions, and once the evaluations of the FP and FN regions have been obtained they are combined to give the final segmentation quality score of the target segmentation result. The final score is therefore count = P + point1 - point2; since this value is not guaranteed to lie in the interval [0, 1], the normalized result is count = P + ((1 - P) * point1) - (P * point2). The value of count represents the score of the target segmentation result.
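The normalization of the total score then reduces to a one-line helper, shown here only for completeness:

    def final_score(P: float, point1: float, point2: float) -> float:
        """Normalized total score count = P + (1 - P) * point1 - P * point2."""
        return P + (1.0 - P) * point1 - P * point2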
Example 1
Step 1, import the manual segmentation reference result and the machine segmentation result of the image to be evaluated, identify the white and black pixels in the manual segmentation reference result and the machine segmentation result, and obtain the base score P from the intersection and union of the pixels, specifically as follows:
Step 1-1, traverse the black and white pixels of the manual segmentation reference result and of the machine segmentation result using the imread function in MATLAB; white pixels carry the label of pixels segmented as object, and black pixels carry the label of pixels segmented as background;
Step 1-2, compute the intersection N of the white pixels in the machine segmentation result and the white pixels in the manual segmentation reference result;
Step 1-3, compute the union U of the white pixels in the machine segmentation result and the white pixels in the manual segmentation reference result;
Step 1-4, the base score P is the ratio of the intersection to the union, namely:
P=N/U (1)
Step 2, determine the misclassified regions from the white pixels in the segmentation results, and obtain the bonus term point1 as the ratio of the number of misclassified pixels to the area of the ellipse circumscribing the misclassified regions, specifically as follows:
Step 2-1, obtain the number of white pixels in the manual segmentation reference result by traversal with the imread function and record it as whitet;
Step 2-2, determine the region where the machine segmentation result exceeds the manual segmentation reference result; denote this deviation between the union and the correct (reference) area as FP, and the pixel count of the FP region as F:
F=abs(U-whitet) (2)
Step 2-3, determine the region that is missing from the machine segmentation result compared with the manual segmentation reference result; denote this deviation between the intersection and the correct (reference) area as FN, and the pixel count of the FN region as T:
T=abs(N-whitet) (3)
Step 2-4, x and y are respectively the horizontal and vertical extents of the part where the machine segmentation result differs from the manual segmentation reference result, computed as follows:
Set a variable flag with an initial value of 0 and establish x and y coordinate axes on the image to be evaluated. Traverse each pixel in turn and compare it with the manual segmentation reference result; if the two pixels are consistent, i.e. both black or both white, do nothing; if they differ in colour, store the pixel's coordinates in an array, and whenever the x-coordinate is a value that has not appeared before, increment flag by 1; the value of flag at the end of the traversal is x. Obtain y in the same way.
An ellipse is created with the lengths x and y as its major and minor axes respectively, and the area of this ellipse represents the sum S of the areas of the FN and FP regions:
S=π*x*y/4 (4)
Step 2-5, the bonus term point1 is the ratio of the sum of the FP and FN region pixel counts to S:
point1=(F+T)/S (5)
Step 3, considering that subjective perception is less tolerant of missing parts of the segmentation result, compare the intersection with the number of white pixels in the reference result in order to compute the penalty; two cases are distinguished, the intersection being equal to or different from the number of white pixels in the reference result, and after comparing the machine segmentation results the penalty is computed and combined into the penalty term, specifically as follows:
Step 3-1, when N ≠ whitet, a misclassified region FN exists; go to step 3-3;
Step 3-2, when N = whitet, no misclassified region FN exists; go to step 3-4;
Step 3-3, when a misclassified region FN exists, the penalty term point2 is the ratio of the FN-region pixel count to the intersection:
point2=T/N (6)
Step 3-4, when no misclassified region FN exists, the penalty term point2 is the ratio of the FP-region pixel count to the union:
point2=F/U (7)
Step 4, compute the final total score from the base score, the bonus term and the penalty term; this score represents the quality of the machine segmentation result, specifically as follows:
Step 4-1, obtain the total score from the base score, the bonus term and the penalty term:
count=P+point1-point2 (8)
Step 4-2, because this score is not guaranteed to lie in the interval [0, 1], it is instead computed as:
count=P+((1-P)*point1)-(P*point2) (9)
After the base score, the bonus term and the penalty term are computed, the final score is 0.2333 (out of a full score of 1). As shown in Fig. 3, the method recognizes the discrepancy of this example machine segmentation result, namely that a large segmentation error is present.
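Putting the steps of Example 1 together, a hypothetical end-to-end sketch might read as follows; the file paths are placeholders, the Pillow-based loading stands in for MATLAB's imread, and the guards for degenerate cases are additions not found in the patent (the 0.2333 above comes from the patent's own example and is not claimed to be reproduced by this sketch):

    import numpy as np
    from PIL import Image

    def segmentation_quality_score(machine_path: str, reference_path: str) -> float:
        """Score a machine segmentation against a manual reference, following steps 1-4."""
        load = lambda p: np.asarray(Image.open(p).convert("L")) > 127
        machine, reference = load(machine_path), load(reference_path)

        # Step 1: base score P = N / U
        N = np.logical_and(machine, reference).sum()
        U = np.logical_or(machine, reference).sum()
        P = N / U if U else 1.0

        # Step 2: bonus point1 = (F + T) / S
        whitet = reference.sum()
        T = abs(int(N) - int(whitet))                 # FN pixel count (eq. 3)
        F = abs(int(U) - int(whitet))                 # FP pixel count (eq. 2)
        diff = machine != reference
        if diff.any():
            rows, cols = np.nonzero(diff)
            x, y = len(np.unique(cols)), len(np.unique(rows))
            S = np.pi * x * y / 4.0                   # circumscribed ellipse area (eq. 4)
            point1 = (F + T) / S                      # eq. 5
        else:
            point1 = 0.0                              # identical masks: no error region (guard)

        # Step 3: penalty point2 (eq. 6 or eq. 7)
        point2 = (T / N if N else 1.0) if N != whitet else (F / U if U else 0.0)

        # Step 4: normalized total score (eq. 9)
        return P + (1.0 - P) * point1 - P * point2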
In summary, the scoring method for image segmentation quality provided by the invention can effectively score a data set of machine segmentation results in a way that matches human subjective cognition. Its innovations are the introduction of bonus and penalty terms on top of the basic pixel comparison and the use of different evaluation schemes for different target segmentation results. Compared with traditional methods, it preserves the accuracy of the evaluation score and also greatly strengthens the accurate scoring of incorrect segmentation results.

Claims (2)

1. A scoring method for image segmentation quality, characterized by comprising the following steps:
Step 1, segment the background and object of the image to be scored both manually and by machine to determine a manual segmentation result and a machine segmentation result, identify the white and black pixels in the manual segmentation result and the machine segmentation result, and obtain a base score P from the intersection and union of the pixels;
Step 2, determine the misclassified regions from the white pixels in the segmentation results, and obtain a bonus term point1 as the ratio of the number of misclassified pixels to the area of the ellipse circumscribing the misclassified regions;
Step 3, compare the intersection with the white pixels of the manual segmentation reference result and compute a penalty accordingly; two cases are distinguished, depending on whether the intersection is equal to or different from the number of white pixels in the reference result, and after comparing the machine segmentation results the penalty is computed and combined into a penalty term point2;
Step 4, compute the final total score, representing the quality of the machine segmentation result, from the base score, the bonus term and the penalty term;
The step 1 is specifically as follows:
Step 1-1, traverse the black and white pixels of the manual segmentation reference result and of the machine segmentation result using the imread function in MATLAB; white pixels carry the label of pixels segmented as object, and black pixels carry the label of pixels segmented as background;
Step 1-2, compute the intersection N of the white pixels in the machine segmentation result and the white pixels in the manual segmentation reference result;
Step 1-3, compute the union U of the white pixels in the machine segmentation result and the white pixels in the manual segmentation reference result;
Step 1-4, the base score P is the ratio of the intersection to the union, namely:
P=N/U (1)
The step 2 is specifically as follows:
Step 2-1, obtain the number of white pixels in the manual segmentation reference result by traversal with the imread function and record it as whitet;
Step 2-2, determine the region where the machine segmentation result exceeds the manual segmentation reference result; denote this deviation between the union and the correct (reference) area as FP, and the pixel count of the FP region as F:
F=abs(U-whitet) (2)
Step 2-3, determine the region that is missing from the machine segmentation result compared with the manual segmentation reference result; denote this deviation between the intersection and the correct (reference) area as FN, and the pixel count of the FN region as T:
T=abs(N-whitet) (3)
Step 2-4, x and y are respectively the horizontal and vertical extents of the part where the machine segmentation result differs from the manual segmentation reference result, computed as follows:
Set a variable flag with an initial value of 0 and establish x and y coordinate axes on the image to be evaluated. Traverse each pixel in turn and compare it with the manual segmentation reference result; if the two pixels are consistent, i.e. both black or both white, do nothing; if they differ in colour, store the pixel's coordinates in an array, and whenever the x-coordinate is a value that has not appeared before, increment flag by 1; the value of flag at the end of the traversal is x. Obtain y in the same way.
An ellipse is created with the lengths x and y as its major and minor axes respectively, and the area of this ellipse represents the sum S of the areas of the FN and FP regions:
S=π*x*y/4 (4)
Step 2-5, the bonus term point1 is the ratio of the sum of the FP and FN region pixel counts to S:
point1=(F+T)/S (5)
The step 3 is specifically as follows:
Step 3-1, when N ≠ whitet, a misclassified region FN exists; go to step 3-3;
Step 3-2, when N = whitet, no misclassified region FN exists; go to step 3-4;
Step 3-3, when a misclassified region FN exists, the penalty term point2 is the ratio of the FN-region pixel count to the intersection:
point2=T/N (6)
Step 3-4, when no misclassified region FN exists, the penalty term point2 is the ratio of the FP-region pixel count to the union:
point2=F/U (7).
2. A scoring method for image segmentation quality according to claim 1, characterized in that the step 4 is specifically as follows:
Step 4-1, obtain the total score from the base score, the bonus term and the penalty term:
count=P+point1-point2 (8)
Step 4-2, because this score is not guaranteed to lie in the interval [0, 1], it is instead computed as:
count=P+((1-P)*point1)-(P*point2) (9).
CN202010549866.3A 2020-06-16 2020-06-16 Grading method for image segmentation quality Active CN111784702B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010549866.3A CN111784702B (en) 2020-06-16 2020-06-16 Grading method for image segmentation quality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010549866.3A CN111784702B (en) 2020-06-16 2020-06-16 Grading method for image segmentation quality

Publications (2)

Publication Number Publication Date
CN111784702A CN111784702A (en) 2020-10-16
CN111784702B true CN111784702B (en) 2022-09-27

Family

ID=72755936

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010549866.3A Active CN111784702B (en) 2020-06-16 2020-06-16 Grading method for image segmentation quality

Country Status (1)

Country Link
CN (1) CN111784702B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112785618B (en) * 2020-11-16 2022-10-21 南京理工大学 Object segmentation visual quality scoring method based on pixel certainty degree

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104794714A (en) * 2015-04-18 2015-07-22 吉林大学 Image segmentation quality evaluating method based on ROC Graph
CN106529429A (en) * 2016-10-27 2017-03-22 中国计量大学 Image recognition-based facial skin analysis system
CN108109147A (en) * 2018-02-10 2018-06-01 北京航空航天大学 A kind of reference-free quality evaluation method of blurred picture
CN109829924A (en) * 2019-01-18 2019-05-31 武汉大学 A kind of image quality evaluating method based on body feature analysis

Also Published As

Publication number Publication date
CN111784702A (en) 2020-10-16

Similar Documents

Publication Publication Date Title
CN115082683B (en) Injection molding defect detection method based on image processing
CN115082467B (en) Building material welding surface defect detection method based on computer vision
WO2021143343A1 (en) Method and device for testing product quality
US8712112B2 (en) Dot templates for object detection in images
JP4515332B2 (en) Image processing apparatus and target area tracking program
US11373309B2 (en) Image analysis in pathology
CN104794714B (en) Image segmentation quality evaluating method based on ROC Graph
CN114897773B (en) Method and system for detecting distorted wood based on image processing
CN115082466B (en) PCB surface welding spot defect detection method and system
CN111709914B (en) Non-reference image quality evaluation method based on HVS characteristics
CN111507411B (en) Image comparison method and system
CN114140669B (en) Welding defect recognition model training method and device and computer terminal
CN117237646B (en) PET high-temperature flame-retardant adhesive tape flaw extraction method and system based on image segmentation
CN111784702B (en) Grading method for image segmentation quality
WO2023273296A1 (en) Vehicle image segmentation quality evaluation method and apparatus, device, and storage medium
CN112825120A (en) Face illumination evaluation method and device, computer readable storage medium and equipment
CN114677670B (en) Method for automatically identifying and positioning identity card tampering
CN109961413B (en) Image defogging iterative algorithm for optimized estimation of atmospheric light direction
CN111753722B (en) Fingerprint identification method and device based on feature point type
CN111210452B (en) Certificate photo portrait segmentation method based on graph segmentation and mean shift
CN114548250A (en) Mobile phone appearance detection method and device based on data analysis
CN110264477B (en) Image segmentation evaluation method based on tree structure
CN113223098A (en) Preprocessing optimization method for image color classification
CN112967292A (en) Automatic cutout and scoring method and system for E-commerce products
CN117152084A (en) Engine design evaluation method based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant