CN115357517B - Evaluation method for development superiority and inferiority of machine vision underlying algorithm - Google Patents
- Publication number
- CN115357517B (application CN202211283108.7A)
- Authority
- CN
- China
- Prior art keywords
- algorithm
- test
- score
- developed
- standard
- Legal status: Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Preventing errors by testing or debugging software
- G06F11/3668—Software testing
- G06F11/3672—Test management
- G06F11/3684—Test management for test design, e.g. generating new test cases
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Preventing errors by testing or debugging software
- G06F11/3668—Software testing
- G06F11/3672—Test management
- G06F11/3688—Test management for test execution, e.g. scheduling of test suites
Abstract
The invention relates to a method for evaluating the quality of machine vision underlying algorithm development, comprising the following steps: S1, selecting a standard comparison library; S2, recording the test environment and preparing a test report; S3, classifying each algorithm into a specific category according to its input and output characteristics; S4, calculating an effect test score score_r; S5, obtaining an efficiency test score score_e; S6, calculating the total score of the developed underlying algorithm to be evaluated from the effect test score score_r and the efficiency test score score_e; and S7, judging the quality of the developed underlying algorithm from the total score obtained in step S6. The invention gives developed underlying algorithms a unified evaluation standard and provides a powerful tool for testing, managing and accepting underlying-algorithm development projects, thereby reducing both the testing risk and the acceptance and management costs of such projects.
Description
Technical Field
The invention relates to the technical field of visual inspection algorithms, in particular to an evaluation method for the development superiority and inferiority of a machine vision underlying algorithm.
Background
In the machine vision industry, an enterprise that needs fully autonomous algorithms, independent of any third-party library, must develop an algorithm tool set consisting of a series of underlying algorithms, and each developed underlying algorithm must pass testing before going online. The test strategies currently known in the industry are:
1. Field testing: the developed algorithm tool set is applied directly to a real project, using the project's actual results as the reference; testing, feedback and modification are carried out on the project site until the application requirements are finally met;
2. Developer self-testing: the algorithm developers test their own work, devise their own test strategy, and are responsible for any problems the algorithm shows in practical application after passing the test;
3. Tester testing: the testing model of the internet industry is copied, with a dedicated development department and a testing department; the testing department formulates test cases and tests the algorithms developed by the development department against those cases.
However, the above three test strategies have the following problems:
(1) Field testing: the time and labor cost of algorithm testing, feedback and modification are borne by the project, increasing the risk of delay or even failure. Moreover, different machine vision projects have different application requirements: not every algorithm in the tool set is necessarily used, and the algorithms that are used are stressed differently, so the test result is one-sided. Passing the test in one project does not guarantee that the underlying algorithm tool set suits other projects, and every project must invest additional testing, feedback and modification time as well as development and testing labor;
(2) Developer self-testing: different developers use different testing methods and different evaluation standards for algorithm effect and execution efficiency. An underlying algorithm tool set is usually developed by a team of several algorithm engineers, so algorithms within the same tool set may be tested by several different methods against several different standards; their quality then varies, and many problems surface once they are combined into a single tool set. If algorithm developers leave, the algorithms they developed become hard to maintain, precisely because each engineer applied different testing methods and standards;
(3) Testing by dedicated testers: the internet industry's method is end-to-end testing, i.e. designing test cases that simulate real conditions and judging only by the final result. That model suits customer-facing application products, but a tool-type product such as an underlying algorithm tool set requires a specific test for every algorithm: the real application is an application-level algorithm composed of several underlying algorithms, so designing cases from real application conditions does not match the characteristics of a tool-type product (which needs not only end-to-end testing but also independent testing of each module). Because requirements in machine vision applications are variable and uncertain, no set of test cases can achieve complete coverage, so the tool-type product cannot be guaranteed to satisfy every application condition. In addition, algorithm testing requires corresponding theoretical knowledge, which ordinary testers rarely possess; even if testers with such knowledge can be found or trained, labor costs rise considerably.
In short, none of these methods is designed around the theoretical characteristics of the algorithms, so test coverage is insufficient, the judgment standards are inaccurate, and the probability of algorithm problems in project applications is high.
At the same time, machine vision currently lacks a universal testing method capable of testing all machine vision algorithms, because:
1. The underlying principles of machine vision algorithms are diverse; each specific algorithm has its own mathematical and logical principle, and different principles demand different testing methods;
2. The inputs and outputs of machine vision algorithms differ. For example, of two algorithms, one may output an image and the other a number; the outputs are not the same kind of object at all, are not comparable, and the two algorithms cannot by nature be tested with the same formula or method;
3. The factors influencing execution efficiency differ. For a global binarization algorithm, the only factor is the image size; for a Fourier transform algorithm, besides the image size, it also matters whether the image dimensions are prime numbers; for a polar coordinate transformation algorithm, efficiency depends only on the inner and outer circle radii among the input parameters.
Therefore, because the factors influencing execution efficiency are not identical, test cases designed around only some of these factors cannot satisfy the test requirements of all three cases above at once; the more algorithm types there are, the more efficiency factors there are, so a single method meeting the test requirements of all algorithms is hard to find. Meanwhile, test results of different algorithms are not comparable: for some measurement algorithms an error within 5 pixels is acceptable, while for others an error above 1 pixel is already very poor; likewise, some algorithms are fast if they finish within 50 ms, while others are slow if they take more than 10 ms. It is therefore difficult to find a test method that normalizes the results of all algorithms into a single score, such that a simple score threshold determines whether a tested algorithm passes.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: to provide a method for evaluating the quality of machine vision underlying algorithm development, addressing the lack of a unified testing method and evaluation standard for assessing or accepting algorithm development when an underlying-algorithm development project involves many algorithm varieties in large numbers and is fully autonomous, i.e. uses no third-party algorithm library.
The technical scheme adopted by the invention to solve this problem is as follows: a method for evaluating the quality of machine vision underlying algorithm development comprises the following steps:
s1, selecting a standard comparison library for comparison with the developed underlying algorithm to be evaluated;
s2, recording the test environment and producing a test report;
s3, classifying the developed underlying algorithm to be evaluated into a specific category according to its input and output characteristics;
s4, performing an effect comparison test on the developed underlying algorithm to be evaluated and calculating an effect test score score_r;
s5, scaling any one test image N times with an image scaling algorithm to obtain N images of different sizes; executing the developed underlying algorithm to be evaluated and the corresponding algorithm in the standard comparison library M times on each of the N images; and separately recording the time consumed by the M executions of each, thereby obtaining an efficiency test score score_e;
s6, calculating the total score of the developed underlying algorithm to be evaluated from the effect test score score_r and the efficiency test score score_e as score = α · score_r + (1 - α) · score_e, where α is a weight coefficient;
and s7, judging the quality of the developed underlying algorithm from the total score obtained in step S6.
Further, in step S1 of the present invention, the developed underlying algorithm to be evaluated has a corresponding relationship with the algorithm in the standard comparison library, and all parameters of the developed underlying algorithm to be evaluated and the algorithm in the standard comparison library are consistent.
Furthermore, in step S2 of the present invention, the test report includes a test environment and a test result; the test environment comprises CPU information, memory information, GPU information and standard comparison library information; the test result comprises an effect score, an efficiency score and a total score of the developed underlying algorithm to be evaluated.
Further, in step S3 of the present invention, the corresponding categories include: image processing, segmentation, feature extraction, positioning, and measurement.
Furthermore, in step S4 of the present invention, the effect comparison test compares, for the same input, the output of the developed underlying algorithm to be evaluated with the output of the corresponding algorithm in the standard comparison library; for each category classified in step S3, the effect test score score_r is obtained with the corresponding scoring function.
Still further, the scoring function of the image processing class according to the present invention is:
score_r = min(I(index_i, index_j), I'(index_i, index_j)) / max(I(index_i, index_j), I'(index_i, index_j)),
where (index_i, index_j) is the position of the maximum of abs(I(i, j) - I'(i, j)), and I(i, j) and I'(i, j) represent the pixel grey value at row i, column j of the result image obtained by the developed algorithm and of the result image obtained by the standard library, respectively;
the scoring function of the segmentation class is as follows:
score_r = (1/n) · Σ_{i=1..n} |R_i ∩ R_i'| / |R_i ∪ R_i'|,
wherein R_i refers to a connected domain in the output region of the developed algorithm;
R_i' refers to the corresponding connected domain in the output region of the standard-library algorithm;
n is the number of connected domains in the output region of the standard-library algorithm;
the scoring function of the feature extraction class is as follows:
score_r = 1 - mse, with mse = (1/n) · Σ_{i=1..n} (v_i - v_i')²,
wherein v_i refers to the value of the i-th output dimension of the developed algorithm;
v_i' refers to the value of the i-th output dimension of the standard-library algorithm;
n refers to the output vector dimension;
the scoring function of the positioning class is as follows:
score_r = 1 - (1/N) · Σ_{i=1..N} pro_mse_i,
wherein x, y, z and x', y', z' are the coordinates of the positioning results of the developed algorithm and of the standard-library algorithm; n is the dimension of the image; E is the positioning tolerance error; and confidence and confidence' respectively refer to the confidences of the positioning results of the developed algorithm and of the standard-library algorithm;
the scoring function of the measurement class is as follows:
score_r = 1 - min(abs(m - m') / F, 1),
wherein m and m' are respectively the measurement results of the developed algorithm and of the standard-library algorithm, and F is the measurement tolerance error.
Further, in step S5 of the present invention, any one of the test images is scaled 18 times, and the sizes of the 18 obtained images are: 256x256, 512x512, 1280x960, 960x1280, 1277x953, 953x1277, 1920x1080, 1080x1920, 1913x1069, 1069x1913, 4000x4000, 8000x4000, 4000x8000, 8000x8000, 12000x12000, 8000x16000, 16000x8000 and 16000x16000.
The invention has the advantage of overcoming the defects in the background art: with this method, developed underlying algorithms share a unified judgment standard, and a powerful tool is provided for the testing, management and acceptance of underlying-algorithm development projects, thereby reducing both the testing risk and the acceptance and management costs of such projects.
Drawings
FIG. 1 is an overall evaluation flow diagram of the present invention;
fig. 2 is a schematic flow chart of the algorithm effect scoring method of the present invention.
Detailed Description
The invention will now be described in further detail with reference to the drawings and preferred embodiments. These drawings are simplified schematic views illustrating only the basic structure of the present invention in a schematic manner, and thus show only the constitution related to the present invention.
The method for evaluating the development superiority and inferiority of the machine vision underlying algorithm shown in fig. 1-2 comprises the steps of setting or selecting a standard comparison library, comparing the developed algorithm with the standard library according to a certain strategy, substituting a comparison result into a scoring formula to obtain a final score, and evaluating the superiority and inferiority of the developed algorithm according to the grade or the level of the score, so as to form a unified evaluation standard.
The method comprises the following specific steps:
s1, selecting a standard comparison library
Mature, authoritative algorithm software or algorithm library products in the machine vision industry, such as halcon, matrix and opencv, are selected as the standard library against which the underlying algorithm library to be developed is compared;
each algorithm to be developed is put in correspondence with an algorithm in the standard library; conventional underlying algorithms are generally present in the standard library and can be compared directly, while for underlying algorithms to be developed that are absent from the standard library, standard-library algorithms can be combined into a corresponding algorithm;
if the corresponding standard-library algorithm has an associated routine image, that image is selected as the test image; if not, a reasonably representative image from the project is selected; the test image referred to in the following steps is this image;
in the following steps, all parameters of the developed algorithm are kept consistent with those of the corresponding standard-library algorithm; if the standard-library algorithm provides a default value, the default is used, and otherwise a value commonly used for that algorithm in projects is used.
S2, recording the test environment and making a test report
One test report is output per algorithm: after each algorithm is tested it outputs a test report, and each report is divided into two columns, a test environment column and a test result column;
(1) Test environment fence
The testing environment of the algorithm is recorded and filled into the corresponding position of the test report,
wherein, the test environment column includes:
cpu information, i.e. the brand model of cpu at the time of test;
memory information, i.e. the size of the computer memory under test;
GPU information, namely the brand model of the GPU during testing;
a standard comparison library, namely the name and version of the standard library mentioned in the step S1;
(2) Test result column
The test result column is filled in with the various scores once the algorithm has been tested, and comprises:
the effect score is used for recording the effect test score of the algorithm, and the specific description of the effect test score is shown in the step S4;
the efficiency score is used for recording the efficiency test score of the algorithm, and the specific description of the efficiency test score is shown in the step S5;
the total score is used for recording the total score obtained by calculating the effect test score and the efficiency test score, and the calculation method of the total score is detailed in the step S6;
the specific tests are shown in the following table:
s3, algorithm classification
Each algorithm to be developed is classified into a specific category according to its input and output characteristics, summarized as follows:
Category | Input | Output
image processing | image | image
segmentation | image | region
feature extraction | image or region | feature value or feature vector
positioning | image or region | coordinates and confidence
measurement | image | measured value
The "region" above denotes a segmented point set, such as a connected domain or a contour.
The classification conditions were as follows:
1. image processing class
The input and output are both images; this class includes filtering algorithms such as Gaussian filtering, enhancement algorithms such as image sharpening, and image transformation algorithms such as the Fourier transform, and generally corresponds to the preprocessing algorithms of machine vision applications;
2. segmentation class
The input is an image and the output is a segmented region; this class includes the various binarization algorithms, image grey-level segmentation algorithms and boundary extraction algorithms;
3. class of feature extraction
The input is an image or a region and the output is a feature value or feature vector; this class includes the various grey-level and shape feature calculation algorithms;
4. location class
The input is an image or a region and the output is a matching result, i.e. the coordinates and confidence obtained by matching (the coordinates are a 2- or 3-dimensional vector, the confidence a numerical value); this class includes the various template matching algorithms;
5. measurement class
The input is an image and the output is a measured value; this class includes the various 2D and 3D size measurement algorithms;
s4, calculating the score of the effect test
The effect test compares, for the same input, the output of the developed algorithm with the output of the corresponding standard-library algorithm; the effect test score score_r is obtained by applying the scoring function (formula) corresponding to each algorithm category above. As shown in fig. 2, the effect test score formulas differ between categories, specifically as follows:
(1) Image processing class - pro_max_abs_diff
The formula is as follows:
index_i, index_j = index(max(abs(I(i, j) - I'(i, j)))),
score_r = min(I(index_i, index_j), I'(index_i, index_j)) / max(I(index_i, index_j), I'(index_i, index_j)),
wherein:
I(i, j) and I'(i, j) respectively represent the pixel grey value at row i, column j of the result image obtained by the developed algorithm and of the result image obtained by the standard library,
abs represents the absolute value operation,
max represents taking the maximum value,
index(max(...)) represents the values of i, j at which the maximum is reached, i.e. index_i, index_j,
min represents taking the minimum value;
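A minimal sketch of this scoring function in Python. Since the patent publishes the formula only as an image, the min/max grey-ratio at the worst-differing pixel used below is a reconstruction from the listed operations (abs, max, index, min), not the confirmed formula:

```python
import numpy as np

def pro_max_abs_diff(img_dev, img_std):
    # Locate the pixel where the developed-algorithm result and the
    # standard-library result differ most in grey value (index of max abs diff).
    diff = np.abs(img_dev.astype(np.int64) - img_std.astype(np.int64))
    i, j = np.unravel_index(np.argmax(diff), diff.shape)
    lo = min(int(img_dev[i, j]), int(img_std[i, j]))
    hi = max(int(img_dev[i, j]), int(img_std[i, j]))
    # Two identical all-black pixels agree perfectly.
    return 1.0 if hi == 0 else lo / hi
```

Identical images score 1.0; the score drops toward 0 as the worst pixel pair diverges.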
(2) Segmentation class - miou
The formula is as follows:
score_r = (1/n) · Σ_{i=1..n} |R_i ∩ R_i'| / |R_i ∪ R_i'|,
wherein:
R_i refers to a connected domain in the output region of the developed algorithm;
R_i' refers to the corresponding connected domain in the output region of the standard-library algorithm;
n is the number of connected domains in the output region of the standard-library algorithm;
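The mean-IoU score can be sketched directly; representing each connected domain as a boolean mask is an implementation choice here, not something the text mandates:

```python
import numpy as np

def miou(regions_dev, regions_std):
    # score_r: mean intersection-over-union between paired connected domains;
    # n is the number of connected domains in the standard-library output.
    n = len(regions_std)
    total = 0.0
    for r_dev, r_std in zip(regions_dev, regions_std):
        inter = np.logical_and(r_dev, r_std).sum()
        union = np.logical_or(r_dev, r_std).sum()
        total += inter / union if union else 1.0  # two empty regions agree
    return total / n
```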
(3) Feature extraction class - reverse_mse
The formula is as follows:
score_r = 1 - mse, with mse = (1/n) · Σ_{i=1..n} (v_i - v_i')²,
wherein:
v_i refers to the value of the i-th dimension output by the developed algorithm;
v_i' refers to the value of the i-th dimension output by the standard-library algorithm;
n refers to the output vector dimension; when the output is a single numerical value, n = 1;
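In code, reverse_mse is a short reduction over the two output vectors:

```python
def reverse_mse(v_dev, v_std):
    # score_r = 1 - mse, where mse averages the squared per-dimension
    # differences between the two output vectors (n = 1 for scalar outputs).
    n = len(v_std)
    mse = sum((a - b) ** 2 for a, b in zip(v_dev, v_std)) / n
    return 1 - mse
```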
(4) Positioning class - mean_pro_mse
The formula is as follows:
score_r = 1 - (1/N) · Σ_{i=1..N} pro_mse_i,
wherein:
x, y, z and x', y', z' are the coordinates of the positioning results of the developed algorithm and of the standard-library algorithm; when a two-dimensional positioning task is processed, z and z' are 0 and n = 2; when a three-dimensional positioning task is processed, n = 3;
E is a positioning tolerance error, set manually; the default value in the invention is 5;
confidence and confidence' respectively refer to the confidences of the positioning results of the developed algorithm and of the standard-library algorithm;
N = max(n1, n2), where n1 is the number of positioning results obtained by the developed algorithm and n2 the number obtained by the standard-library algorithm;
pro_mse_i is the pro_mse value of the i-th positioning result, computed from the coordinate errors relative to E and the confidence difference according to the pro_mse formula;
if n1 ≠ n2, then pro_mse_i = 1 for i > min(n1, n2);
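Because the per-result pro_mse formula is not reproduced in the text, the version below is a hypothetical sketch: it blends the squared coordinate error (scaled by the tolerance E) with the squared confidence difference, and treats unmatched results as the maximum error of 1, as the text specifies:

```python
def pro_mse(p_dev, p_std, conf_dev, conf_std, E=5.0, n=2):
    # Hypothetical per-result error: coordinate mse relative to tolerance E,
    # averaged with the squared confidence difference, capped at 1.
    coord = sum((a - b) ** 2 for a, b in zip(p_dev, p_std)) / (n * E * E)
    conf = (conf_dev - conf_std) ** 2
    return min((coord + conf) / 2, 1.0)

def mean_pro_mse(results_dev, results_std, E=5.0, n=2):
    # score_r = 1 - mean over N = max(n1, n2) results; per the text,
    # results beyond min(n1, n2) contribute the maximum error 1.
    N = max(len(results_dev), len(results_std))
    matched = min(len(results_dev), len(results_std))
    errs = [pro_mse(results_dev[i][0], results_std[i][0],
                    results_dev[i][1], results_std[i][1], E, n)
            for i in range(matched)]
    errs += [1.0] * (N - matched)
    return 1 - sum(errs) / N
```

Each result is passed as a (coordinates, confidence) pair; a missing match drags the score down by 1/N.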
(5) Measurement class - pro_accuracy
The specific formula is as follows:
score_r = 1 - min(abs(m - m') / F, 1),
wherein:
m and m' are respectively the measurement results of the developed algorithm and of the standard-library algorithm,
abs represents the absolute value operation,
F is a measurement tolerance error, set manually; the default value is 0.001;
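A sketch of pro_accuracy under the reading assumed above, i.e. the absolute measurement error taken as a proportion of the tolerance F and capped at 1; this exact form is an assumption, not confirmed by the published formula:

```python
def pro_accuracy(m_dev, m_std, F=0.001):
    # score_r = 1 - min(abs(m - m') / F, 1); F is the tolerance at which
    # the score reaches 0 (assumed interpretation).
    return 1.0 - min(abs(m_dev - m_std) / F, 1.0)
```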
s5, calculating efficiency test scores
The test image is scaled to the following dimensions using an image scaling algorithm (a very conventional algorithm, such as the resize function of opencv):
256x256,512x512,1280x960,960x1280,1277x953,953x1277,1920x1080,1080x1920,1913x1069,1069x1913,4000x4000,8000x4000,4000x8000,8000x8000,12000x12000,8000x16000,16000x8000,16000x16000;
for a total of 18 images (these 18 sizes cover almost all size levels used in projects, and cover the factors that may affect the running efficiency of almost all machine vision algorithms, such as the aspect ratio, whether the width and height are prime, and whether they are integral powers of 2);
the 18 images are fed to the developed algorithm and to the corresponding standard-library algorithm; each algorithm is executed 11 times on each image, and the time consumed by the 11 executions is recorded;
the efficiency score _ e is calculated as follows:
wherein:
i denotes the ith image of the 18 different size images,
j indicates that it takes time to execute the j-th algorithm (starting with 2 is to prevent the first execution from generating extra time due to the program bottom mechanism),
t ij representing the time consumed by the j execution time of the ith image;
mean _ time _ std and mean _ time _ n respectively represent mean _ time values obtained by a development algorithm and a standard library corresponding algorithm according to the evaluation formula of mean _ time;
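The timing procedure can be sketched as follows; discarding the first run matches the text, while the final min(ratio, 1) combination of the two mean_time values is an assumption:

```python
import time

# The 18 test-image sizes listed in the text.
SIZES = [(256, 256), (512, 512), (1280, 960), (960, 1280), (1277, 953),
         (953, 1277), (1920, 1080), (1080, 1920), (1913, 1069), (1069, 1913),
         (4000, 4000), (8000, 4000), (4000, 8000), (8000, 8000),
         (12000, 12000), (8000, 16000), (16000, 8000), (16000, 16000)]

def mean_time(algo, images, runs=11):
    # Execute `algo` `runs` times per image and average runs 2..runs,
    # discarding the first run to exclude warm-up overhead.
    per_image = []
    for img in images:
        times = []
        for _ in range(runs):
            t0 = time.perf_counter()
            algo(img)
            times.append(time.perf_counter() - t0)
        per_image.append(sum(times[1:]) / (runs - 1))
    return sum(per_image) / len(per_image)

def score_e(mean_time_dev, mean_time_std):
    # Hypothetical combination: standard-library time over developed-algorithm
    # time, capped at 1 so matching or beating the reference scores full marks.
    return min(mean_time_std / mean_time_dev, 1.0)
```

In practice `images` would hold the test image resized to each entry of SIZES, e.g. with opencv's resize.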
the effect comparison test and the efficiency comparison test can be carried out simultaneously;
s6, calculating a total score
The final score of the development algorithm is calculated as follows:
score = α * score_r + (1 - α) * score_e,
where α is a weight coefficient with value range 0-1; in the invention it is set to 0.8;
the range of the score value of the final score of the developed algorithm is 0-1, and the higher the score is, the higher the quality of the algorithm is;
s7, obtaining a test result
All developed algorithms have their final scores calculated by the preceding steps; the algorithms and their scores are summarized into a score list, i.e. the test result list. This list is used to judge and compare the development quality of the algorithms, and a score threshold can be set to decide which algorithms pass the test; the threshold is chosen according to the specific requirements of the project;
the invention designs a unified test method and a judgment standard for the algorithm, so that the development of the bottom-layer algorithm can realize the judgment standardization, and the problems of inconsistent algorithm quality and high maintenance cost caused by self-test of developers are solved.
The testing method and judgment standard can be implemented as test software through computer programming, realizing automatic testing and solving the problem of increased labor cost caused by developer self-testing and by testing with dedicated testers.
The invention is based on statistics and analysis of the theoretical knowledge of computer vision algorithms; the designed evaluation standard matches the theoretical characteristics of most machine vision algorithms, and the resulting evaluation score correlates positively with an algorithm's actual performance in use, i.e. the higher the score, the lower the probability of problems in real usage scenarios. This correlation solves the high problem probability in project applications caused by the insufficient test coverage and inaccurate evaluation standards of the various conventional test methods.
The invention is used for testing in the development stage; only after the test succeeds is the algorithm used on the project site, avoiding the extra time and labor cost that on-site testing imposes on a project and the resulting risk of delay or even failure.
While particular embodiments of the present invention have been described in the foregoing specification, various modifications and alterations to the previously described embodiments will become apparent to those skilled in the art from this description without departing from the spirit and scope of the invention.
Claims (5)
1. A method for evaluating the development quality of a machine vision underlying algorithm, characterized by comprising the following steps:
s1, selecting a standard comparison library for comparison with the developed underlying algorithm to be evaluated;
s2, recording the test environment and producing a test report;
s3, classifying the developed underlying algorithm to be evaluated into a specific category according to its input and output characteristics;
the corresponding categories include: image processing, segmentation, feature extraction, positioning, and measurement;
s4, performing effect comparison test on the developed bottom layer algorithm to be evaluated, and calculating an effect test score _ r;
the scoring function of the image processing class is as follows:
index_i, index_j = index(max(abs(I(i, j) - I'(i, j)))),
score_r = min(I(index_i, index_j), I'(index_i, index_j)) / max(I(index_i, index_j), I'(index_i, index_j)),
wherein I(i, j) and I'(i, j) respectively represent the pixel grey value at row i, column j of the result image obtained by the developed algorithm and of the result image obtained by the standard library;
the scoring function of the segmentation class is as follows:
score_r = (1/n) · Σ_{i=1..n} |R_i ∩ R_i'| / |R_i ∪ R_i'|,
wherein R_i refers to a connected domain in the output region of the developed algorithm;
R_i' refers to the corresponding connected domain in the output region of the standard-library algorithm;
n is the number of connected domains in the output region of the standard-library algorithm;
the scoring function of the feature extraction class is as follows:
score_r = 1 - mse, with mse = (1/n) · Σ_{i=1..n} (v_i - v_i')²,
wherein v_i refers to the value of the i-th dimension output by the developed algorithm;
v_i' refers to the value of the i-th dimension output by the standard-library algorithm;
n refers to the output vector dimension;
the scoring function of the positioning class is as follows:
score_r = 1 - (1/N) · Σ_{i=1..N} pro_mse_i,
wherein x, y, z and x', y', z' are the coordinates of the positioning results of the developed algorithm and of the standard-library algorithm; n is the dimension of the image; E is the positioning tolerance error; and confidence and confidence' respectively refer to the confidences of the positioning results of the developed algorithm and of the standard-library algorithm;
the scoring function of the measurement class is as follows:
wherein m and m' are respectively the measurement results of the developed algorithm and of the corresponding standard-library algorithm, and F is the measurement tolerance;
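A minimal sketch of the measurement-class comparison, assuming (since the formula is only in a figure) that agreement within the tolerance earns full marks and the score decays linearly beyond it:

```python
def measurement_score(m_dev, m_ref, tol):
    """Illustrative measurement-class score: m and m' are the two
    measurement results, `tol` plays the role of the tolerance F.
    Assumed shape, not the patented function."""
    err = abs(m_dev - m_ref)
    if err <= tol:
        return 1.0
    return max(0.0, 1.0 - (err - tol) / tol)
```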
S5, scaling any one test image N times with an image scaling algorithm to obtain N images of different sizes; on each of the N images, executing both the developed underlying algorithm to be evaluated and the corresponding algorithm in the standard algorithm library M times; separately counting the time consumed by the M executions of each algorithm to obtain an efficiency test score_e;
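The S5 timing procedure can be sketched as below. The text does not give the formula that turns the two timings into score_e, so the time ratio (standard-library time over developed-algorithm time, 1.0 meaning parity) is an assumption, as are the function names.

```python
import time

def efficiency_score(dev_algo, std_algo, images, runs):
    """Sketch of the S5 efficiency test: run each algorithm `runs`
    (M) times on every scaled image, total the wall-clock time for
    each, and score the developed algorithm by the standard library's
    total time divided by its own. The ratio is an assumed scoring
    rule; the patent does not state score_e's exact form in the text."""
    def total_time(algo):
        start = time.perf_counter()
        for img in images:
            for _ in range(runs):
                algo(img)
        return time.perf_counter() - start

    t_dev = total_time(dev_algo)
    t_std = total_time(std_algo)
    return t_std / t_dev if t_dev > 0 else 0.0
```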
S6, calculating the total score of the developed underlying algorithm to be evaluated from the effect test score_r and the efficiency test score_e as score = α·score_r + (1 − α)·score_e, where α is a weight coefficient;
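The S6 combination is a plain convex weighting of the two sub-scores; the value of α is left to the evaluator (0.5 below is only a placeholder):

```python
def total_score(score_r, score_e, alpha=0.5):
    """Total score from claim 1, step S6:
    score = alpha * score_r + (1 - alpha) * score_e,
    where alpha is the weight coefficient balancing effect vs.
    efficiency. The default 0.5 is a placeholder, not from the patent."""
    return alpha * score_r + (1 - alpha) * score_e
```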
S7, judging the superiority or inferiority of the developed underlying algorithm according to the total score obtained in step S6.
2. The evaluation method for the development superiority and inferiority of a machine vision underlying algorithm as claimed in claim 1, wherein in step S1 the developed underlying algorithm to be evaluated has a corresponding relationship with an algorithm in the standard comparison library, and all parameters of the developed underlying algorithm to be evaluated and of the corresponding algorithm in the standard comparison library are kept consistent.
3. The evaluation method for the development superiority and inferiority of a machine vision underlying algorithm as claimed in claim 1, wherein in step S2 the test report comprises the test environment and the test results; the test environment comprises CPU information, memory information, GPU information and standard comparison library information; the test results comprise the effect score, the efficiency score and the total score of the developed underlying algorithm to be evaluated.
4. The evaluation method for the development superiority and inferiority of a machine vision underlying algorithm as claimed in claim 1, wherein in step S4 the effect comparison test refers to comparing, for the same input, the output of the developed underlying algorithm to be evaluated with the output of the corresponding algorithm in the standard comparison library; according to the categories classified in step S3, the corresponding scoring function is applied to each category to obtain the effect test score_r.
5. The evaluation method for the development superiority and inferiority of a machine vision underlying algorithm as claimed in claim 1, wherein in step S5 any one test image is scaled 18 times, the 18 resulting images having sizes: 256x256, 512x512, 1280x960, 960x1280, 1277x953, 953x1277, 1920x1080, 1080x1920, 1913x1069, 1069x1913, 4000x4000, 8000x4000, 4000x8000, 8000x8000, 12000x12000, 8000x16000, 16000x8000 and 16000x16000.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211283108.7A CN115357517B (en) | 2022-10-20 | 2022-10-20 | Evaluation method for development superiority and inferiority of machine vision underlying algorithm |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115357517A CN115357517A (en) | 2022-11-18 |
CN115357517B true CN115357517B (en) | 2022-12-30 |
Family
ID=84007753
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115357517B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109034242A (en) * | 2018-07-26 | 2018-12-18 | 北京小米移动软件有限公司 | Methods of marking, the apparatus and system of image processing algorithm |
CN110287356A (en) * | 2019-05-16 | 2019-09-27 | 罗普特科技集团股份有限公司 | It is a kind of for the evaluation and test of face recognition algorithms engine, call method and system |
CN115100150A (en) * | 2022-06-27 | 2022-09-23 | 征图新视(江苏)科技股份有限公司 | Machine vision universal detection algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||