CN116129142A - Image recognition model testing method and device, terminal equipment and storage medium - Google Patents


Info

Publication number
CN116129142A
CN116129142A
Authority
CN
China
Prior art keywords
frame
recognition model
image
annotation
intersection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310096283.3A
Other languages
Chinese (zh)
Inventor
李良伟
伍伟锋
许洁斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Xuanwu Wireless Technology Co Ltd
Original Assignee
Guangzhou Xuanwu Wireless Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Xuanwu Wireless Technology Co Ltd filed Critical Guangzhou Xuanwu Wireless Technology Co Ltd
Priority to CN202310096283.3A priority Critical patent/CN116129142A/en
Publication of CN116129142A publication Critical patent/CN116129142A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/776Validation; Performance evaluation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention discloses a method, a device, terminal equipment and a storage medium for testing an image recognition model, wherein the method comprises the following steps: when the recognition model is tested, the sample image set is recognised a second time, and the second annotation-frame results output by this recognition are compared with the original first annotation frames of the sample image set; for each second annotation frame, the ratio of the intersection to the union of the frame and its corresponding first annotation frame is calculated, yielding a plurality of intersection-over-union values. The target-object recognition result corresponding to a second annotation frame whose intersection-over-union ratio is larger than a preset value is judged accurate, and the result corresponding to a frame whose ratio is not larger than the preset value is judged erroneous. The invention can automatically judge whether the second annotation-frame results agree with the original first annotation frames of the sample image set, and can therefore determine the accuracy of the image recognition model without manual statistics, improving test efficiency.

Description

Image recognition model testing method and device, terminal equipment and storage medium
Technical Field
The invention relates to the technical field of accuracy testing of algorithm models, and in particular to a testing method, testing device, terminal equipment and storage medium based on an image recognition model.
Background
When testing a recognition algorithm model, a tester needs to deploy and test the model to ensure that its accuracy in intelligent recognition meets the user's standard in actual application. In the prior art, testing an image recognition model in particular requires the tester to visually compare the recognition result of each image with the original image and then manually tally the recognition accuracy. When a large number of images must be checked by hand, testing is time-consuming, and automatic statistics of the recognition model's accuracy cannot be achieved.
Disclosure of Invention
The embodiment of the invention provides a method, a device, terminal equipment and a storage medium for testing an image recognition model, which can effectively solve the prior-art problems that testing is time-consuming when a large number of pictures must be checked manually and that automatic statistics of the recognition model's accuracy cannot be achieved.
An embodiment of the present invention provides a method for testing an image recognition model, including:
acquiring a sample image set of an image recognition model; the sample image set comprises a plurality of sample images, and each sample image comprises a plurality of first annotation frames corresponding to target objects;
inputting each sample image into the image recognition model so that the image recognition model recognizes target objects in each sample image and generates a second annotation frame corresponding to each target object;
for each sample image, calculating the intersection of each second annotation frame and the corresponding first annotation frame, calculating the union of each second annotation frame and the corresponding first annotation frame, and calculating the ratio of the intersection to the union to obtain the corresponding intersection ratio of each second annotation frame; the target object identification result corresponding to the second labeling frame with the intersection ratio being larger than the preset value is judged to be accurate, and the target object identification result corresponding to the second labeling frame with the intersection ratio being not larger than the preset value is judged to be wrong;
and determining the accuracy of the image recognition model according to the number of the target objects, the target object recognition results of which are accurate, in all the sample images.
Preferably, the calculating the intersection of each second labeling frame and the corresponding first labeling frame, calculating the union of each second labeling frame and the corresponding first labeling frame, and calculating the ratio of the intersection to the union, to obtain the corresponding intersection ratio of each second labeling frame specifically includes:
calculating the intersection of each second annotation frame and the corresponding first annotation frame through the IOU intersection comparison function, and calculating the union of each second annotation frame and the corresponding first annotation frame;
and calculating the ratio of the intersection set to the union set through an IOU intersection ratio function to obtain the intersection ratio corresponding to each second labeling frame.
Preferably, the determining the accuracy of the image recognition model according to the number of the target objects whose target object recognition results are accurate in all the sample images specifically includes:
and calculating the ratio of the number of target objects whose recognition result is accurate in all sample images to the total number of target objects in all sample images, and taking this ratio as the accuracy of the image recognition model.
Preferably, the determining the accuracy of the image recognition model according to the number of the target objects whose target object recognition results are accurate in all the sample images specifically includes:
calculating, for each sample image, the ratio of the number of target objects whose recognition result is accurate to the number of second annotation frames in that image, to obtain the recall rate corresponding to each sample image;
and calculating the average of the recall rates of all sample images, and taking this average as the accuracy rate of the image recognition model.
Preferably, the method further comprises:
and generating a confusion matrix corresponding to each target picture according to the number of the first annotation frames corresponding to each sample image, the number of the second annotation frames corresponding to each sample image, the target object identification result corresponding to each second annotation frame in each sample image and the recall rate corresponding to each sample image.
Preferably, the method further comprises:
and saving each sample image in which the target-object recognition result corresponding to a second annotation frame is judged erroneous to a preset error folder.
On the basis of the method embodiment, the invention correspondingly provides the device item embodiment.
An embodiment of the present invention provides a test device for an image recognition model, including: the system comprises a sample image set acquisition module, a labeling frame generation module, a recognition result determination module and an accuracy determination module;
the sample image set acquisition module is used for acquiring a sample image set of an image recognition model; the sample image set comprises a plurality of sample images, and each sample image comprises a plurality of first annotation frames corresponding to target objects;
the annotation frame generation module is used for inputting each sample image into the image recognition model so that the image recognition model recognizes target objects in each sample image and generates a second annotation frame corresponding to each target object;
the recognition result determining module is used for calculating the intersection of each second annotation frame and the corresponding first annotation frame for each sample image, calculating the union of each second annotation frame and the corresponding first annotation frame, and calculating the ratio of the intersection to the union to obtain the corresponding intersection ratio of each second annotation frame; the target object identification result corresponding to the second labeling frame with the intersection ratio being larger than the preset value is judged to be accurate, and the target object identification result corresponding to the second labeling frame with the intersection ratio being not larger than the preset value is judged to be wrong;
the accuracy rate determining module is used for determining the accuracy rate of the image recognition model according to the number of the target objects, the target object recognition results of which are accurate, in all the sample images.
Based on the method embodiment, the invention correspondingly provides the terminal equipment item embodiment.
Another embodiment of the present invention provides a terminal device, including a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, where the processor implements the testing method of an image recognition model according to the embodiment of the present invention when it executes the computer program.
Based on the method embodiments described above, the present invention correspondingly provides storage medium item embodiments.
Another embodiment of the present invention provides a computer-readable storage medium, which includes a stored computer program, where the computer program, when running, controls the device in which the computer-readable storage medium is located to execute the testing method of an image recognition model according to the embodiment of the present invention.
The invention has the following beneficial effects:
the embodiment of the invention provides a testing method, a testing device, terminal equipment and a storage medium of an image recognition model, wherein the testing method based on the recognition model comprises the following steps: acquiring a sample image set of an image recognition model; the sample image set comprises a plurality of sample images, and each sample image comprises a plurality of first annotation frames corresponding to target objects; inputting each sample image into the image recognition model so that the image recognition model recognizes target objects in each sample image and generates a second annotation frame corresponding to each target object; for each sample image, calculating the intersection of each second annotation frame and the corresponding first annotation frame, calculating the union of each second annotation frame and the corresponding first annotation frame, and calculating the ratio of the intersection to the union to obtain the corresponding intersection ratio of each second annotation frame; the target object identification result corresponding to the second labeling frame with the intersection ratio being larger than the preset value is judged to be accurate, and the target object identification result corresponding to the second labeling frame with the intersection ratio being not larger than the preset value is judged to be wrong; and determining the accuracy of the image recognition model according to the number of the target objects, the target object recognition results of which are accurate, in all the sample images. 
Compared with the prior art, when the recognition model is tested, the sample image set is recognised a second time, and the second annotation-frame results output by this recognition are compared with the original first annotation frames of the sample image set; for each second annotation frame, the ratio of the intersection to the union of the frame and its corresponding first annotation frame is calculated to obtain the intersection-over-union value. The target-object recognition result corresponding to a second annotation frame whose ratio is larger than the preset value is judged accurate, and the result corresponding to a frame whose ratio is not larger than the preset value is judged erroneous. The invention can automatically judge whether the second annotation-frame results agree with the original first annotation frames of the sample picture set, and can then determine the accuracy of the image recognition model from the number of accurately recognised target objects in all sample images, without manual statistics. Compared with manual statistics, when a large number of pictures are recognised, the invention automatically determines whether each picture is recognised and whether the recognition is correct, and determines the accuracy of the image recognition model, thereby improving test efficiency.
Drawings
Fig. 1 is a flowchart of a testing method of an image recognition model according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of a testing device for an image recognition model according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
Fig. 1 is a schematic flow chart of a testing method of an image recognition model according to an embodiment of the present invention;
the embodiment of the invention provides a testing method of an image recognition model, which comprises the following steps:
step S1: acquiring a sample image set of an image recognition model; the sample image set comprises a plurality of sample images, and each sample image comprises a plurality of first annotation frames corresponding to target objects;
step S2: inputting each sample image into the image recognition model so that the image recognition model recognizes target objects in each sample image and generates a second annotation frame corresponding to each target object;
step S3: for each sample image, calculating the intersection of each second annotation frame and the corresponding first annotation frame, calculating the union of each second annotation frame and the corresponding first annotation frame, and calculating the ratio of the intersection to the union to obtain the corresponding intersection ratio of each second annotation frame; the target object identification result corresponding to the second labeling frame with the intersection ratio being larger than the preset value is judged to be accurate, and the target object identification result corresponding to the second labeling frame with the intersection ratio being not larger than the preset value is judged to be wrong;
step S4: and determining the accuracy of the image recognition model according to the number of the target objects, the target object recognition results of which are accurate, in all the sample images.
For step S1, in a preferred embodiment, a sample image set of an image recognition model is obtained; the sample image set comprises a plurality of sample images, and each sample image comprises a plurality of first annotation frames corresponding to target objects;
specifically, each sample image has a plurality of images, one image includes a plurality of labeling frames, and the labeling frames can be used as reference labeling frames (i.e., first labeling frames), where the first labeling frames are labeled manually, so as to compare with a test result after testing the sample image (i.e., a second labeling frame obtained after identifying the sample image by a model), so as to determine whether the test result is accurate.
In another preferred embodiment, when the sample set is updated, the newly added sample images need only be placed into the sample image set; the original sample set does not need to be modified, which avoids repeatedly preparing test data and reduces labour cost. On each test run, the sample picture data is re-recognised to generate a new recognition result, which is compared with the reference results of the sample data set. Therefore, as long as the correctness of the sample data is ensured, the test scripts require zero maintenance, greatly reducing the data-maintenance workload.
For step S2, in a preferred embodiment, each sample image is input into the image recognition model, so that the image recognition model recognizes the target object in each sample image and generates a second labeling frame corresponding to each target object, and specifically includes:
Each image recognition model is provided with its own sample data set, i.e. a sample image set, which stores manually annotated sample images; that is, each sample image contains first annotation frames, which serve as the reference data for comparison. The sample images are input into the image recognition model, which recognises the target objects (i.e. the objects that need to be annotated) in each sample image and generates a second annotation frame corresponding to each target object. Since the second annotation frames are produced by the image recognition model while the first annotation frames are labelled manually, the recognition accuracy of the model can be obtained by comparing the two.
In the invention, an automated test platform client is provided. Through this client, the user selects the image recognition model address and the sample image set corresponding to that model and uploads them to the test server. After the user clicks to start the test, the test server connects to the algorithm service according to the test to be executed, deploys the new model on the remote algorithm server, runs the recognition program, and establishes a complete communication session link. On this basis, the pictures in the sample image set are read, the target objects in the images (i.e. the product information in the pictures) are identified, and the pictures and product information are transmitted to the algorithm service; the images are then recognised a second time by the image recognition model to obtain the target objects in each sample image and generate a second annotation frame corresponding to each target object.
According to the embodiment of the invention, only the sample image set needs to be recognised to obtain the recognition result, so the sample data set need only be prepared once; this avoids subjective judgments by different testers, reduces manual intervention, and improves test reliability. When models running in different environments need to be tested, the test is simply re-initiated by filling in the environment and model address information on the automated test platform, reducing test cost.
For step S3, in a preferred embodiment, for each sample image, calculating an intersection of each second label frame and the corresponding first label frame, calculating a union of each second label frame and the corresponding first label frame, and calculating a ratio of the intersection to the union to obtain a corresponding intersection ratio of each second label frame, which specifically includes:
calculating the intersection of each second annotation frame and the corresponding first annotation frame through the IOU intersection comparison function, and calculating the union of each second annotation frame and the corresponding first annotation frame;
calculating the ratio of the intersection set to the union set through an IOU intersection ratio function to obtain the intersection ratio corresponding to each second labeling frame;
After the intersection-over-union ratio corresponding to each second annotation frame is obtained, the target-object recognition result corresponding to a second annotation frame whose ratio is larger than the preset value is judged accurate, and the result corresponding to a frame whose ratio is not larger than the preset value is judged erroneous. The IoU threshold can be customised; generally, a detection is considered correct if the IoU is 0.5 or greater, and if the predicted second annotation frame and the actual first annotation frame overlap perfectly, the IoU equals 1.
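The IoU computation described above can be sketched as follows; the patent gives no code, so the axis-aligned box representation (x1, y1, x2, y2) and the `iou`/`verdict` function names are illustrative assumptions:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area, 0 if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def verdict(second_frame, first_frame, threshold=0.5):
    """Judge one recognition result: accurate if the IoU reaches the preset value."""
    return "accurate" if iou(second_frame, first_frame) >= threshold else "error"
```

A perfectly overlapping pair yields an IoU of exactly 1, matching the text; the 0.5 default follows the example threshold given above and remains user-configurable.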
According to the embodiment of the invention, a tester is not required to compare and count the identification result of each picture, and the comparison calculation of each picture is automatically completed by means of the calculation mode of the IOU algorithm, so that the test efficiency is greatly improved, and the error rate of manual acquisition and calculation is also reduced. The method can also flexibly select various algorithm models to test different image recognition models by self-defining the area calculation size of the IOU intersection ratio, has low threshold for testers, and is convenient for testing the accuracy of the image recognition models.
For step S4, in a preferred embodiment, determining the accuracy of the image recognition model according to the number of target objects whose target object recognition results are accurate in all the sample images specifically includes:
and calculating the ratio of the number of target objects whose recognition result is accurate in all sample images to the total number of target objects in all sample images, and taking this ratio as the accuracy of the image recognition model.
According to the embodiment of the invention, on the basis of the counts of accurate and erroneous recognition results, the accuracy of the image recognition model is obtained as the ratio of the number of accurately recognised target objects (i.e. qualifying second annotation frames) to the total number of target objects (i.e. first annotation frames) in all sample images.
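As a minimal sketch of this accuracy calculation (the data layout, a list of per-image boolean verdicts, is an assumption for illustration):

```python
def overall_accuracy(verdicts_per_image):
    """Accuracy = accurately recognised target objects / all target objects.

    verdicts_per_image: list of per-image lists of booleans, True meaning the
    object's second annotation frame passed the IoU threshold.
    """
    flat = [v for image in verdicts_per_image for v in image]
    return sum(flat) / len(flat) if flat else 0.0
```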
In another preferred embodiment, determining the accuracy of the image recognition model according to the number of target objects whose target object recognition results are accurate in all the sample images specifically includes:
calculating, for each sample image, the ratio of the number of target objects whose recognition result is accurate to the number of second annotation frames in that image, to obtain the recall rate corresponding to each sample image;
and calculating the average of the recall rates of all sample images, and taking this average as the accuracy rate of the image recognition model.
According to the embodiment of the invention, by calculating, for each sample image, the number of target objects with accurate recognition results against the number of second annotation frames in that image, a recognition result is obtained for each individual image. Thus, not only can the recognition accuracy of the whole sample data set be analysed, but also the recognition accuracy of each image.
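A sketch of the per-image recall and its mean; it follows the patent's wording, in which the denominator is the count of second annotation frames (note this differs from the conventional recall denominator, the ground-truth count), and the function names are illustrative:

```python
def image_recall(n_accurate, n_second_frames):
    """Per-image recall as described: accurate objects / second annotation frames."""
    return n_accurate / n_second_frames if n_second_frames else 0.0

def model_accuracy(per_image_counts):
    """Mean of per-image recalls, taken as the model's accuracy rate.

    per_image_counts: list of (n_accurate, n_second_frames) pairs, one per image.
    """
    recalls = [image_recall(a, s) for a, s in per_image_counts]
    return sum(recalls) / len(recalls) if recalls else 0.0
```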
In a preferred embodiment, further comprising: and generating a confusion matrix corresponding to each target picture according to the number of the first annotation frames corresponding to each sample image, the number of the second annotation frames corresponding to each sample image, the target object identification result corresponding to each second annotation frame in each sample image and the recall rate corresponding to each sample image.
The confusion matrix of the embodiment of the invention is generated from the recognition result of each target object in each image, and makes it easy to see which target object (product) in which picture has a recognition rate that needs to be improved and optimised subsequently.
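The exact layout of the confusion matrix is not fixed by the text; a hypothetical per-image tally in confusion-matrix style, derived from the frame counts it names, might look like:

```python
def image_confusion(n_first, n_second, n_accurate):
    """Per-image tally in confusion-matrix style, assuming each accurately
    recognised second frame consumes exactly one first frame (hypothetical
    layout; the patent does not fix the matrix format)."""
    return {
        "tp": n_accurate,             # second frames judged accurate
        "fp": n_second - n_accurate,  # extra or wrong second frames
        "fn": n_first - n_accurate,   # first frames left without an accurate match
    }
```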
In a preferred embodiment, further comprising: and sending the sample image which is judged to be wrong by the target object identification result corresponding to each second annotation frame to a preset error file.
The embodiment of the invention can also generate an error folder in the root directory, divided into three sub-folders by error type (wrong prediction, missed prediction, and over-prediction); each folder records the specific erroneous pictures, with the reason for the error marked on each picture.
The wrong-prediction folder contains sample images whose target-object recognition result is judged erroneous; the missed-prediction folder contains sample images in which a first annotation frame has no corresponding second annotation frame; and the over-prediction folder contains sample images in which a second annotation frame has no corresponding first annotation frame.
Generating an error folder makes it convenient to view the misrecognised images and to generate test reports.
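A minimal sketch of filing erroneous sample images into per-error-type folders; the folder names and function name are hypothetical, since the text names only the three error types:

```python
import shutil
from pathlib import Path

# hypothetical English folder names for the three error types named in the text
ERROR_FOLDERS = {
    "wrong": "wrong_prediction",    # recognition result judged erroneous by IoU
    "missed": "missed_prediction",  # first frame with no matching second frame
    "extra": "over_prediction",     # second frame with no matching first frame
}

def file_error_image(image_path, error_type, root="errors"):
    """Copy a misrecognised sample image into the folder for its error type."""
    dest_dir = Path(root) / ERROR_FOLDERS[error_type]
    dest_dir.mkdir(parents=True, exist_ok=True)  # create folder tree on demand
    dest = dest_dir / Path(image_path).name
    shutil.copy(image_path, dest)
    return dest
```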
The embodiment of the invention can automatically judge whether the second annotation-frame results output by the second recognition agree with the original first annotation frames of the sample picture set, and can then determine the accuracy of the image recognition model from the number of accurately recognised target objects in all sample images, without manual statistics. Compared with manual statistics, when a large number of pictures are recognised, the invention automatically determines whether each picture is recognised and whether the recognition is correct, and determines the accuracy of the image recognition model, thereby improving test efficiency.
As shown in fig. 2, on the basis of the embodiments of the test methods of the various image recognition models, the invention correspondingly provides device item embodiments;
an embodiment of the present invention provides a test device for an image recognition model, including: the system comprises a sample image set acquisition module, a labeling frame generation module, a recognition result determination module and an accuracy determination module;
the sample image set acquisition module is used for acquiring a sample image set of an image recognition model; the sample image set comprises a plurality of sample images, and each sample image comprises a plurality of first annotation frames corresponding to target objects;
the annotation frame generation module is used for inputting each sample image into the image recognition model so that the image recognition model recognizes target objects in each sample image and generates a second annotation frame corresponding to each target object;
the recognition result determining module is used for calculating the intersection of each second annotation frame and the corresponding first annotation frame for each sample image, calculating the union of each second annotation frame and the corresponding first annotation frame, and calculating the ratio of the intersection to the union to obtain the corresponding intersection ratio of each second annotation frame; the target object identification result corresponding to the second labeling frame with the intersection ratio being larger than the preset value is judged to be accurate, and the target object identification result corresponding to the second labeling frame with the intersection ratio being not larger than the preset value is judged to be wrong;
the accuracy rate determining module is used for determining the accuracy rate of the image recognition model according to the number of the target objects, the target object recognition results of which are accurate, in all the sample images.
It should be noted that the above-described device embodiment is merely illustrative; the units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, in the drawings of the device embodiments provided by the invention, a connection between modules indicates a communication connection between them, which may be specifically implemented as one or more communication buses or signal lines. Those of ordinary skill in the art can understand and implement the present invention without undue burden.
It will be clearly understood by those skilled in the art that, for convenience and brevity, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
Based on the embodiments of the testing methods of the various image recognition models, the invention correspondingly provides the embodiments of the terminal equipment item.
An embodiment of the present invention provides a terminal device, including a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, where the processor, when executing the computer program, implements the method for testing an image recognition model according to any one of the embodiments of the present invention.
The terminal device may be a computing device such as a desktop computer, a notebook computer, a palmtop computer or a cloud server. The terminal device may include, but is not limited to, a processor and a memory.
The processor may be a central processing unit (Central Processing Unit, CPU), another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general purpose processor may be a microprocessor, or the processor may be any conventional processor; it is the control center of the terminal device and connects the various parts of the entire terminal device using various interfaces and lines.
The memory may be used to store the computer program, and the processor may implement various functions of the terminal device by running or executing the computer program stored in the memory and invoking data stored in the memory. The memory may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the terminal device, and the like. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card (Flash Card), at least one magnetic disk storage device, a flash memory device, or other non-volatile solid state storage device.
Based on the above embodiments of the test methods of the various image recognition models, the present invention correspondingly provides storage medium item embodiments.
An embodiment of the present invention provides a storage medium, including a stored computer program, where, when the computer program runs, the device where the storage medium is located is controlled to execute the method for testing an image recognition model according to any one of the embodiments of the present invention.
The storage medium is a computer readable storage medium in which the computer program is stored; when executed by a processor, the computer program can implement the steps of the above-mentioned method embodiments. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable medium may be appropriately added or deleted according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunications signals.
While the foregoing is directed to the preferred embodiments of the present invention, it will be appreciated by those skilled in the art that changes and modifications may be made without departing from the principles of the invention, and such changes and modifications are also intended to fall within the scope of the invention.

Claims (9)

1. A method for testing an image recognition model, comprising:
acquiring a sample image set of an image recognition model; the sample image set comprises a plurality of sample images, and each sample image comprises a plurality of first annotation frames corresponding to target objects;
inputting each sample image into the image recognition model so that the image recognition model recognizes target objects in each sample image and generates a second annotation frame corresponding to each target object;
for each sample image, calculating the intersection of each second annotation frame and the corresponding first annotation frame, calculating the union of each second annotation frame and the corresponding first annotation frame, and calculating the ratio of the intersection to the union to obtain the intersection-over-union ratio corresponding to each second annotation frame; judging the target object recognition result corresponding to a second annotation frame whose intersection-over-union ratio is larger than a preset value to be accurate, and judging the target object recognition result corresponding to a second annotation frame whose intersection-over-union ratio is not larger than the preset value to be wrong;
and determining the accuracy of the image recognition model according to the number of target objects whose recognition results are accurate in all the sample images.
2. The method for testing an image recognition model according to claim 1, wherein calculating an intersection of each second labeling frame and the corresponding first labeling frame, calculating a union of each second labeling frame and the corresponding first labeling frame, and calculating a ratio of the intersection to the union, to obtain a corresponding intersection ratio of each second labeling frame, comprises:
calculating, through an IoU (intersection over union) function, the intersection of each second annotation frame and the corresponding first annotation frame, and calculating the union of each second annotation frame and the corresponding first annotation frame;
and calculating, through the IoU function, the ratio of the intersection to the union to obtain the intersection-over-union ratio corresponding to each second annotation frame.
3. The method for testing an image recognition model according to claim 1, wherein determining the accuracy of the image recognition model according to the number of target objects whose target object recognition results are accurate in all sample images specifically comprises:
and calculating the ratio of the number of target objects whose recognition results are accurate in all sample images to the total number of target objects in all sample images, and taking the ratio as the accuracy of the image recognition model.
4. The method for testing an image recognition model according to claim 1, wherein determining the accuracy of the image recognition model according to the number of target objects whose target object recognition results are accurate in all sample images specifically comprises:
calculating the ratio of the number of target objects with accurate recognition results in each sample image to the number of second annotation frames in that sample image, to obtain the recall rate corresponding to each sample image;
and calculating the average of the recall rates of all the sample images, and taking the average as the accuracy of the image recognition model.
5. The method of testing an image recognition model of claim 4, further comprising:
and generating a confusion matrix corresponding to each sample image according to the number of first annotation frames corresponding to each sample image, the number of second annotation frames corresponding to each sample image, the target object recognition result corresponding to each second annotation frame in each sample image, and the recall rate corresponding to each sample image.
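The per-image recall of claim 4 and the confusion-matrix bookkeeping of claim 5 can be sketched as follows. This is a minimal illustration, not part of the claimed subject matter; the true/false-positive/negative layout of the confusion counts is an assumption about how the matrix is populated.

```python
def image_recall(accurate_count, predicted_count):
    # Claim 4: recall for one sample image is the number of accurately
    # recognized target objects over the number of second annotation frames.
    return accurate_count / predicted_count if predicted_count else 0.0

def model_accuracy_from_recalls(recalls):
    # Claim 4: the model accuracy is the average of the per-image recall rates.
    return sum(recalls) / len(recalls) if recalls else 0.0

def confusion_counts(num_gt, num_pred, num_accurate):
    """Claim 5 bookkeeping for one sample image (layout assumed):
    TP = accurately recognized objects, FP = predicted second frames
    judged wrong, FN = first annotation frames left unmatched."""
    return {
        "tp": num_accurate,
        "fp": num_pred - num_accurate,
        "fn": num_gt - num_accurate,
    }
```

For instance, an image with 4 first annotation frames, 5 second annotation frames and 3 accurate results yields a recall of 3/5 and counts of 3 true positives, 2 false positives and 1 false negative.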
6. The method for testing an image recognition model of claim 1, further comprising:
and sending each sample image in which the target object recognition result corresponding to a second annotation frame is judged to be wrong to a preset error file.
7. A test device for an image recognition model, comprising: a sample image set acquisition module, an annotation frame generation module, a recognition result determination module and an accuracy determination module;
the sample image set acquisition module is used for acquiring a sample image set of an image recognition model; the sample image set comprises a plurality of sample images, and each sample image comprises a plurality of first annotation frames corresponding to target objects;
the annotation frame generation module is used for inputting each sample image into the image recognition model so that the image recognition model recognizes target objects in each sample image and generates a second annotation frame corresponding to each target object;
the recognition result determination module is used for, for each sample image, calculating the intersection of each second annotation frame and the corresponding first annotation frame, calculating the union of each second annotation frame and the corresponding first annotation frame, and calculating the ratio of the intersection to the union to obtain the intersection-over-union ratio corresponding to each second annotation frame; the target object recognition result corresponding to a second annotation frame whose intersection-over-union ratio is larger than a preset value is judged to be accurate, and the target object recognition result corresponding to a second annotation frame whose intersection-over-union ratio is not larger than the preset value is judged to be wrong;
the accuracy determination module is used for determining the accuracy of the image recognition model according to the number of target objects whose recognition results are accurate in all the sample images.
8. A terminal device comprising a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, the processor implementing a method of testing an image recognition model according to any one of claims 1 to 6 when the computer program is executed.
9. A storage medium comprising a stored computer program, wherein the computer program, when run, controls a device in which the storage medium is located to perform a method of testing an image recognition model according to any one of claims 1 to 6.
CN202310096283.3A 2023-02-07 2023-02-07 Image recognition model testing method and device, terminal equipment and storage medium Pending CN116129142A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310096283.3A CN116129142A (en) 2023-02-07 2023-02-07 Image recognition model testing method and device, terminal equipment and storage medium


Publications (1)

Publication Number Publication Date
CN116129142A true CN116129142A (en) 2023-05-16

Family

ID=86304404

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310096283.3A Pending CN116129142A (en) 2023-02-07 2023-02-07 Image recognition model testing method and device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116129142A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112215154A (en) * 2020-10-13 2021-01-12 北京中电兴发科技有限公司 Mask-based model evaluation method applied to face detection system
CN113298161A (en) * 2021-05-28 2021-08-24 平安科技(深圳)有限公司 Image recognition model testing method and device, computer equipment and storage medium
CN114387501A (en) * 2020-10-21 2022-04-22 中国石油天然气股份有限公司 Remote sensing intelligent identification method and device for deposition of flood-washing fan
CN115376074A (en) * 2022-10-25 2022-11-22 济南信通达电气科技有限公司 Method and system for evaluating recognition effect of power transmission line monitoring device


Similar Documents

Publication Publication Date Title
CN108683562B (en) Anomaly detection positioning method and device, computer equipment and storage medium
CN111198815B (en) Compatibility testing method and device for user interface
CN107451112B (en) Form tool data checking method, device, terminal equipment and storage medium
CN112181854A (en) Method, device, equipment and storage medium for generating flow automation script
CN111966600B (en) Webpage testing method, webpage testing device, computer equipment and computer readable storage medium
CN113505895B (en) Machine learning engine service system, model training method and configuration method
CN112527676A (en) Model automation test method, device and storage medium
CN112445490A (en) File sequence processing method and device, terminal equipment and storage medium
CN111610965A (en) Standard SDK (software development kit) making method and device of access control platform
CN116129142A (en) Image recognition model testing method and device, terminal equipment and storage medium
CN116738091A (en) Page monitoring method and device, electronic equipment and storage medium
CN114510305B (en) Model training method and device, storage medium and electronic equipment
CN115048302A (en) Front-end compatibility testing method and device, storage medium and electronic equipment
CN114416597A (en) Test case record generation method and device
CN111159003B (en) Batch processing test method and device
CN114328181A (en) Test case generation and execution method, device and storage medium
CN114416441A (en) Real-time database automatic testing method and system, electronic equipment and storage medium
CN112667513A (en) Test method, test device, test equipment and storage medium
CN112346994A (en) Test information correlation method and device, computer equipment and storage medium
CN113282482A (en) Compatibility test method and system for software package
CN116204670B (en) Management method and system of vehicle target detection data and electronic equipment
CN113742553B (en) Data processing method and device
CN117252188B (en) Software image monitoring method and system based on artificial intelligence
CN115484560B (en) Intelligent short message processing method and device, electronic equipment and storage medium
CN116302925A (en) Automatic test method, device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination