CN114972807B - Method and device for determining image recognition accuracy, electronic equipment and medium - Google Patents
- Publication number
- CN114972807B CN114972807B CN202210541027.6A CN202210541027A CN114972807B CN 114972807 B CN114972807 B CN 114972807B CN 202210541027 A CN202210541027 A CN 202210541027A CN 114972807 B CN114972807 B CN 114972807B
- Authority
- CN
- China
- Prior art keywords
- determining
- identification
- value
- image
- index value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
Abstract
The disclosure provides a method, an apparatus, a device, a medium, and a program product for determining image recognition accuracy, relating to the field of artificial intelligence, and in particular to the technical fields of deep learning and image processing. The method for determining image recognition accuracy includes: for a target object in an image set, acquiring a real object identifier for the target object and a recognition object identifier for the target object, the recognition object identifier being obtained by recognizing the target object in the image set; determining a first index value for the recognition object identifier with the real object identifier as a reference; determining a second index value for the real object identifier with the recognition object identifier as a reference; and determining the image recognition accuracy based on the first index value and the second index value.
Description
Technical Field
The present disclosure relates to the field of artificial intelligence, specifically to the technical fields of deep learning, image processing, and the like, and more specifically, to a method, an apparatus, an electronic device, a medium, and a program product for determining an image recognition accuracy.
Background
In some scenarios, a video or an image may be captured by a camera, and the video or the image may be processed by using an image recognition technique to recognize a target object. Therefore, the recognition accuracy of the image recognition technology is crucial. In order to ensure the effect of image recognition, the recognition accuracy of the image recognition technology needs to be evaluated.
Disclosure of Invention
The present disclosure provides a method, an apparatus, an electronic device, a storage medium, and a program product for determining an image recognition accuracy.
According to an aspect of the present disclosure, there is provided a method for determining image recognition accuracy, including: for a target object in an image set, acquiring a real object identifier for the target object and a recognition object identifier for the target object, the recognition object identifier being obtained by recognizing the target object in the image set; determining a first index value for the recognition object identifier with the real object identifier as a reference; determining a second index value for the real object identifier with the recognition object identifier as a reference; and determining the image recognition accuracy based on the first index value and the second index value.
According to another aspect of the present disclosure, there is provided an apparatus for determining image recognition accuracy, including an acquisition module, a first determination module, a second determination module, and a third determination module. The acquisition module is used for acquiring, for a target object in an image set, a real object identifier for the target object and a recognition object identifier for the target object, the recognition object identifier being obtained by recognizing the target object in the image set. The first determination module is configured to determine a first index value for the recognition object identifier with the real object identifier as a reference. The second determination module is configured to determine a second index value for the real object identifier with the recognition object identifier as a reference. The third determination module is used for determining the image recognition accuracy based on the first index value and the second index value.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor and a memory communicatively coupled to the at least one processor. Wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of determining image recognition accuracy described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the above-described method of determining an image recognition accuracy.
According to another aspect of the present disclosure, there is provided a computer program product comprising computer program/instructions which, when executed by a processor, implement the steps of the above-described method of determining image recognition accuracy.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 schematically illustrates a system architecture for determination of image recognition accuracy, according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of a method of determining image recognition accuracy, according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a schematic diagram of determining a first indicator value according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates a schematic diagram of determining a second index value according to an embodiment of the present disclosure;
FIG. 5 schematically illustrates a block diagram of an apparatus for determining image recognition accuracy, according to an embodiment of the present disclosure; and
FIG. 6 is a block diagram of an electronic device for performing a determination of image recognition accuracy for implementing embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of embodiments of the present disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
Fig. 1 schematically illustrates a system architecture for determination of image recognition accuracy according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in fig. 1, the system architecture 100 according to this embodiment may include data acquisition devices 101, 102, 103, a network 104, and a server 105. The network 104 is used to provide a medium for communication links between the data acquisition devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The data acquisition devices 101, 102, 103 interact with a server 105 over a network 104 to receive or send messages and the like. The data acquisition devices 101, 102, 103 have functions of acquiring images or acquiring videos, and the data acquisition devices 101, 102, 103 include, but are not limited to, cameras, smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server that provides various services, for example, processes data collected by the data collection devices 101, 102, and 103, and the server 105 may be a cloud server, that is, the server 105 has a cloud computing function.
For example, the server 105 has an image processing function, and after the data acquisition device 101 acquires an image or a video, the image or the video is transmitted to the server 105 for image recognition. The server 105 may identify the image or the video through the deep learning model, and obtain an identification result, where the identification result includes a target object in the image or the video.
It should be noted that the method for determining the image recognition accuracy provided by the embodiment of the present disclosure may be executed by the server 105. Accordingly, the apparatus for determining the image recognition accuracy provided by the embodiment of the present disclosure may be disposed in the server 105.
It should be understood that the numbers of data acquisition devices, networks, and servers in FIG. 1 are merely illustrative. There may be any number of data acquisition devices, networks, and servers, as desired for an implementation.
In scenarios such as security monitoring and supermarkets, images or videos of objects are usually captured by one or more cameras; the objects include, for example, pedestrians. When images or videos are captured by multiple cameras, the scenario is a cross-shot scenario.
In some cases, in order to ensure accuracy of image processing, it is necessary to evaluate a deep learning model for performing image recognition. For example, it is necessary to evaluate the image recognition accuracy of the deep learning model.
For example, features of target objects in an image or a video may be extracted using a pedestrian re-identification (REID) technique in a deep learning model, and the feature similarity of two target objects may be computed; a higher similarity indicates a higher probability that the two represent the same target. For example, a threshold is set, and if the similarity exceeds the threshold, the two target objects are considered to be the same target object.
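A minimal sketch of this thresholding step is shown below. The cosine-similarity choice and the 0.8 threshold are illustrative assumptions, not values specified in this disclosure:

```python
import math

def cosine_similarity(feat_q, feat_g):
    """Cosine similarity between two REID feature vectors."""
    dot = sum(a * b for a, b in zip(feat_q, feat_g))
    norm_q = math.sqrt(sum(a * a for a in feat_q))
    norm_g = math.sqrt(sum(b * b for b in feat_g))
    return dot / (norm_q * norm_g)

def same_target(feat_q, feat_g, threshold=0.8):
    """Threshold rule from the text: similarity above the threshold means
    the two detections are treated as the same target object.
    The 0.8 value is an illustrative assumption, not from the patent."""
    return cosine_similarity(feat_q, feat_g) > threshold
```

In practice the threshold would be tuned on a validation set so that cross-shot matches are neither too permissive nor too strict.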
In the field of Multi-Object Tracking (MOT), the recognition accuracy of the deep learning model can generally be evaluated by the Multi-Object Tracking Accuracy (MOTA) index and the Multi-Camera Tracking Accuracy (MCTA) index.
In an example, when an image or a video is acquired through one camera, the image or the video is identified through a deep learning model to obtain an identification result, and the identification accuracy of the deep learning model can be evaluated based on the identification result through an MOTA index, wherein the MOTA index is shown in formula (1).
The MOTA index is mainly used to evaluate the recognition accuracy of a specific target under a single shot, where a camera captures multiple image frames. The deep learning model detects target objects in the multi-frame images and marks a detection box for each detected target object.
In formula (1), FN and FP are evaluation indexes for the detection boxes. FN counts regions of the current frame image where an object appears but no detection box is present; FP counts regions where a detection box is present but no object appears. FN is the sum of false negatives over all frame images: letting fn_t be the false-negative count of the t-th frame, FN = Σ_t fn_t; similarly, FP = Σ_t fp_t.

T is the sum of the numbers of target objects actually present in all frame images: letting g_t be the number of target objects actually present in the t-th frame, T = Σ_t g_t.

Φ denotes the number of identity jumps (fragmentation) of target objects over all frame images: letting φ_t be the jump count of the t-th frame, Φ = Σ_t φ_t. The jump count is the number of times the identity of the same target object changes between different frame images.

FN, FP, and Φ represent the miss rate, misjudgment rate, and mismatch rate terms, respectively.
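Formula (1) itself is not reproduced legibly in this text; the standard MOTA definition consistent with the quantities FN, FP, Φ, and T described above would be (a reconstruction, not a verbatim quote of the patent's formula):

```latex
\mathrm{MOTA} = 1 - \frac{\mathrm{FN} + \mathrm{FP} + \Phi}{T}
             = 1 - \frac{\sum_t \left( fn_t + fp_t + \phi_t \right)}{\sum_t g_t} \tag{1}
```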
In another example, when images or videos are captured through a plurality of cameras, the images or videos are identified through a deep learning model to obtain an identification result, and the identification accuracy of the deep learning model can be evaluated based on the identification result through an MCTA index, which is shown in formula (2).
The MCTA index is mainly used to evaluate the recognition accuracy of a specific target object under multiple shots. Images or videos are captured by multiple cameras, target objects in the images or videos are detected by the deep learning model, and each detected target object is marked with an identification (ID).
In formula (2), P and R represent the precision and recall of target recognition on an image, respectively. For example, if one frame image includes 5 targets and the deep learning model detects 3 targets, of which 2 are correctly identified, then the recall is 2/5 and the precision is 2/3.
M_ω is the number of mismatched target object IDs when target detection is performed on the images captured by a single camera; T_ω is the number of correct detections of target object IDs for the images captured by a single camera.

M_h is the number of mismatched target object IDs when target detection is performed on the images captured by multiple cameras; T_h is the number of correct detections of target object IDs for the images captured by multiple cameras, covering the case where a target object disappears from the images captured by one camera and later reappears in the images captured by another camera.

The value of the MCTA index typically lies in the range [0, 1].
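Formula (2) is likewise not reproduced in this text; the MCTA index as commonly defined in the multi-camera tracking literature, matching the quantities P, R, M_ω, T_ω, M_h, and T_h described above, would be (a reconstruction under that assumption):

```latex
\mathrm{MCTA}
  = \underbrace{\frac{2PR}{P+R}}_{\text{detection}}
    \cdot \underbrace{\left(1 - \frac{M_\omega}{T_\omega}\right)}_{\text{single-camera ID matching}}
    \cdot \underbrace{\left(1 - \frac{M_h}{T_h}\right)}_{\text{cross-camera ID matching}} \tag{2}
```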
For the person re-identification (REID) technique, the recognition accuracy of the deep learning model can generally be evaluated by the Rank-N and mAP indexes.
In an example, N in the Rank-N index may take any positive integer, for example N = 1. First, an indicator function is defined to indicate whether two images q and i carry the same label, where the label may represent the ID of the target object.

For each of the Q query images, a similarity comparison is performed against a number of candidate images, the n = 1 candidate image with the highest similarity is selected, and it is determined whether the target object ID in that candidate image is consistent with the ID in the query image; if so, the query image satisfies the condition. Rank-1 is the ratio of the number of query images satisfying the condition to Q.

Similarly, n may be 5, 10, and so on. When n = 5, the n = 5 candidate images most similar to the query image are determined from the candidate images, and it is checked whether any of these 5 candidate images has an ID consistent with the object ID in the query image; if so, the query image satisfies the condition.
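The Rank-N computation described above can be sketched as follows; the (query ID, ranked candidate list) data layout is an illustrative assumption:

```python
def rank_n(queries, n=1):
    """Rank-N: fraction of query images whose n most similar candidates
    contain at least one candidate with the query's target object ID.

    `queries` is a list of (query_id, candidates) pairs, where `candidates`
    is a list of (candidate_id, similarity) tuples.
    """
    hits = 0
    for query_id, candidates in queries:
        # Take the n candidates with the highest similarity to the query.
        top = sorted(candidates, key=lambda c: c[1], reverse=True)[:n]
        if any(cand_id == query_id for cand_id, _ in top):
            hits += 1
    return hits / len(queries)
```

For Rank-1 a query counts as correct only if the single most similar candidate shares its ID.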
In another example, the mAP index represents the accuracy over all retrieval results.

Here P represents the precision: for a given query image, the proportion of the first k candidate images whose ID is the same as the ID in the query image.

AP@n denotes the average precision, where the precision is averaged only over the candidate images, among the first n, whose ID is the same as the query image's ID; n_q indicates how many candidate images with the same ID as query image q are among the first n candidate images.

mAP@n is obtained by calculating AP@n for all Q query images and taking the mean of all the AP values.

On the basis of formulas (3) and (4), taking n = 5 as an example, if the number of candidate images whose object ID agrees with the object ID in the query image is 3 among the 5 candidate images, AP@n is 3/5. Averaging the AP@n values of the Q query images gives mAP@n in formula (5).
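Following the worked example above (3 matching IDs among the first 5 candidates gives AP@5 = 3/5), the computation can be sketched as follows; the list-of-pairs input layout is an illustrative assumption:

```python
def ap_at_n(query_id, ranked_ids, n):
    """AP@n as in the worked example: the fraction of the first n ranked
    candidate IDs that agree with the query image's object ID."""
    top = ranked_ids[:n]
    return sum(1 for cand_id in top if cand_id == query_id) / n

def map_at_n(queries, n):
    """mAP@n: the mean of AP@n over all Q query images.
    `queries` is a list of (query_id, ranked_candidate_ids) pairs."""
    return sum(ap_at_n(qid, ranked, n) for qid, ranked in queries) / len(queries)
```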
Both of the above types of indexes, namely the MOT indexes (the MOTA index and the MCTA index) and the REID indexes (the Rank-N index and the mAP index), have difficulty accurately reflecting the real effect of cross-shot tracking (scenes spanning different shots).

The MOT indexes are strongly influenced by the single-shot tracking performance, yet the technical schemes used for cross-shot tracking and single-shot tracking are relatively independent of each other. Therefore, when these indexes are used for evaluation, the evaluation result is often dominated by the single-shot tracking effect and cannot accurately represent the cross-shot matching effect; the calculation is also complex.

As for the REID indexes, cross-shot tracking usually relies on REID feature matching, but feature matching is only one link in the whole pipeline and cannot represent the effect of the entire process; other links in the pipeline also influence the final result, so the evaluation is not accurate enough.
In view of this, the present disclosure proposes an optimized determination method of image recognition accuracy, and the determination method of image recognition accuracy according to an exemplary embodiment of the present disclosure is described below with reference to fig. 2 to 4. The method for determining the image recognition accuracy of the embodiment of the present disclosure may be performed by, for example, a server shown in fig. 1, which is, for example, the same as or similar to the electronic device below.
Fig. 2 schematically illustrates a flow chart of a method of determining image recognition accuracy according to an embodiment of the present disclosure.
As shown in fig. 2, the method 200 for determining an image recognition accuracy of the embodiment of the present disclosure may include, for example, operations S210 to S240.
In operation S210, for a target object in the image set, a real object identifier for the target object and a recognition object identifier for the target object are acquired.
In operation S220, a first index value for the recognition object identifier is determined with reference to the real object identifier.
In operation S230, a second index value for the real object identification is determined with reference to the recognition object identification.
In operation S240, an image recognition accuracy is determined based on the first index value and the second index value.
Illustratively, the image set includes one or more images. In an example, the image set may be a video comprising a plurality of video frames, captured by a data acquisition device such as a camera.

After the image set is obtained, the target object in the image set can be annotated to obtain the real object identifier. The recognition object identifier for the target object is obtained by recognizing the target object in the image set: the image set is processed with an image recognition technique to obtain the target object, and the recognized target object is labelled, yielding the recognition object identifier, which is a prediction of the target object. When the image recognition technique includes a deep learning model, the target object in the image set may be recognized using the deep learning model to obtain the recognition object identifier. The deep learning model includes, for example, an object detection model.

After the real object identifier and the recognition object identifier are obtained, the image recognition accuracy of the deep learning model can be evaluated based on them. For example, a first index value for the recognition object identifier is determined with the real object identifier as a reference, and a second index value for the real object identifier is determined with the recognition object identifier as a reference; the first index value and the second index value characterize, for example, the accuracy or error rate of the recognition object identifier. The image recognition accuracy is then determined based on the first index value and the second index value.
According to the embodiments of the present disclosure, evaluating the image recognition accuracy of the deep learning model based on the real object identifier and the recognition object identifier improves the evaluation effect and reduces the amount of calculation; the evaluation is convenient and accurate, and its computational complexity is low.
Fig. 3 schematically shows a schematic diagram of determining a first index value according to an embodiment of the present disclosure.
As shown in fig. 3, there are, for example, N image sets, acquired for example by N data acquisition devices, where N is an integer greater than 1.

M_1 image sets are determined from the N image sets, the recognition object identifier being obtained by recognizing these M_1 image sets, where M_1 is not greater than N. For example, the deep learning model recognizes the N image sets respectively to obtain recognition results indicating that each of the M_1 image sets contains a certain target object; the recognition result is a prediction of the deep learning algorithm and is not necessarily completely accurate. The recognized target object is labelled, giving the recognition object identifier "A_1", i.e., each of the M_1 image sets carries the recognition object identifier "A_1".

For the target object recognized by the deep learning model, the corresponding recognition object identifier in the M_1 image sets is "A_1". Next, the real object identifiers of the target objects corresponding to the recognition object identifier "A_1" in each of the M_1 image sets are determined, i.e., the M_1 real object identifiers corresponding to "A_1" in the M_1 image sets. The M_1 real object identifiers are, for example, "A_1", "A_2", "A_3", "A_1", ..., and may be represented as the set id_1. It can be seen that the recognition results of the deep learning model include both correct and wrong results.

Next, a first index value for the recognition object identifier is determined based on the M_1 real object identifiers.
Illustratively, based on the M_1 real object identifiers, a first value TP characterizing correct image recognition and a second value FP characterizing wrong image recognition are determined.

For example, duplicate real object identifiers are removed from the M_1 real object identifiers, leaving K_1 real object identifiers, where K_1 is not greater than M_1. The remaining K_1 real object identifiers include "A_1", "A_2", "A_3", ..., "A_K_1".

The difference between M_1 and K_1 is determined as the first value TP, as in equation (6):

TP = M_1 - K_1 = M_1 - len(set(id_1))    (6)

where id_1 includes the M_1 real object identifiers "A_1", "A_2", "A_3", "A_1", ...; set(id_1) denotes the K_1 real object identifiers remaining after de-duplicating the M_1 real object identifiers; and len(set(id_1)) denotes the number of the K_1 real object identifiers, i.e., K_1 = len(set(id_1)).

The second value FP is determined based on K_1, as in equation (7):

FP = K_1 - 1 = len(set(id_1)) - 1    (7)

Finally, the first value TP and the second value FP are taken as the first index value for the recognition object identifier.
According to the embodiments of the present disclosure, the first value TP characterizes the correct recognition results of the deep learning model for the target object, and the second value FP characterizes its wrong recognition results; the first index value characterizing the accuracy of the recognition object identifier is obtained based on TP and FP.

It can be understood that determining the first index value from the first and second values is computationally convenient and accurate, reducing calculation complexity.
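Equations (6) and (7) can be sketched directly; the list-of-strings input layout is an illustrative assumption:

```python
def first_index_value(real_ids):
    """Equations (6) and (7): given the real object identifiers of the M1
    image sets that all received the same recognition object identifier,
    count correct recognitions (TP) and wrong ones (FP)."""
    m1 = len(real_ids)
    k1 = len(set(real_ids))  # K1 = len(set(id_1)): identifiers after de-duplication
    tp = m1 - k1             # TP = M1 - K1: repeats of the true ID are correct hits
    fp = k1 - 1              # FP = K1 - 1: every distinct ID beyond the true one is an error
    return tp, fp
```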
Fig. 4 schematically illustrates a schematic diagram of determining a second index value according to an embodiment of the present disclosure.
As shown in fig. 4, there are, for example, N image sets, where N is an integer greater than 1.

M_2 image sets are determined from the N image sets, each of the M_2 image sets containing the target object corresponding to a real object identifier, where M_2 is not greater than N. In other words, for a target object, the real object identifier "B_1" of the target object is obtained, and the M_2 image sets containing that target object are determined from the N image sets.

For the M_2 image sets, the deep learning model recognizes each of them to obtain recognition results indicating whether the target object exists in each of the M_2 image sets.

For example, the M_2 recognition object identifiers corresponding to the real object identifier in the M_2 image sets are determined. The M_2 recognition object identifiers are, for example, "B_1", "B_2", "B_3", "B_1", ..., and may be represented as the set id_2. If a recognition object identifier is consistent with the real object identifier "B_1", the recognition result of the deep learning model for that image set is correct; otherwise, the recognition result is wrong.
Next, the second index value for the real object identifier is determined based on the M2 recognition object identifiers.
For example, repeated recognition object identifiers are removed from the M2 recognition object identifiers to obtain the remaining K2 recognition object identifiers, where K2 is not greater than M2. The remaining K2 recognition object identifiers include "B_1", "B_2", "B_3", ......, "B_K2".
Next, the second index value FN for the real object identifier is determined based on the K2 recognition object identifiers, as shown in equation (8):

FN = K2 - 1 = len(set(id_2)) - 1    (8)

where id_2 includes "B_1", "B_2", "B_3", ......, "B_M2"; set(id_2) denotes the K2 recognition object identifiers remaining after de-duplicating the M2 recognition object identifiers; and len(set(id_2)) denotes the number of remaining recognition object identifiers, i.e., K2 = len(set(id_2)).
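Equation (8) maps directly to code. The following sketch mirrors it exactly; the `id_2` list is an illustrative example rather than data from the disclosure:

```python
def second_index_value(id_2):
    """Second index value FN per equation (8): FN = len(set(id_2)) - 1.

    id_2: recognition object identifiers of the M2 image sets that
    actually contain the target object (real identifier, e.g. "B_1").
    """
    return len(set(id_2)) - 1

# "B_1" appears twice, so de-duplication leaves 3 identifiers and FN = 2.
fn = second_index_value(["B_1", "B_2", "B_3", "B_1"])  # → 2
```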
According to the embodiment of the present disclosure, the second index value FN characterizes incorrect recognition results of the deep learning model for the target object. Determining the second index value in this way is fast to compute, highly accurate, and of low computational complexity.
According to the embodiment of the present disclosure, after the first index value and the second index value are obtained, the image recognition accuracy may be determined based on the first index value and the second index value.
For example, the precision rate is obtained based on the first index value, and the recall rate is obtained based on the first index value and the second index value.
In an example, the image recognition accuracy rate can be determined based on the precision rate alone, or based on the recall rate alone. Alternatively, to improve accuracy, the precision rate and the recall rate may be combined to determine the image recognition accuracy rate.
For example, the first index value includes the first numerical value TP and the second numerical value FP, and the precision rate may be obtained based on the first index value. For example, a first sum of the first numerical value TP and the second numerical value FP is determined, and a ratio between the first numerical value TP and the first sum is determined as the precision rate. The precision rate Precision is shown in equation (9):

Precision = TP / (TP + FP)    (9)
illustratively, the second index value is denoted as FN. The recall rate may be obtained based on the first index value (including the first numerical value TP and the second numerical value FP) and the second index value FN.
For example, a second sum of the first numerical value TP and the second index value FN is determined, and a ratio between the first numerical value TP and the second sum is determined as the recall rate. The recall rate Recall is shown in equation (10):

Recall = TP / (TP + FN)    (10)
After the precision rate and the recall rate are obtained, the image recognition accuracy rate can be determined based on the precision rate and the recall rate. The image recognition accuracy rate F is shown in formula (11):

F = 2 × Precision × Recall / (Precision + Recall)    (11)
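Putting the precision rate, recall rate, and their combination together, a minimal sketch of the final computation might look as follows. Note the assumption that F is the standard F1 score (the harmonic mean of precision and recall); the disclosure only states that the two rates are combined:

```python
def image_recognition_accuracy(tp, fp, fn):
    """Precision, recall, and their combination F.

    Assumes F is the standard F1 score, i.e. the harmonic
    mean of precision and recall.
    """
    precision = tp / (tp + fp)                         # ratio of TP to first sum
    recall = tp / (tp + fn)                            # ratio of TP to second sum
    f = 2 * precision * recall / (precision + recall)  # assumed F1 combination
    return precision, recall, f

# Hypothetical values: TP = 3, FP = 1, FN = 2.
p, r, f = image_recognition_accuracy(tp=3, fp=1, fn=2)  # → (0.75, 0.6, 0.666...)
```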
According to the embodiment of the present disclosure, determining the image recognition accuracy rate from both the precision rate and the recall rate makes the accuracy rate more reliable. The image recognition accuracy rate characterizes the recognition accuracy of the deep learning model across a plurality of image sets (across shots). Evaluating the recognition effect of the deep learning model through this accuracy rate therefore improves the evaluation, allows the model to be evaluated in time, and makes it convenient to adjust the model promptly to improve its recognition effect.
Fig. 5 schematically shows a block diagram of an apparatus for determining an image recognition accuracy according to an embodiment of the present disclosure.
As shown in fig. 5, the apparatus 500 for determining an image recognition accuracy rate according to an embodiment of the present disclosure includes, for example, an acquisition module 510, a first determination module 520, a second determination module 530, and a third determination module 540.
The obtaining module 510 may be configured to obtain, for a target object in the image set, a real object identifier for the target object and a recognition object identifier for the target object, where the recognition object identifier is obtained by recognizing the target object in the image set. According to the embodiment of the present disclosure, the obtaining module 510 may perform, for example, the operation S210 described above with reference to fig. 2, which is not described herein again.
The first determining module 520 may be configured to determine a first index value for the recognition object identifier with the real object identifier as a reference. According to the embodiment of the present disclosure, the first determining module 520 may perform, for example, operation S220 described above with reference to fig. 2, which is not described herein again.
The second determining module 530 may be configured to determine a second index value for the real object identifier with reference to the identification object identifier. According to an embodiment of the present disclosure, the second determining module 530 may perform, for example, the operation S230 described above with reference to fig. 2, which is not described herein again.
The third determination module 540 may be configured to determine the image recognition accuracy based on the first index value and the second index value. According to an embodiment of the present disclosure, the third determining module 540 may, for example, perform operation S240 described above with reference to fig. 2, which is not described herein again.
According to an embodiment of the present disclosure, the image sets include N image sets, N being an integer greater than 1, and the first determination module 520 includes a first determining submodule, a second determining submodule, and a third determining submodule. The first determining submodule is configured to determine M1 image sets from the N image sets, where the recognition object identifier is obtained by identifying the M1 image sets and M1 is not greater than N. The second determining submodule is configured to determine M1 real object identifiers corresponding one-to-one with the M1 image sets for the recognition object identifier. The third determining submodule is configured to determine the first index value for the recognition object identifier based on the M1 real object identifiers.
According to an embodiment of the present disclosure, the third determining submodule includes a first determination unit and a second determination unit. The first determination unit is configured to determine, based on the M1 real object identifiers, a first numerical value characterizing correct image recognition and a second numerical value characterizing incorrect image recognition. The second determination unit is configured to determine the first numerical value and the second numerical value as the first index value for the recognition object identifier.
According to an embodiment of the present disclosure, the first determination unit includes a removal subunit, a first determining subunit, and a second determining subunit. The removal subunit is configured to remove repeated real object identifiers from the M1 real object identifiers to obtain the remaining K1 real object identifiers, K1 not greater than M1. The first determining subunit is configured to determine the first numerical value based on M1 and K1. The second determining subunit is configured to determine the second numerical value based on K1.
According to an embodiment of the present disclosure, the image sets include N image sets, N being an integer greater than 1, and the second determining module 530 includes a fourth determining submodule, a fifth determining submodule, and a sixth determining submodule. The fourth determining submodule is configured to determine M2 image sets from the N image sets, where each of the M2 image sets includes a target object corresponding to the real object identifier and M2 is not greater than N. The fifth determining submodule is configured to determine M2 recognition object identifiers corresponding one-to-one with the M2 image sets for the real object identifier. The sixth determining submodule is configured to determine the second index value for the real object identifier based on the M2 recognition object identifiers.
According to an embodiment of the present disclosure, the sixth determining submodule includes a removal unit and a third determination unit. The removal unit is configured to remove repeated recognition object identifiers from the M2 recognition object identifiers to obtain the remaining K2 recognition object identifiers, K2 not greater than M2. The third determination unit is configured to determine the second index value for the real object identifier based on K2.
According to an embodiment of the present disclosure, the third determining module 540 includes a first obtaining submodule, a second obtaining submodule, and a seventh determining submodule. The first obtaining submodule is used for obtaining the precision rate based on the first index value; the second obtaining submodule is used for obtaining the recall rate based on the first index value and the second index value; and the seventh determining submodule is used for determining the image recognition accuracy rate based on the precision rate and the recall rate.
According to an embodiment of the present disclosure, the first index value includes a first value and a second value; the first obtaining submodule includes: a fourth determination unit and a fifth determination unit. A fourth determination unit configured to determine a first sum of the first numerical value and the second numerical value; a fifth determining unit for determining a ratio between the first value and the first sum as the precision rate.
According to an embodiment of the present disclosure, the second obtaining sub-module includes: a sixth determining unit and a seventh determining unit. A sixth determining unit for determining a second sum of the first numerical value and the second index value; a seventh determining unit for determining a ratio between the first value and the second sum value as a recall rate.
According to an embodiment of the present disclosure, the apparatus 500 may further include: and the processing module is used for identifying the target object in the image set by using the deep learning model to obtain an identification object identifier.
According to the embodiment of the disclosure, the N image sets are acquired by the N data acquisition devices respectively, and the image sets comprise video data.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure, application, and other handling of the personal information of the users involved all comply with relevant laws and regulations, necessary confidentiality measures are taken, and public order and good customs are not violated.
In the technical scheme of the disclosure, before the personal information of the user is acquired or collected, the authorization or the consent of the user is acquired.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
According to an embodiment of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the above-described method of determining image recognition accuracy.
According to an embodiment of the present disclosure, there is provided a computer program product comprising computer programs/instructions which, when executed by a processor, implement the method of determining image recognition accuracy described above.
FIG. 6 is a block diagram of an electronic device for determining image recognition accuracy, used to implement embodiments of the present disclosure.
FIG. 6 illustrates a schematic block diagram of an example electronic device 600 that can be used to implement embodiments of the present disclosure. The electronic device 600 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 can also be stored. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special- or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable image recognition accuracy determination apparatus, such that the program codes, when executed by the processor or controller, cause the functions/acts specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combining a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
Claims (24)
1. A method for determining image recognition accuracy rate comprises the following steps:
acquiring, for respective target objects in N image sets, a real object identifier for the target object and a recognition object identifier for the target object, wherein the recognition object identifier is obtained by identifying the target object in the image sets; the N image sets include M1 image sets and M2 image sets, each of the M1 image sets includes a target object corresponding to a same recognition object identifier, and each of the M2 image sets includes a target object corresponding to a same real object identifier; N is an integer greater than 1, M1 is not greater than N, and M2 is not greater than N;
determining a first index value for the recognition object identifier with the real object identifier as a reference, wherein the first index value includes a second numerical value determined based on K1, K1 characterizing the number of real object identifiers remaining after a de-duplication operation is performed on the M1 real object identifiers corresponding one-to-one with the M1 image sets, and K1 is not greater than M1;
determining, with the recognition object identifier as a reference, a second index value for the real object identifier based on K2, K2 characterizing the number of recognition object identifiers remaining after a de-duplication operation is performed on the M2 recognition object identifiers corresponding one-to-one with the M2 image sets, and K2 is not greater than M2; and
determining the image recognition accuracy rate based on the first index value and the second index value.
2. The method according to claim 1, wherein the determining a first index value for the recognition object identifier with the real object identifier as a reference comprises:
determining M1 image sets from the N image sets, wherein the recognition object identifier is obtained by identifying the M1 image sets;
determining M1 real object identifiers corresponding one-to-one with the M1 image sets for the recognition object identifier; and
determining the first index value for the recognition object identifier based on the M1 real object identifiers.
3. The method of claim 2, wherein the determining the first index value for the recognition object identifier based on the M1 real object identifiers comprises:
determining, based on the M1 real object identifiers, a first numerical value characterizing correct image recognition and a second numerical value characterizing incorrect image recognition; and
determining the first numerical value and the second numerical value as the first index value for the recognition object identifier.
4. The method of claim 3, wherein the determining, based on the M1 real object identifiers, a first numerical value characterizing correct image recognition and a second numerical value characterizing incorrect image recognition comprises:
removing repeated real object identifiers from the M1 real object identifiers to obtain the remaining K1 real object identifiers;
determining the first numerical value based on M1 and K1; and
determining the second numerical value based on K1.
5. The method according to any of claims 1-4, wherein the determining a second index value for the real object identifier with the recognition object identifier as a reference comprises:
determining M2 image sets from the N image sets, wherein each of the M2 image sets includes a target object corresponding to the real object identifier;
determining M2 recognition object identifiers corresponding one-to-one with the M2 image sets for the real object identifier; and
determining the second index value for the real object identifier based on the M2 recognition object identifiers.
6. The method of claim 5, wherein the determining the second index value for the real object identifier based on the M2 recognition object identifiers comprises:
removing repeated recognition object identifiers from the M2 recognition object identifiers to obtain the remaining K2 recognition object identifiers; and
determining the second index value for the real object identifier based on K2.
7. The method of any of claims 1-4, wherein the determining an image recognition accuracy rate based on the first index value and the second index value comprises:
obtaining a precision rate based on the first index value;
obtaining a recall rate based on the first index value and the second index value; and
determining the image recognition accuracy rate based on the accuracy rate and the recall rate.
8. The method of claim 7, wherein the first index value comprises a first value and a second value; the obtaining of the precision rate based on the first index value comprises:
determining a first sum of said first value and said second value; and
determining a ratio between the first value and the first sum as the precision rate.
9. The method of claim 8, wherein said deriving a recall based on the first indicator value and the second indicator value comprises:
determining a second sum of the first numerical value and the second index value; and
determining a ratio between the first value and the second sum value as the recall rate.
10. The method of claim 1, further comprising:
identifying the target object in the image sets by using a deep learning model to obtain the recognition object identifier.
11. The method of claim 1, wherein the N image sets are acquired by N data acquisition devices, respectively, the image sets comprising video data.
12. An apparatus for determining an image recognition accuracy rate, comprising:
an obtaining module, configured to obtain, for respective target objects in N image sets, a real object identifier for the target object and a recognition object identifier for the target object, wherein the recognition object identifier is obtained by identifying the target object in the image sets; the N image sets include M1 image sets and M2 image sets, each of the M1 image sets includes a target object corresponding to a same recognition object identifier, and each of the M2 image sets includes a target object corresponding to a same real object identifier; N is an integer greater than 1, M1 is not greater than N, and M2 is not greater than N;
a first determination module, configured to determine a first index value for the recognition object identifier with the real object identifier as a reference, wherein the first index value includes a second numerical value determined based on K1, K1 characterizing the number of real object identifiers remaining after a de-duplication operation is performed on the M1 real object identifiers corresponding one-to-one with the M1 image sets, and K1 is not greater than M1;
a second determination module, configured to determine, with the recognition object identifier as a reference, a second index value for the real object identifier based on K2, K2 characterizing the number of recognition object identifiers remaining after a de-duplication operation is performed on the M2 recognition object identifiers corresponding one-to-one with the M2 image sets, and K2 is not greater than M2; and
a third determination module, configured to determine the image recognition accuracy rate based on the first index value and the second index value.
13. The apparatus of claim 12, wherein the first determination module comprises:
a first determining submodule, configured to determine M1 image sets from the N image sets, wherein the recognition object identifier is obtained by identifying the M1 image sets;
a second determining submodule, configured to determine M1 real object identifiers corresponding one-to-one with the M1 image sets for the recognition object identifier; and
a third determining submodule, configured to determine the first index value for the recognition object identifier based on the M1 real object identifiers.
14. The apparatus of claim 13, wherein the third determination submodule comprises:
a first determination unit, configured to determine, based on the M1 real object identifiers, a first numerical value characterizing correct image recognition and a second numerical value characterizing incorrect image recognition; and
a second determination unit, configured to determine the first numerical value and the second numerical value as the first index value for the recognition object identifier.
15. The apparatus of claim 14, wherein the first determining unit comprises:
a removal subunit, configured to remove repeated real object identifiers from the M1 real object identifiers to obtain the remaining K1 real object identifiers;
a first determining subunit, configured to determine the first numerical value based on M1 and K1; and
a second determining subunit, configured to determine the second numerical value based on K1.
16. The apparatus of any of claims 12-15, wherein the second determination module comprises:
a fourth determining submodule, configured to determine M2 image sets from the N image sets, wherein each of the M2 image sets includes a target object corresponding to the real object identifier;
a fifth determining submodule, configured to determine M2 recognition object identifiers corresponding one-to-one with the M2 image sets for the real object identifier; and
a sixth determining submodule, configured to determine the second index value for the real object identifier based on the M2 recognition object identifiers.
17. The apparatus of claim 16, wherein the sixth determination submodule comprises:
a removal unit, configured to remove repeated recognition object identifiers from the M2 recognition object identifiers to obtain the remaining K2 recognition object identifiers; and
a third determination unit, configured to determine the second index value for the real object identifier based on K2.
18. The apparatus of any of claims 12-15, wherein the third determination module comprises:
the first obtaining submodule is used for obtaining the precision rate based on the first index value;
the second obtaining submodule is used for obtaining a recall rate based on the first index value and the second index value; and
a seventh determining sub-module, configured to determine the image recognition accuracy rate based on the accuracy rate and the recall rate.
19. The apparatus of claim 18, wherein the first index value comprises a first value and a second value; the first obtaining sub-module includes:
a fourth determination unit configured to determine a first sum of the first numerical value and the second numerical value; and
a fifth determining unit, configured to determine a ratio between the first numerical value and the first sum as the precision rate.
20. The apparatus of claim 19, wherein the second obtaining submodule comprises:
a sixth determining unit configured to determine a second sum of the first numerical value and the second index value; and
a seventh determining unit configured to determine a ratio between the first numerical value and the second sum value as the recall rate.
21. The apparatus of claim 12, further comprising:
and the processing module is used for identifying the target object in the image set by using a deep learning model to obtain the identification object identifier.
22. The apparatus of claim 12, wherein the N image sets are acquired by N data acquisition devices, respectively, the image sets comprising video data.
23. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-11.
24. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210541027.6A CN114972807B (en) | 2022-05-17 | 2022-05-17 | Method and device for determining image recognition accuracy, electronic equipment and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114972807A (en) | 2022-08-30 |
CN114972807B (en) | 2023-03-28 |
Family
ID=82983420
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210541027.6A Active CN114972807B (en) | 2022-05-17 | 2022-05-17 | Method and device for determining image recognition accuracy, electronic equipment and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114972807B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112633384A (en) * | 2020-12-25 | 2021-04-09 | 北京百度网讯科技有限公司 | Object identification method and device based on image identification model and electronic equipment |
CN113591592A (en) * | 2021-07-05 | 2021-11-02 | 珠海云洲智能科技股份有限公司 | Overwater target identification method and device, terminal equipment and storage medium |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111126514A (en) * | 2020-03-30 | 2020-05-08 | 同盾控股有限公司 | Image multi-label classification method, device, equipment and medium |
CN112270252A (en) * | 2020-10-26 | 2021-01-26 | 西安工程大学 | Multi-vehicle target identification method for improving YOLOv2 model |
CN112464766B (en) * | 2020-11-17 | 2024-07-02 | 北京农业智能装备技术研究中心 | Automatic farmland land identification method and system |
CN113344055B (en) * | 2021-05-28 | 2023-08-22 | 北京百度网讯科技有限公司 | Image recognition method, device, electronic equipment and medium |
CN114332533A (en) * | 2021-12-24 | 2022-04-12 | 中国地质大学(武汉) | Landslide image identification method and system based on DenseNet |
CN114492764A (en) * | 2022-02-21 | 2022-05-13 | 深圳市商汤科技有限公司 | Artificial intelligence model testing method and device, electronic equipment and storage medium |
Similar Documents
Publication | Title |
---|---|
US10528844B2 (en) | Method and apparatus for distance measurement |
WO2022166207A1 (en) | Face recognition method and apparatus, device, and storage medium |
CN110879986A (en) | Face recognition method, apparatus and computer-readable storage medium |
CN115359308B (en) | Model training method, device, equipment, storage medium and program for identifying difficult cases |
CN114972807B (en) | Method and device for determining image recognition accuracy, electronic equipment and medium |
CN115953434B (en) | Track matching method, track matching device, electronic equipment and storage medium |
US20220392192A1 (en) | Target re-recognition method, device and electronic device |
JP6244887B2 (en) | Information processing apparatus, image search method, and program |
CN113255484B (en) | Video matching method, video processing device, electronic equipment and medium |
CN112966609B (en) | Target detection method and device |
CN113177479B (en) | Image classification method, device, electronic equipment and storage medium |
CN114648735A (en) | Flame detection method, system, device and storage medium |
CN115393755A (en) | Visual target tracking method, device, equipment and storage medium |
CN115116130A (en) | Call action recognition method, device, equipment and storage medium |
CN111401197B (en) | Picture risk identification method, device and equipment |
CN108875638B (en) | Face matching test method, device and system |
CN114120410A (en) | Method, apparatus, device, medium and product for generating label information |
CN113361402A (en) | Training method of recognition model, method, device and equipment for determining accuracy |
CN111898529A (en) | Face detection method and device, electronic equipment and computer readable medium |
CN114115640B (en) | Icon determination method, device, equipment and storage medium |
CN117746069B (en) | Graph searching model training method and graph searching method |
CN114564398A (en) | Test method and device, electronic equipment and storage medium |
CN114299595A (en) | Face recognition method, face recognition device, face recognition equipment, storage medium and program product |
CN117115892A (en) | Face recognition method, device, electronic equipment and computer readable storage medium |
CN114331798A (en) | Image data processing method, image data detection method, device and electronic equipment |
Legal Events
Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |