CN110781813A - Image recognition method and device, electronic equipment and storage medium - Google Patents

Image recognition method and device, electronic equipment and storage medium

Info

Publication number
CN110781813A
CN110781813A
Authority
CN
China
Prior art keywords
image
recognition
images
category
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911018602.9A
Other languages
Chinese (zh)
Other versions
CN110781813B (en)
Inventor
黄怀毅
章余琪
栾智荣
杨磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN201911018602.9A priority Critical patent/CN110781813B/en
Publication of CN110781813A publication Critical patent/CN110781813A/en
Application granted granted Critical
Publication of CN110781813B publication Critical patent/CN110781813B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Abstract

The present disclosure relates to an image recognition method and apparatus, an electronic device, and a storage medium, the method including: performing face recognition on an image to be processed including a target object to obtain a first recognition result of the target object, wherein the first recognition result includes a plurality of category labels of the target object and a first probability of each category label; under the condition that the first recognition result does not meet the recognition condition, correcting the first probability of all or part of the category labels in the first recognition result according to the reference categories of the objects in a plurality of first images of a trust set, and determining a second recognition result of the target object; and determining the category of the target object in the image to be processed according to the second recognition result. Embodiments of the present disclosure can improve recognition accuracy.

Description

Image recognition method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image recognition method and apparatus, an electronic device, and a storage medium.
Background
Face recognition is one of the important tasks in the field of computer vision. Based on existing large-scale face recognition data sets and deep learning networks, face recognition can reach high accuracy. In practice, however, face recognition faces many challenges: its performance may be affected by human pose, picture quality, environmental conditions, and the like, reducing recognition accuracy.
Disclosure of Invention
The present disclosure provides an image recognition technical solution.
According to an aspect of the present disclosure, there is provided an image recognition method including: performing face recognition on an image to be processed including a target object to obtain a first recognition result of the target object, wherein the first recognition result includes a plurality of category labels of the target object and a first probability of each category label; under the condition that the first identification result does not meet the identification condition, correcting the first probability of all or part of category labels in the first identification result according to the reference categories of the objects in a plurality of first images of a trust set, and determining the second identification result of the target object; and determining the category of the target object in the image to be processed according to the second recognition result.
In a possible implementation manner, the second recognition result includes a second probability of the all or part of the category labels, wherein the correcting the first recognition result according to the reference category of the object in the plurality of first images of the trust set, and determining the second recognition result of the target object includes:
determining a second person relation matrix according to the multiple reference categories of the trust set and a preset first person relation matrix; and correcting the first probability of all or part of the category labels in the first identification result according to the plurality of reference categories and the second person relation matrix to obtain a second probability of all or part of the category labels.
In one possible implementation, the method further includes: and determining the category of the target object in the image to be processed according to the first recognition result under the condition that the first recognition result meets the recognition condition.
In one possible implementation, the method further includes: and if the first recognition result meets the recognition condition, adding the to-be-processed image and the category of the target object in the to-be-processed image into the trust set.
In one possible implementation, the method further includes: identifying second images of a plurality of first data in an acquired first data set, and determining a third image from the second images of the plurality of first data, wherein each first data in the first data set comprises a text and a second image corresponding to the text, and the third image is labeled with a class label of an object; labeling the texts of the plurality of first data respectively to obtain text labeling information of the plurality of first data; determining a second data set according to the text labeling information of the plurality of first data and the category label of the object in the third image; all or part of the third images in the second data set and class labels of objects in the third images are used for training a first recognition network, and the first recognition network is used for carrying out face recognition on the images to be processed.
In one possible implementation, the method further includes: and determining the first human relationship matrix according to the text labeling information of the plurality of first data and the class label of the object in the third image.
In one possible implementation, the identifying a second image of the plurality of first data in the acquired first data set and determining a third image from the second image of the plurality of first data includes: respectively carrying out face detection on the second images of the plurality of first data, and determining a plurality of fourth images comprising faces from the plurality of second images; and identifying the plurality of fourth images comprising human faces by using a second identification network, and determining the third image from the plurality of fourth images comprising human faces.
In one possible implementation, the method further includes: and training the second recognition network according to a preset third data set, wherein the third data set comprises a plurality of labeled sample images.
In one possible implementation, the identification condition includes: a third probability of the first probabilities of the category labels is greater than or equal to a first threshold, and a difference between the third probability and a fourth probability is greater than or equal to a second threshold, wherein the third probability is a maximum of the first probabilities of the category labels, and the fourth probability is a maximum of the first probabilities of the category labels except for the third probability.
According to an aspect of the present disclosure, there is provided an image recognition apparatus including: the system comprises a first identification module, a second identification module and a third identification module, wherein the first identification module is used for carrying out face identification on an image to be processed comprising a target object to obtain a first identification result of the target object, and the first identification result comprises a plurality of class labels of the target object and first probabilities of the class labels; a result correction module, configured to, when the first recognition result does not satisfy the recognition condition, correct, according to a reference category of an object in a plurality of first images of a trust set, a first probability of all or part of category labels in the first recognition result, and determine a second recognition result of the target object; and the first class determination module is used for determining the class of the target object in the image to be processed according to the second recognition result.
In one possible implementation, the second recognition result includes a second probability of the all or part of the category labels, wherein the result correction module includes: the relation determining submodule is used for determining a second person relation matrix according to the multiple reference categories of the trust set and a preset first person relation matrix; and the correction submodule is used for correcting the first probability of all or part of the category labels in the first identification result according to the plurality of reference categories and the second person relation matrix to obtain a second probability of all or part of the category labels.
In one possible implementation, the apparatus further includes: and the second category determination module is used for determining the category of the target object in the image to be processed according to the first recognition result under the condition that the first recognition result meets the recognition condition.
In one possible implementation, the apparatus further includes: and the category adding module is used for adding the categories of the image to be processed and the target object in the image to be processed into the trust set under the condition that the first recognition result meets the recognition condition.
In one possible implementation, the apparatus further includes: the second identification module is used for identifying second images of a plurality of pieces of first data in the acquired first data set and determining a third image from the second images of the plurality of pieces of first data, wherein each piece of first data in the first data set comprises a text and the second image corresponding to the text, and the third image is labeled with a class label of an object; the text labeling module is used for labeling the texts of the plurality of first data respectively to obtain text labeling information of the plurality of first data; the data set determining module is used for determining a second data set according to the text labeling information of the plurality of first data and the category label of the object in the third image; all or part of the third images in the second data set and class labels of objects in the third images are used for training a first recognition network, and the first recognition network is used for carrying out face recognition on the images to be processed.
In one possible implementation, the apparatus further includes: and the relationship determining module is used for determining the first human relationship matrix according to the text labeling information of the plurality of first data and the class label of the object in the third image.
In one possible implementation, the second identification module includes: the face detection submodule is used for respectively carrying out face detection on the second images of the plurality of first data and determining a plurality of fourth images comprising faces from the plurality of second images; and the image recognition submodule is used for recognizing the plurality of fourth images comprising human faces by using a second recognition network and determining the third image from the plurality of fourth images comprising human faces.
In one possible implementation, the apparatus further includes: and the network training module is used for training the second recognition network according to a preset third data set, wherein the third data set comprises a plurality of labeled sample images.
In one possible implementation, the identification condition includes: a third probability of the first probabilities of the category labels is greater than or equal to a first threshold, and a difference between the third probability and a fourth probability is greater than or equal to a second threshold, wherein the third probability is a maximum of the first probabilities of the category labels, and the fourth probability is a maximum of the first probabilities of the category labels except for the third probability.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to: the above method is performed.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
According to the embodiment of the disclosure, the object in the image can be identified to obtain the identification result, when the identification result does not meet the identification condition, the identification result is determined again according to the reference category of the object in the trust set, and the category of the object in the image is determined according to the re-determined identification result, so that the identification precision is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flow chart of an image recognition method according to an embodiment of the present disclosure.
Fig. 2 illustrates a block diagram of an image recognition apparatus according to an embodiment of the present disclosure.
Fig. 3 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Fig. 4 shows a block diagram of an electronic device in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 illustrates a flowchart of an image recognition method according to an embodiment of the present disclosure, as illustrated in fig. 1, the image recognition method including:
in step S11, performing face recognition on an image to be processed including a target object to obtain a first recognition result of the target object, where the first recognition result includes a plurality of category labels of the target object and a first probability of each category label;
in step S12, when the first recognition result does not satisfy the recognition condition, correcting a first probability of all or part of the category labels in the first recognition result according to the reference categories of the objects in the plurality of first images of the trust set, and determining a second recognition result of the target object;
in step S13, a category of a target object in the image to be processed is determined according to the second recognition result.
According to the embodiment of the disclosure, the object in the image can be identified to obtain the identification result, when the identification result does not meet the identification condition, the identification result is determined again according to the reference category of the object in the trust set, and the category of the object in the image is determined according to the re-determined identification result, so that the identification precision is improved.
In one possible implementation, the image recognition method may be performed by an electronic device such as a terminal device or a server. The terminal device may be a User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like, and the method may be implemented by a processor calling computer-readable instructions stored in a memory. Alternatively, the method may be performed by a server.
In one possible implementation, the image to be processed may be an image obtained in any manner, such as an image captured by a capture device (e.g., a camera), an image downloaded via the internet or obtained in other manners (e.g., an image in news), and so on. The target object may be a person in the image.
In a possible implementation manner, in step S11, facial feature extraction may be performed on the image to be processed to obtain facial feature information of the target object in the image to be processed, and the facial feature information is compared against known identities to determine the first recognition result of the target object. The first recognition result may include a plurality of category labels (e.g., person names) of the target object and the first probability of each category label (e.g., the probability that the target object is person A is 0.5, the probability that it is person B is 0.3, and so on).
In one possible implementation, face recognition may be performed on the image to be processed in various ways (e.g., by neural networks). For example, any one of the trained neural networks ResNet-18, ResNet-34, ResNet-50, and ResNet-101 can be used as the recognition network (referred to as the first recognition network) to perform face recognition on the image to be processed. The present disclosure is not limited to the specific manner of face recognition or the type of network.
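Purely as an illustration (the disclosure does not fix a framework or implementation), a minimal Python/PyTorch sketch of such a first recognition network, assuming a torchvision ResNet-50 whose classifier head is resized to a hypothetical number of person-name classes:

```python
import torch
import torchvision.models as models

NUM_CLASSES = 1000  # hypothetical number of category labels (person names)

# ResNet-50 backbone with its final layer resized to the class count.
model = models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, NUM_CLASSES)
model.eval()

def first_recognition(image_tensor: torch.Tensor) -> torch.Tensor:
    """Return the first probabilities of all category labels for one
    preprocessed face image of shape (1, 3, H, W)."""
    with torch.no_grad():
        logits = model(image_tensor)
    return torch.softmax(logits, dim=1).squeeze(0)
```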
In a possible implementation manner, after the first recognition result is obtained, whether the first recognition result meets a preset recognition condition may be determined. The recognition condition may include, for example: a third probability among the first probabilities of the respective category labels is greater than or equal to a first threshold, and a difference between the third probability and a fourth probability is greater than or equal to a second threshold, where the third probability is the maximum value of the first probabilities of the category labels, and the fourth probability is the maximum value of the first probabilities of the category labels other than the third probability.
That is, if the maximum value of the first probabilities of the plurality of category labels is sufficiently large (i.e., greater than or equal to the first threshold), and the difference between that maximum value and the second-highest first probability is sufficiently large (i.e., greater than or equal to the second threshold), it may be considered that the category (e.g., the person name) of the target object can be determined directly from the first recognition result, and the recognition condition is satisfied. Otherwise, the category of the target object cannot be determined from the first recognition result, and the recognition condition is not satisfied. The first threshold may be set, for example, to 0.5-0.8 (e.g., 0.6), and the second threshold to 0.2-0.4 (e.g., 0.25); those skilled in the art can set these thresholds according to practical situations, which the present disclosure does not limit.
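For concreteness, the recognition condition can be checked as in the sketch below; the function name and the threshold defaults are illustrative only, taken from the example values above:

```python
def satisfies_recognition_condition(first_probs, t1=0.6, t2=0.25):
    """Recognition condition: the largest first probability (the third
    probability) is at least t1 and exceeds the second-largest first
    probability (the fourth probability) by at least t2."""
    ranked = sorted(first_probs, reverse=True)
    third_prob, fourth_prob = ranked[0], ranked[1]
    return third_prob >= t1 and (third_prob - fourth_prob) >= t2
```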
In one possible implementation, the method may further include: and under the condition that the first recognition result meets the recognition condition, determining the category of the target object in the image to be processed according to the first recognition result. That is, if the first recognition result satisfies the recognition condition, the category label corresponding to the maximum value among the first probabilities of the plurality of category labels may be directly determined as the category of the target object, thereby completing the entire process of image recognition.
In one possible implementation, the method may further include: and if the first recognition result meets the recognition condition, adding the to-be-processed image and the category of the target object in the to-be-processed image into the trust set. The trust set may be a set of faces in images that have been recognized with high confidence from their facial features, and may include the images directly identifiable from the first recognition result together with their recognition results (the category labels of the target objects).
By adding the images to be processed and the categories of which the first identification result meets the identification condition into the trust set, the data volume in the trust set can be continuously expanded, and the richness and the accuracy of the relationship between the persons in the trust set are improved.
In one possible implementation, in step S12, if the first recognition result does not satisfy the recognition condition, the first probability of all or part of the category labels in the first recognition result may be corrected according to the reference category of the object in the plurality of first images of the trust set, and the second recognition result of the target object may be determined. Wherein the second recognition result comprises a second probability of the all or part of the category label. That is, if the category of the target object in the image cannot be directly determined, the recognition result can be improved by further correction processing.
In one possible implementation, step S12 may include: determining a second person relation matrix according to the multiple reference categories of the trust set and a preset first person relation matrix; and correcting the first probability of all or part of the category labels in the first identification result according to the plurality of reference categories and the second person relation matrix to obtain a second probability of all or part of the category labels.
For example, a first person relationship matrix R (an initial person relationship atlas matrix) may be preset, where the first person relationship matrix R is used to represent the person relationships obtained from the texts and the images in the data set, respectively. The data set (NewsNet) may, for example, comprise a plurality of pieces of news data, each piece of news data comprising a text and an image corresponding to the text. The text has been labeled with text labeling information (e.g., person names, locations, events, etc.), and the objects (faces) in the image have been labeled with category labels (e.g., person names).
In one possible implementation, the first person relationship matrix R may be determined by formula (1):

[Formula (1) was published as an image (BDA0002246473820000091) and is not reproduced here; per the surrounding text, it combines A and B over relationship hops, with α as the weight of A and K as the maximum hop count.]

In formula (1), A may represent the normalized person statistical relationships derived from the texts, and B may represent the normalized person statistical relationships derived from the annotated images. If the number of category labels in the data set is m, both A and B are m×m matrices, and each column of A and of B sums to 1.
Here, α denotes the weight of A, and K denotes the maximum number of hops of the person relationships in the data set. For any category labels (person names) a, b, and c: if a and b appear in the same image (or the same text), a and b are related (1-hop related); if b and c appear in the same image (or the same text), b and c are related (1-hop related); correspondingly, if a and c do not appear in the same image (or the same text), but a and b are related and b and c are related, then a and c are considered 2-hop related.
Wherein, for category labels (person names) a and b, A^1_ab may be used to represent the number of times a and b appear in the same text (the number of 1-hop correlations), and A^k_ab may be used to represent the number of k-hop correlations between a and b (1 ≤ k ≤ K); A_k may be used to represent the numbers of k-hop correlations between all category labels of the texts. Correspondingly, B_k may be used to represent the numbers of k-hop correlations between all category labels of the images.
In this way, a person relationship network (person relationship matrix) can be constructed from the person context relationships contained in the texts and images of the data set, and faces that cannot be accurately recognized can be inferred from accurately recognized faces, thereby improving face recognition accuracy.
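As an illustration of how the normalized co-occurrence statistics A and B might be accumulated, a Python sketch follows; since formula (1) itself is published only as an image, the 1-hop-only combination at the end is an assumption, not the patented formula:

```python
import numpy as np

def cooccurrence_matrix(groups, m):
    """Count 1-hop co-occurrences. `groups` is a list of label-index lists,
    one per text (to build A) or per annotated image (to build B); m is the
    number of category labels in the data set."""
    counts = np.zeros((m, m))
    for labels in groups:
        for a in labels:
            for b in labels:
                if a != b:
                    counts[a, b] += 1.0
    # Column-normalize so that each column sums to 1, as stated in the text.
    return counts / np.maximum(counts.sum(axis=0, keepdims=True), 1e-12)

def first_relation_matrix(A, B, alpha=0.5):
    """Hypothetical 1-hop simplification of formula (1): a weighted
    combination of A (weight alpha) and B."""
    return alpha * A + (1 - alpha) * B
```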
As described above, a trust set may be established, including a plurality of first images that can be directly recognized and the reference categories of the objects in those first images. From the multiple reference categories, a person relationship atlas matrix C of the trust set may be established. From the matrix C, a second person relationship matrix R+ (the augmented person context relationship atlas matrix) can be determined, so as to further obtain the second recognition result.
In one possible implementation, the second person relationship matrix R+ (the augmented person context relationship atlas matrix) may be determined from the person relationship atlas matrix C of the trust set and the first person relationship matrix R, as shown in formula (2):

[Formula (2) was published as an image (BDA0002246473820000103) and is not reproduced here; per the surrounding text, it combines R and the trust-set matrix C, with β as the weight of R.]

In formula (2), β represents the weight of R, and the remaining term [published as image BDA0002246473820000104] aggregates the trust-set relationships: L represents the maximum number of hops of the person relationships in the trust set, and C_l may be used to represent the numbers of l-hop correlations between all category labels of the first images in the trust set.
In one possible implementation, according to the second person relationship matrix R+ of formula (2), the first probability of all or part of the category labels in the first recognition result of the target object may be corrected to obtain the second probability of all or part of the category labels. All of the category labels in the first recognition result may be used in the calculation, or part of the category labels may be selected. For example, a fixed number (e.g., 5) of the category labels with the highest first probabilities may be selected, or the category labels whose first probabilities exceed a preset threshold (e.g., 0.2) may be selected. The present disclosure is not limited in this regard.
In one possible implementation, for the j-th category label of the i-th target object, its second probability may be expressed as formula (3):

[Formula (3) was published as an image (BDA0002246473820000111) and is not reproduced here.]

In formula (3), U_i may represent the set of all persons in the trust set that are related to the target object i; for each person u in U_i, c_u denotes the reference category corresponding to u.
In this way, the second probability of all or part of the category labels in the first recognition result (i.e., the second recognition result) may be determined, respectively, and the category of the target object may be determined in step S13 according to the second probability of all or part of the category labels. The category label corresponding to the maximum value in the second probabilities of all or part of the category labels can be determined as the category of the target object, so that the whole process of image recognition is completed.
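Since formulas (2) and (3) are published only as images, the following Python sketch shows one plausible reading rather than the patented computation: R+ blends R with the trust-set matrix C under weight β, and the corrected (second) probability of label j re-weights its first probability by the R+ relatedness between j and the reference categories c_u of the trust-set persons u in U_i. Every functional form, name, and default value here is an assumption:

```python
import numpy as np

def augmented_relation_matrix(R, C, beta=0.5):
    """Assumed form of formula (2): blend the first person relationship
    matrix R with the trust-set relationship matrix C (weight beta for R)."""
    return beta * R + (1 - beta) * C

def second_probabilities(first_probs, U_i, c, R_plus):
    """Assumed form of formula (3): for target object i, boost each category
    label j by the R+ relatedness between j and the reference categories c[u]
    of the related trust-set persons u in U_i, then renormalize."""
    scores = np.array([
        p * (1.0 + sum(R_plus[j, c[u]] for u in U_i))
        for j, p in enumerate(first_probs)
    ])
    return scores / scores.sum()
```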
In this way, faces that cannot be accurately recognized on their own are inferred from accurately recognized faces, using both the person context relationships contained in the texts and images of the data set and the person context relationships contained in the images of the trust set, thereby improving face recognition accuracy.
In a possible implementation manner, before the image recognition to be processed is performed in step S11, the method may further include a data set preparation and processing procedure. Wherein the method may further comprise:
identifying second images of a plurality of first data in an acquired first data set, and determining a third image from the second images of the plurality of first data, wherein each first data in the first data set comprises a text and a second image corresponding to the text, and the third image is labeled with a class label of an object; labeling the texts of the plurality of first data respectively to obtain text labeling information of the plurality of first data; determining a second data set according to the text labeling information of the plurality of first data and the category label of the object in the third image;
all or part of the third images in the second data set and class labels of objects in the third images are used for training a first recognition network, and the first recognition network is used for carrying out face recognition on the images to be processed.
For example, a plurality of pieces of first data may be collected to form a first data set. The first data may include, for example, news data: a large amount (on the order of millions to tens of millions) of news data may be collected from media websites (e.g., the Guardian, the New York Times, the Daily Mail (DailyMail), etc.) as the first data, each piece of first data including a text (news content) and a second image corresponding to the text (a picture corresponding to the news content).
In a possible implementation manner, the text of each piece of first data may be labeled separately to obtain the text labeling information of each piece of first data. For example, the Google Cloud Platform can be used to label the texts and obtain the text labeling information (information such as person names, locations, and events).
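As a sketch of how such labeling might be performed (the disclosure names the Google Cloud Platform but does not specify an API or pipeline, so this use of the Cloud Natural Language client is an assumption):

```python
# pip install google-cloud-language; credentials must be configured.
from google.cloud import language_v1

def label_text(text: str):
    """Extract entity annotations (person names, locations, events, ...)
    from one news text."""
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    response = client.analyze_entities(request={"document": document})
    return [
        (entity.name, language_v1.Entity.Type(entity.type_).name)
        for entity in response.entities
    ]
```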
In one possible implementation, identifying a second image of the plurality of first data in the acquired first data set, and determining a third image from the second image of the plurality of first data may include:
respectively carrying out face detection on the second images of the plurality of first data, and determining a plurality of fourth images comprising faces from the plurality of second images; and identifying the plurality of fourth images comprising human faces by using a second identification network, and determining the third image from the plurality of fourth images comprising human faces.
For example, face detection may be performed on the second image of each piece of first data using the multi-task learning network MTCNN, and a plurality of fourth images including faces may be determined from the second images. Then, the plurality of fourth images including faces can be recognized through the second recognition network, and each fourth image is compared with the faces labeled in the existing Ms1M data set (the MS-Celeb-1M data set) so as to label the person name (category label), so that the third images labeled with the category labels of objects are determined from the fourth images. It should be understood that other labeled data sets in the related art may also be employed, and the present disclosure is not limited in this regard.
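A sketch of this detection-and-matching step, assuming the open-source mtcnn package for detection and an externally computed face embedding; the gallery of labeled reference embeddings and the similarity threshold t are assumptions (t is discussed further below):

```python
import numpy as np
from mtcnn import MTCNN  # pip install mtcnn

detector = MTCNN()

def detect_faces(image_rgb: np.ndarray):
    """Face detection: images with at least one detection become the
    'fourth images'. Returns a list of dicts with 'box' and 'keypoints'."""
    return detector.detect_faces(image_rgb)

def match_identity(face_embedding: np.ndarray, gallery: dict, t: float = 0.5):
    """Compare a face feature vector against labeled gallery embeddings
    (e.g., from an annotated set such as Ms1M); two faces are treated as
    the same person when cosine similarity exceeds t."""
    best_name, best_sim = None, -1.0
    for name, ref in gallery.items():
        sim = float(np.dot(face_embedding, ref)
                    / (np.linalg.norm(face_embedding) * np.linalg.norm(ref)))
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim > t else None
```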
In one possible implementation, the fourth image may be identified by a second identification network. The method may comprise: and training the second recognition network according to a preset third data set, wherein the third data set comprises a plurality of labeled sample images. The third data set may, for example, comprise an annotated Ms1M data set, including an annotated mass of sample images. The second identification network may, for example, comprise ResNet-101, and the specific type of second identification network is not limited by this disclosure.
In one possible implementation, the existing Ms1M data set may be cleaned using ArcFace, removing noisy labels from the Ms1M data set. The second recognition network is then trained using the denoised Ms1M data set as the third data set; the present disclosure does not limit the specific training mode of the second recognition network.
In one possible implementation, the face similarity threshold may be defined as t, and when the similarity between two face features is greater than t, the two faces are considered to be the same person. By adjusting the threshold value t, the face recognition rate of ResNet-101 can reach 99.99%. The present disclosure does not limit the specific value of the threshold t.
In one possible implementation, a large number of category labels (e.g., 1252625 category labels) may be annotated on the fourth images to determine the third images labeled with the category labels of objects. In this way, a second data set (the NewsNet data set) composed of valid data can be constructed from the third images and the texts corresponding to the third images. By this method, a NewsNet data set that has both inexact text-image correspondence and a large data volume can be obtained.
In one possible implementation, the method may further include: and determining the first human relationship matrix according to the text labeling information and the class labels of the objects in the plurality of third images.
As shown in formula (1), the occurrences of the category labels (person names) in the texts can be determined from the text labeling information, so that the text-based person relationships can be determined; the occurrences of the category labels in the images can be determined from the category labels (person names) of the objects in the images, so that the image-based person relationships can be determined; the first person relationship matrix R may then be determined from the text-based and image-based person relationships.
In this way, a person relationship network (person relationship matrix) can be constructed from the person context relationships contained in the texts and images of the data set, and faces that cannot be accurately recognized can be inferred from accurately recognized faces, thereby improving face recognition accuracy.
In one possible implementation, the method according to the present disclosure may be implemented by a neural network, which may include a first recognition network for performing facial recognition on an image to be processed.
Wherein the method further comprises: and training the first recognition network according to all or part of the third images in the second data set and the class labels of the objects in the third images.
For example, the first identification network can be any one of ResNet-18, ResNet-34, ResNet-50 and ResNet-101, and can also be any other type of neural network, and the disclosure does not limit the specific network type of the first identification network.
In one possible implementation, after the second data set (the NewsNet data set) is obtained, all data in the NewsNet data set may be used as a training set; alternatively, one part of the NewsNet data set may be used as a training set and the other part as a test set, for example by dividing the NewsNet data set into a training set and a test set at a 7:3 ratio, taking a text (news item) as the unit. The present disclosure does not limit the specific partitioning of the training set and the test set.
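A minimal sketch of such a 7:3 split, taking one news item (text plus its images) as the unit so that a news item's images never straddle the split; the function name and seed are illustrative:

```python
import random

def split_newsnet(news_items, train_ratio=0.7, seed=0):
    """Split the NewsNet data set into training and test sets at a 7:3
    ratio by news item, keeping each item's images on one side."""
    items = list(news_items)
    random.Random(seed).shuffle(items)
    cut = int(len(items) * train_ratio)
    return items[:cut], items[cut:]
```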
After the division, the training set includes at least a portion of the third images in the second data set and the category labels of the objects in the third images. The first recognition network can be trained on this training set; the present disclosure does not limit the training mode of the first recognition network. In this way, a first recognition network with high accuracy can be obtained.
After the training of the first recognition network is completed, the images in the test set may be input into the first recognition network for processing to obtain a first recognition result (the first probability of each category label) of the object in each image. If the first recognition result satisfies the recognition condition, the image and its category label may be added to the trust set of the test set; if the first recognition result does not satisfy the recognition condition, the augmented person context relationship atlas matrix R+ is determined according to the person relationship atlas matrix C of the current trust set and the first person relationship matrix R, the second probability of each category label is calculated according to R+, and the predicted category of the object in the image is then determined.
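Tying the pieces together, a sketch of this test procedure, with adapters around the helper functions sketched earlier injected as parameters (all of which are assumptions rather than the patented code):

```python
def recognize_with_trust_set(test_images, R, first_recognition,
                             satisfies_recognition_condition,
                             build_trust_matrix, augmented_relation_matrix,
                             second_probabilities_fn):
    """For each test image: accept the first recognition result directly if
    the recognition condition holds (and grow the trust set); otherwise
    correct the probabilities via the augmented matrix R+."""
    trust_set = []   # (image, category) pairs recognized directly
    predictions = []
    for image in test_images:
        probs = first_recognition(image)
        if satisfies_recognition_condition(probs):
            category = int(probs.argmax())
            trust_set.append((image, category))
        else:
            C = build_trust_matrix(trust_set)          # trust-set matrix C
            R_plus = augmented_relation_matrix(R, C)   # assumed formula (2)
            probs = second_probabilities_fn(probs, trust_set, R_plus)
            category = int(probs.argmax())
        predictions.append(category)
    return predictions, trust_set
```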
After testing, the predicted categories of the objects in the images of the test set show high consistency with the category labels annotated in the NewsNet data set, indicating that the method according to the embodiments of the present disclosure effectively improves face recognition accuracy.
According to the image recognition method of the embodiments of the present disclosure, a NewsNet data set that has both inexact text-image correspondence and a large data volume can be obtained; face recognition can be performed with the aid of the person context relationships in the texts and images, and even if a text does not accurately describe its image, the corresponding text information can still be used to improve face recognition accuracy.
It is understood that the above method embodiments of the present disclosure may be combined with one another to form combined embodiments without departing from the principles and logic thereof; owing to space limitations, the details are not repeated in the present disclosure.
In addition, the present disclosure also provides an image recognition apparatus, an electronic device, a computer-readable storage medium, and a program, which can be used to implement any image recognition method provided by the present disclosure, and the corresponding technical solutions and descriptions and corresponding descriptions in the methods section are not repeated. It will be understood by those skilled in the art that the order of writing of the steps in the above methods of the embodiments does not imply a strict order of execution and that the particular order of execution of the steps should be determined by their function and possibly their inherent logic.
Fig. 2 illustrates a block diagram of an image recognition apparatus according to an embodiment of the present disclosure, which includes, as illustrated in fig. 2:
the first identification module 21 is configured to perform face identification on an image to be processed including a target object to obtain a first identification result of the target object, where the first identification result includes a plurality of category labels of the target object and a first probability of each category label; a result correction module 22, configured to, when the first recognition result does not satisfy the recognition condition, correct a first probability of all or part of the category labels in the first recognition result according to the reference categories of the objects in the plurality of first images of the trust set, and determine a second recognition result of the target object; and a first class determining module 23, configured to determine a class of the target object in the image to be processed according to the second recognition result.
In one possible implementation, the second recognition result includes a second probability of the all or part of the category label, wherein the result correction module includes: the relation determining submodule is used for determining a second person relation matrix according to the multiple reference categories of the trust set and a preset first person relation matrix; and the correction submodule is used for correcting the first probability of all or part of the category labels in the first identification result according to the plurality of reference categories and the second person relation matrix to obtain a second probability of all or part of the category labels.
In one possible implementation, the apparatus further includes: and the second category determination module is used for determining the category of the target object in the image to be processed according to the first recognition result under the condition that the first recognition result meets the recognition condition.
In one possible implementation, the apparatus further includes: and the category adding module is used for adding the categories of the image to be processed and the target object in the image to be processed into the trust set under the condition that the first recognition result meets the recognition condition.
In one possible implementation, the apparatus further includes: the second identification module is used for identifying second images of a plurality of pieces of first data in the acquired first data set and determining a third image from the second images of the plurality of pieces of first data, wherein each piece of first data in the first data set comprises a text and the second image corresponding to the text, and the third image is labeled with a class label of an object; the text labeling module is used for labeling the texts of the plurality of first data respectively to obtain text labeling information of the plurality of first data; the data set determining module is used for determining a second data set according to the text labeling information of the plurality of first data and the category label of the object in the third image; all or part of the third images in the second data set and class labels of objects in the third images are used for training a first recognition network, and the first recognition network is used for carrying out face recognition on the images to be processed.
In one possible implementation, the apparatus further includes: and the relationship determining module is used for determining the first human relationship matrix according to the text labeling information of the plurality of first data and the class label of the object in the third image.
In one possible implementation, the second identification module includes: the face detection submodule is used for respectively carrying out face detection on the second images of the plurality of first data and determining a plurality of fourth images comprising faces from the plurality of second images; and the image recognition submodule is used for recognizing the plurality of fourth images comprising human faces by using a second recognition network and determining the third image from the plurality of fourth images comprising human faces.
In one possible implementation, the apparatus further includes: and the network training module is used for training the second recognition network according to a preset third data set, wherein the third data set comprises a plurality of labeled sample images.
In one possible implementation, the identification condition includes: a third probability of the first probabilities of the category labels is greater than or equal to a first threshold, and a difference between the third probability and a fourth probability is greater than or equal to a second threshold, wherein the third probability is a maximum of the first probabilities of the category labels, and the fourth probability is a maximum of the first probabilities of the category labels except for the third probability.
In some embodiments, functions of, or modules included in, the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments; for specific implementation, reference may be made to the description of the above method embodiments, and for brevity, details are not described here again.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 3 is a block diagram illustrating an electronic device 800 according to an example embodiment. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like terminal.
Referring to fig. 3, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800, the relative positioning of components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 4 is a block diagram illustrating an electronic device 1900 according to an example embodiment. For example, the electronic device 1900 may be provided as a server. Referring to fig. 4, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), may execute the computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. An image recognition method, comprising:
performing face recognition on an image to be processed including a target object to obtain a first recognition result of the target object, wherein the first recognition result includes a plurality of category labels of the target object and a first probability of each category label;
if the first recognition result does not satisfy a recognition condition, correcting the first probabilities of all or part of the category labels in the first recognition result according to reference categories of objects in a plurality of first images of a trust set, and determining a second recognition result of the target object;
and determining the category of the target object in the image to be processed according to the second recognition result.
2. The method of claim 1, wherein the second recognition result comprises second probabilities of all or part of the category labels,
wherein the correcting the first probabilities of all or part of the category labels in the first recognition result according to the reference categories of the objects in the plurality of first images of the trust set and determining the second recognition result of the target object comprises:
determining a second person relation matrix according to the plurality of reference categories of the trust set and a preset first person relation matrix;
and correcting the first probabilities of all or part of the category labels in the first recognition result according to the plurality of reference categories and the second person relation matrix, to obtain the second probabilities of all or part of the category labels.
3. The method of claim 2, further comprising:
determining the category of the target object in the image to be processed according to the first recognition result if the first recognition result satisfies the recognition condition.
4. The method of claim 3, further comprising:
if the first recognition result satisfies the recognition condition, adding the image to be processed and the category of the target object in the image to be processed to the trust set.
5. The method according to any one of claims 2 to 4, further comprising:
identifying second images of a plurality of first data in an acquired first data set, and determining third images from the second images of the plurality of first data, wherein each first data in the first data set comprises a text and a second image corresponding to the text, and each third image is labeled with a category label of an object;
labeling the texts of the plurality of first data respectively to obtain text labeling information of the plurality of first data;
determining a second data set according to the text labeling information of the plurality of first data and the category labels of the objects in the third images;
wherein all or part of the third images in the second data set and the category labels of the objects in the third images are used to train a first recognition network, and the first recognition network is used to perform face recognition on the image to be processed.
6. The method of claim 5, further comprising:
and determining the first person relation matrix according to the text labeling information of the plurality of first data and the category labels of the objects in the third images.
7. The method of claim 5 or 6, wherein the identifying second images of the plurality of first data in the acquired first data set and determining third images from the second images of the plurality of first data comprises:
respectively carrying out face detection on the second images of the plurality of first data, and determining a plurality of fourth images comprising faces from the plurality of second images;
and identifying the plurality of fourth images comprising faces by using a second recognition network, and determining the third images from the plurality of fourth images comprising faces.
8. An image recognition apparatus, comprising:
a first recognition module, configured to perform face recognition on an image to be processed including a target object to obtain a first recognition result of the target object, wherein the first recognition result includes a plurality of category labels of the target object and a first probability of each category label;
a result correction module, configured to correct, when the first recognition result does not satisfy a recognition condition, the first probabilities of all or part of the category labels in the first recognition result according to reference categories of objects in a plurality of first images of a trust set, and to determine a second recognition result of the target object;
and a first category determination module, configured to determine the category of the target object in the image to be processed according to the second recognition result.
9. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method of any one of claims 1 to 7.
10. A computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method of any one of claims 1 to 7.
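
The following sketch (not part of the claims) restates the claimed flow as runnable Python, for readers who want a concrete picture. It is a minimal illustration under stated assumptions: the claims do not fix the recognition condition, the correction rule, or how the second person relation matrix is derived from the first, so the probability threshold, the column-restriction step, and the blending weight `alpha` below are hypothetical choices, as are the names `recognition_net`, `trust_set`, and `threshold`.

```python
import numpy as np

def correct_probabilities(first_probs, reference_categories,
                          first_relation_matrix, alpha=0.5):
    """Claim 2: correct the first probabilities using the trust set's
    reference categories and a person relation matrix. Restricting the
    preset first matrix to the reference columns is one plausible reading
    of how the second person relation matrix is determined."""
    second_relation_matrix = first_relation_matrix[:, reference_categories]
    relation_score = second_relation_matrix.mean(axis=1)  # affinity of each category to the trust set
    second_probs = (1 - alpha) * first_probs + alpha * relation_score
    return second_probs / second_probs.sum()              # renormalise to a distribution

def recognize(image, recognition_net, trust_set,
              first_relation_matrix, threshold=0.9):
    """Claims 1, 3 and 4: face recognition with trust-set correction."""
    first_probs = recognition_net(image)                  # first probability per category label
    if first_probs.max() >= threshold:                    # hypothetical recognition condition
        category = int(first_probs.argmax())              # claim 3: use the first result directly
        trust_set.append((image, category))               # claim 4: confident results join the trust set
        return category
    if not trust_set:                                     # no reference images yet; fall back
        return int(first_probs.argmax())
    reference_categories = np.array([c for _, c in trust_set])
    second_probs = correct_probabilities(first_probs, reference_categories,
                                         first_relation_matrix)
    return int(second_probs.argmax())                     # claim 1: category from the second result
```

Claims 5 and 7 describe how training data for the first recognition network could be curated from text-image pairs; a sketch of that flow, under the same caveats (`detect_faces`, `second_recognition_net`, and `label_text` are hypothetical stand-ins for components the claims leave abstract):

```python
def build_second_data_set(first_data_set, detect_faces,
                          second_recognition_net, label_text):
    """Claims 5 and 7: mine labeled face images (third images) and text
    labeling information from (text, second image) pairs (first data)."""
    second_data_set = []
    for text, second_image in first_data_set:
        fourth_images = detect_faces(second_image)        # claim 7: faces detected in the second image
        text_info = label_text(text)                      # claim 5: text labeling information
        for face in fourth_images:
            category = second_recognition_net(face)       # claim 7: second recognition network
            if category is not None:                      # keep only recognised faces as third images
                second_data_set.append((face, category, text_info))
    return second_data_set
```

Per claim 6, the text labeling information and the category labels collected this way could also be aggregated into the first person relation matrix, for example by counting how often two categories co-occur in the same first data.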
CN201911018602.9A 2019-10-24 2019-10-24 Image recognition method and device, electronic equipment and storage medium Active CN110781813B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911018602.9A CN110781813B (en) 2019-10-24 2019-10-24 Image recognition method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911018602.9A CN110781813B (en) 2019-10-24 2019-10-24 Image recognition method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110781813A (en) 2020-02-11
CN110781813B (en) 2023-04-07

Family

ID=69386330

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911018602.9A Active CN110781813B (en) 2019-10-24 2019-10-24 Image recognition method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110781813B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070174272A1 (en) * 2005-06-24 2007-07-26 International Business Machines Corporation Facial Recognition in Groups
CN102339391A (en) * 2010-07-27 2012-02-01 株式会社理光 Multiobject identification method and device
CN102024056A (en) * 2010-12-15 2011-04-20 中国科学院自动化研究所 Computer aided newsmaker retrieval method based on multimedia analysis
CN109934293A (en) * 2019-03-15 2019-06-25 苏州大学 Image-recognizing method, device, medium and obscure perception convolutional neural networks
CN109948736A (en) * 2019-04-04 2019-06-28 上海扩博智能技术有限公司 Commodity identification model active training method, system, equipment and storage medium
CN110163291A (en) * 2019-05-28 2019-08-23 北京史河科技有限公司 A kind of indicator light recognition methods and device

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111339846A (en) * 2020-02-12 2020-06-26 深圳市商汤科技有限公司 Image recognition method and device, electronic equipment and storage medium
CN111339846B (en) * 2020-02-12 2022-08-12 深圳市商汤科技有限公司 Image recognition method and device, electronic equipment and storage medium
CN113673546A (en) * 2020-05-15 2021-11-19 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and storage medium
CN113673546B (en) * 2020-05-15 2024-04-16 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium
CN111738325A (en) * 2020-06-16 2020-10-02 北京百度网讯科技有限公司 Image recognition method, device, equipment and storage medium
CN111930935A (en) * 2020-06-19 2020-11-13 普联国际有限公司 Image classification method, device, equipment and storage medium
CN113393265A (en) * 2021-05-25 2021-09-14 浙江大华技术股份有限公司 Method for establishing database of feature library of passing object, electronic device and storage medium
CN113344055A (en) * 2021-05-28 2021-09-03 北京百度网讯科技有限公司 Image recognition method, image recognition device, electronic equipment and medium
CN113344055B (en) * 2021-05-28 2023-08-22 北京百度网讯科技有限公司 Image recognition method, device, electronic equipment and medium

Also Published As

Publication number Publication date
CN110781813B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
US11120078B2 (en) Method and device for video processing, electronic device, and storage medium
CN110781813B (en) Image recognition method and device, electronic equipment and storage medium
WO2021051650A1 (en) Method and apparatus for association detection for human face and human hand, electronic device and storage medium
CN110287874B (en) Target tracking method and device, electronic equipment and storage medium
CN111753822A (en) Text recognition method and device, electronic equipment and storage medium
CN109615006B (en) Character recognition method and device, electronic equipment and storage medium
CN111539410B (en) Character recognition method and device, electronic equipment and storage medium
CN110633755A (en) Network training method, image processing method and device and electronic equipment
CN111931844B (en) Image processing method and device, electronic equipment and storage medium
CN109685041B (en) Image analysis method and device, electronic equipment and storage medium
US11335348B2 (en) Input method, device, apparatus, and storage medium
CN111242303A (en) Network training method and device, and image processing method and device
CN109344703B (en) Object detection method and device, electronic equipment and storage medium
CN114332503A (en) Object re-identification method and device, electronic equipment and storage medium
CN110633715B (en) Image processing method, network training method and device and electronic equipment
CN110929545A (en) Human face image sorting method and device
CN111523599B (en) Target detection method and device, electronic equipment and storage medium
CN110955800A (en) Video retrieval method and device
CN110781975B (en) Image processing method and device, electronic device and storage medium
CN111625671A (en) Data processing method and device, electronic equipment and storage medium
CN113065361B (en) Method and device for determining user intimacy, electronic equipment and storage medium
CN114118278A (en) Image processing method and device, electronic equipment and storage medium
CN114168809A (en) Similarity-based document character string code matching method and device
CN114154395A (en) Model processing method and device for model processing
CN113673433A (en) Behavior recognition method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant