WO2020019591A1 - Method and device for generating information - Google Patents

Method and device for generating information

Info

Publication number
WO2020019591A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
face image
recognition model
extracted
information
Prior art date
Application number
PCT/CN2018/116182
Other languages
English (en)
Chinese (zh)
Inventor
陈日伟
Original Assignee
北京字节跳动网络技术有限公司
Priority date
Filing date
Publication date
Application filed by 北京字节跳动网络技术有限公司
Publication of WO2020019591A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions

Definitions

  • Embodiments of the present application relate to the field of computer technology, and in particular, to a method and an apparatus for generating information.
  • the key points of the face refer to the points in the face that have obvious semantic discrimination, such as the points corresponding to the nose and the points corresponding to the eyes.
  • the detection of key points on a face usually refers to detecting the positions of these key points in a face image.
  • based on the detected key points, functions such as adding special effects, 3D modeling of the face, and beautified photography can be realized.
  • the embodiments of the present application provide a method and device for generating information.
  • an embodiment of the present application provides a method for generating information.
  • the method includes: acquiring an image to be identified, where the image to be identified includes a face image; extracting a face image from the image to be identified, and inputting the extracted face image into a pre-trained first recognition model to obtain a recognition result corresponding to the extracted face image, wherein the recognition result is used to characterize the category of the face corresponding to the face image.
  • a candidate recognition model that matches the obtained recognition result is then selected from a candidate recognition model set as the second recognition model.
  • the candidate recognition models in the candidate recognition model set are pre-trained models for identifying different categories of faces to generate key point information.
  • the extracted face image is input into the selected second recognition model to obtain key point information corresponding to the extracted face image, wherein the key point information is used to characterize the positions, in the face image, of the key points in the face image.
  • inputting the extracted face image into a pre-trained first recognition model to obtain a recognition result corresponding to the extracted face image includes: inputting the extracted face image into the pre-trained first recognition model to obtain a recognition result and reference key point information corresponding to the extracted face image, wherein the reference key point information is used to characterize the positions of the reference key points in the face image; and inputting the extracted face image into the selected second recognition model to obtain key point information corresponding to the extracted face image includes: inputting the extracted face image and the obtained reference key point information into the selected second recognition model to obtain the key point information corresponding to the extracted face image.
  • extracting a face image from the image to be identified includes: inputting the image to be identified into a pre-trained third recognition model to obtain position information used to characterize the position of the face image in the image to be identified; and extracting a face image from the image to be identified based on the obtained position information.
  • inputting the extracted face image into the selected second recognition model to obtain key point information corresponding to the extracted face image includes: inputting the extracted face image into the selected second recognition model to obtain key point information and matching information corresponding to the extracted face image.
  • the matching information includes a matching index used to characterize the degree of matching between the category of the face corresponding to the input face image and the category of the face corresponding to the second recognition model.
  • obtaining the image to be identified includes: selecting an image to be identified from an image sequence corresponding to the target video, where the target video is a video obtained by photographing a face.
  • the method further includes: selecting, from the image sequence, an image that is located after the image to be recognized and adjacent to the image to be recognized as a candidate image to be recognized; extracting a face image from the candidate image to be recognized as a candidate face image, and determining the extracted face image in the image to be recognized as a reference face image; determining whether the matching index in the matching information corresponding to the determined reference face image meets a preset condition; and in response to determining that it does, inputting the extracted candidate face image into the second recognition model into which the determined reference face image was input, to obtain key point information and matching information corresponding to the extracted candidate face image.
  • an embodiment of the present application provides an apparatus for generating information.
  • the apparatus includes: an image acquisition unit configured to acquire an image to be identified, where the image to be identified includes a face image; a first input unit configured to extract a face image from the image to be identified and input the extracted face image into a pre-trained first recognition model to obtain a recognition result corresponding to the extracted face image, wherein the recognition result is used to characterize the category of the face corresponding to the face image;
  • a model selection unit configured to select, from a candidate recognition model set, a candidate recognition model matching the obtained recognition result as the second recognition model, wherein the candidate recognition models in the candidate recognition model set are pre-trained models for identifying different categories of faces to generate key point information; and a second input unit configured to input the extracted face image into the selected second recognition model to obtain key point information corresponding to the extracted face image, wherein the key point information is used to characterize the positions, in the face image, of the key points in the face image.
  • the first input unit is further configured to: input the extracted face image into the pre-trained first recognition model to obtain a recognition result and reference key point information corresponding to the extracted face image, wherein the reference key point information is used to characterize the positions of the reference key points in the face image; and the second input unit is further configured to: input the extracted face image and the obtained reference key point information into the selected second recognition model to obtain key point information corresponding to the extracted face image.
  • the first input unit includes: a first input module configured to input the image to be identified into a pre-trained third recognition model to obtain position information used to characterize the position of the face image in the image to be identified; and an image extraction module configured to extract a face image from the image to be identified based on the obtained position information.
  • the second input unit is further configured to input the extracted face image into the selected second recognition model to obtain key point information and matching information corresponding to the extracted face image, wherein the matching information includes a matching index used to characterize the degree of matching between the category of the face corresponding to the input face image and the category of the face corresponding to the second recognition model.
  • the image acquisition unit is further configured to select an image to be identified from an image sequence corresponding to the target video, where the target video is a video obtained by photographing a face.
  • the apparatus further includes: an image selection unit configured to select, from the image sequence, an image located after the image to be identified and adjacent to the image to be identified as a candidate image to be identified; an image determination unit configured to extract a face image from the candidate image to be identified as a candidate face image and determine the extracted face image in the image to be identified as a reference face image; a condition determination unit configured to determine whether the matching index in the matching information corresponding to the determined reference face image meets a preset condition; and a third input unit configured to, in response to determining that it does, input the extracted candidate face image into the second recognition model into which the determined reference face image was input, to obtain key point information and matching information corresponding to the extracted candidate face image.
  • an embodiment of the present application provides an electronic device including: one or more processors; and a storage device storing one or more programs thereon, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any one of the foregoing methods for generating information.
  • an embodiment of the present application provides a computer-readable medium having stored thereon a computer program that, when executed by a processor, implements the method of any one of the foregoing methods for generating information.
  • the method and device for generating information provided by the embodiments of the present application acquire an image to be identified, extract a face image from the image to be identified, and input the extracted face image into a pre-trained first recognition model to obtain a recognition result corresponding to the extracted face image, wherein the recognition result is used to characterize the category of the face corresponding to the face image; a candidate recognition model matching the obtained recognition result is then selected from the candidate recognition model set as the second recognition model, and the extracted face image is input into the selected second recognition model to obtain key point information corresponding to the extracted face image.
  • in this way, pre-trained candidate recognition models for identifying different categories of faces can be used to recognize face images and generate key point information, so that face images of different categories can be recognized, which improves the comprehensiveness of information generation; moreover, recognition using a candidate recognition model that matches the category corresponding to the face image can improve the accuracy of the generated information.
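  • For illustration only, the overall flow described above can be sketched in Python as follows. The names classify_face, KEYPOINT_MODELS and generate_keypoint_info are hypothetical stand-ins for the first recognition model, the candidate recognition model set and the dispatch logic; they are not part of the application.

```python
import numpy as np

# Hypothetical stand-ins: classify_face plays the role of the first recognition
# model, and each entry of KEYPOINT_MODELS plays the role of one candidate
# (second) recognition model trained for a single face category.

def classify_face(face_image: np.ndarray) -> str:
    """Return a face category label such as 'cat', 'dog' or 'person'."""
    return "cat"  # placeholder recognition result for this sketch

def cat_keypoints(face_image: np.ndarray) -> np.ndarray:
    """Return (x, y) positions of key points inside the face image."""
    return np.zeros((9, 2))  # placeholder key point information

KEYPOINT_MODELS = {"cat": cat_keypoints}  # one entry per face category

def generate_keypoint_info(face_image: np.ndarray) -> np.ndarray:
    category = classify_face(face_image)        # recognition result
    second_model = KEYPOINT_MODELS[category]    # select the matching candidate model
    return second_model(face_image)             # key point information
```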
  • FIG. 1 is an exemplary system architecture diagram to which an embodiment of the present application can be applied;
  • FIG. 2 is a flowchart of an embodiment of a method for generating information according to the present application
  • FIG. 3 is a schematic diagram of an application scenario of a method for generating information according to an embodiment of the present application
  • FIG. 4 is a flowchart of still another embodiment of a method for generating information according to the present application.
  • FIG. 5 is a schematic structural diagram of an embodiment of an apparatus for generating information according to the present application.
  • FIG. 6 is a schematic structural diagram of a computer system suitable for implementing an electronic device according to an embodiment of the present application.
  • FIG. 1 illustrates an exemplary system architecture 100 of an embodiment of a method for generating information or an apparatus for generating information to which the present application can be applied.
  • the system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105.
  • the network 104 is a medium for providing a communication link between the terminal devices 101, 102, 103 and the server 105.
  • the network 104 may include various types of connections, such as wired, wireless communication links, or fiber optic cables, and so on.
  • the user can use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages and the like.
  • Various communication client applications can be installed on the terminal devices 101, 102, and 103, such as image processing applications, Meitu software, web browser applications, search applications, and social platform software.
  • the terminal devices 101, 102, and 103 may be hardware or software.
  • when the terminal devices 101, 102, and 103 are hardware, they can be various electronic devices with a display screen, including but not limited to smartphones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop portable computers, desktop computers, and the like.
  • when the terminal devices 101, 102, and 103 are software, they can be installed in the electronic devices listed above. They can be implemented as multiple pieces of software or software modules (for example, multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module. This is not specifically limited here.
  • the server 105 may be a server that provides various services, for example, an image processing server that processes an image to be identified sent by the terminal devices 101, 102, and 103.
  • the image processing server may perform analysis and processing on the received data such as the image to be identified, and obtain a processing result (for example, key point information).
  • the server may be hardware or software.
  • when the server is hardware, it can be implemented as a distributed server cluster consisting of multiple servers or as a single server.
  • when the server is software, it can be implemented as multiple pieces of software or software modules (for example, multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module. This is not specifically limited here.
  • the numbers of terminal devices, networks, and servers in FIG. 1 are merely exemplary. According to implementation needs, there can be any number of terminal devices, networks, and servers.
  • the above-mentioned system architecture may not include a network, but only a terminal device or a server.
  • a flowchart 200 of one embodiment of a method for generating information according to the present application is shown.
  • the method for generating information includes the following steps:
  • Step 201 Obtain an image to be identified.
  • an execution subject of the method for generating information (for example, the server shown in FIG. 1) may obtain an image to be identified.
  • the image to be identified may include a face image.
  • the face image included in the image to be identified may include an animal face image, and may also include a human face image.
  • the animal face corresponding to the animal face image may be various types of animal faces, such as a dog face, a cat face, and the like.
  • execution subject may obtain the image to be identified stored in advance locally, or may obtain the image to be identified sent by an electronic device (such as the terminal device shown in FIG. 1) communicatively connected thereto.
  • the execution subject may select an image to be identified from an image sequence corresponding to a target video, where the target video is a video that can be obtained by shooting a face. Specifically, the execution subject may first obtain an image sequence corresponding to the target video from a local or electronic device connected thereto, and then select an image from the image sequence as an image to be identified.
  • the above-mentioned executing subject may select the image to be identified from the above-mentioned image sequence in various ways, for example, a random selection manner may be adopted; or, the first-ranked image may be selected from the image sequence.
  • a video is essentially an image sequence arranged in chronological order, so any video can correspond to an image sequence.
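  • As an illustrative sketch of this step, the following assumes OpenCV is used to decode the target video into an image sequence and to take the first-ranked frame as the image to be identified; the file name "video.mp4" is a hypothetical placeholder.

```python
import cv2

# Read the target video into an image sequence (hypothetical path "video.mp4"),
# then select the first-ranked image as the image to be identified.
frames = []
cap = cv2.VideoCapture("video.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
cap.release()

image_to_identify = frames[0] if frames else None
```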
  • Step 202 Extract a face image from the image to be recognized, and input the extracted face image into a pre-trained first recognition model to obtain a recognition result corresponding to the extracted face image.
  • in this embodiment, the execution subject may first extract a face image from the image to be recognized, and then input the extracted face image into a pre-trained first recognition model to obtain the recognition result corresponding to the extracted face image.
  • the recognition result may include, but is not limited to, at least one of the following: text, numbers, symbols, images, and audio.
  • the recognition result may be used to characterize the category of the face corresponding to the face image.
  • the execution subject may extract a face image from the image to be identified in various ways.
  • a threshold segmentation method in an image segmentation technique may be used to segment a face image in an image to be identified from other image regions, and then extract a face image from the image to be identified.
  • the image segmentation technology is a well-known technology that is widely studied and applied at present, and will not be repeated here.
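  • A minimal sketch of one possible threshold-segmentation approach is shown below, assuming OpenCV 4.x; the fixed binarization threshold and the largest-contour heuristic are illustrative assumptions, since the application does not prescribe a particular segmentation algorithm.

```python
import cv2

def extract_face_by_threshold(image, thresh=127):
    """Rough threshold-segmentation sketch (assumes OpenCV 4.x): binarize the
    image, take the largest foreground region, and crop its bounding box as
    the candidate face image."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return image[y:y + h, x:x + w]
```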
  • the above-mentioned execution subject may also extract a face image from the image to be identified by the following steps:
  • Step 2021 Input the to-be-recognized image into a pre-trained third recognition model, and obtain position information for characterizing a position of a face image in the to-be-recognized image in the to-be-recognized image.
  • the location information may include, but is not limited to, at least one of the following: text, numbers, symbols, and images.
  • for example, the position information may be a quadrilateral box that frames the face image in the image to be identified.
  • the third recognition model may be used to characterize the correspondence between the image to be identified including the face image and the position information used to characterize the position of the face image in the image to be identified.
  • the third recognition model may be a model obtained by training an initial model (such as a Convolutional Neural Network (CNN), a residual network (ResNet), etc.) based on training samples and using a machine learning method.
  • the third recognition model can be obtained by training as follows: first, a training sample set is obtained, where a training sample may include a sample image to be recognized that contains a sample face image, and sample position information pre-labeled for the sample face image in the sample image to be recognized; the sample position information can be used to characterize the position of the sample face image in the sample image to be recognized.
  • then, a training sample can be selected from the training sample set and the following training steps are performed: inputting the sample image to be recognized of the selected training sample into the initial model to obtain position information corresponding to the sample face image in the sample image to be recognized; taking the sample position information corresponding to the input sample image to be recognized as the expected output of the initial model and adjusting the parameters of the initial model based on the obtained position information and the sample position information; determining whether there are unselected training samples in the training sample set; and in response to there being no unselected training samples, determining the adjusted initial model as the third recognition model.
  • the method may further include the following step: in response to determining that there are unselected training samples, reselecting a training sample from the unselected training samples, using the most recently adjusted initial model as the new initial model, and continuing to perform the training steps described above.
  • the execution subject of the steps used to generate the model may be the same as or different from the execution subject of the method used to generate the information. If they are the same, the execution subject of the step for generating the model can store the trained model locally after the model is trained. If they are different, the execution body of the step for generating the model may send the trained model to the execution body of the method for generating information after training to obtain the model.
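  • The training procedure described above can be sketched with PyTorch as follows; the tiny convolutional backbone, the mean-squared-error loss and the random stand-in data are assumptions made for illustration, since the application leaves the concrete initial model and loss unspecified.

```python
import torch
from torch import nn

# Sketch of the training procedure described above for the third recognition
# model: iterate once over the training sample set, compare the obtained
# position information with the labeled sample position information, and
# adjust the parameters.
class PositionModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(8, 4)  # (x, y, width, height) of the face image

    def forward(self, x):
        return self.head(self.features(x))

model = PositionModel()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Each training sample: (sample image to be recognized, sample position information).
training_samples = [(torch.rand(1, 3, 128, 128), torch.rand(1, 4)) for _ in range(8)]

for sample_image, sample_position in training_samples:   # until no sample is unselected
    predicted_position = model(sample_image)              # obtained position information
    loss = loss_fn(predicted_position, sample_position)   # compare with the expected output
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                       # adjust the parameters

third_recognition_model = model  # adjusted initial model as the third recognition model
```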
  • Step 2022 Extract a face image from the image to be identified based on the obtained position information.
  • the above-mentioned executing subject may extract a face image from the image to be identified in various ways.
  • a to-be-recognized image may be cropped based on the obtained position information to obtain a face image.
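  • For example, if the obtained position information is an axis-aligned bounding box in pixel coordinates (an assumption made for this sketch), the cropping operation reduces to simple array slicing:

```python
def crop_face(image, box):
    """Crop the face image from the image to be identified, assuming the
    position information is a bounding box (x, y, width, height) in pixels."""
    x, y, w, h = box
    return image[y:y + h, x:x + w]
```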
  • the execution subject may generate a recognition result corresponding to the face image.
  • the image to be recognized may include at least one face image.
  • for each face image among the extracted face images, the execution subject may input the face image into the pre-trained first recognition model to obtain the recognition result corresponding to the face image.
  • the first recognition model may be used to characterize a correspondence between a face image and a recognition result corresponding to the face image.
  • the first recognition model may be a model obtained by training an initial model (such as a convolutional neural network, a residual network, and the like) based on training samples and using a machine learning method.
  • the execution subject used to train the first recognition model may obtain the first recognition model by using a training method similar to that of the third recognition model, and the specific training steps are not described herein again.
  • the training samples in the corresponding training sample set may include sample face images and sample recognition results pre-labeled for the sample face images, where the sample recognition results may be used for characterization The category of the face corresponding to the sample face image.
  • Step 203 Select a candidate recognition model that matches the obtained recognition result from the candidate recognition model set as the second recognition model.
  • the execution entity may select a candidate recognition model that matches the obtained recognition result from the candidate recognition model set as the second recognition model.
  • the candidate recognition models in the candidate recognition model set may be pre-trained models for identifying different classes of faces to generate key point information.
  • the execution body may use various methods to select a candidate recognition model that matches the obtained recognition result from the candidate recognition model set as the second recognition model. For example, a technician may preset, in the execution body, a correspondence relationship (for example, a correspondence table) between recognition results and the candidate recognition models in the candidate recognition model set; the execution body may then look up the correspondence relationship using the obtained recognition result to determine a candidate recognition model that matches the obtained recognition result as the second recognition model.
  • a technician can preset category information corresponding to the candidate recognition model, where the category information can be used to characterize the types of faces that the candidate recognition model can recognize.
  • the category information may include but is not limited to at least one of the following: numbers, text, symbols, pictures.
  • in addition, the above-mentioned execution body may match the obtained recognition result with the category information corresponding to the candidate recognition models in the candidate recognition model set (for example, perform a similarity calculation) to determine a candidate recognition model that matches the obtained recognition result (for example, a candidate recognition model whose similarity calculation result is greater than or equal to a preset threshold) as the second recognition model.
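  • Both selection variants described above (a preset correspondence table, and matching the recognition result against per-model category information by similarity) can be sketched as follows; the table contents, model names and similarity measure are hypothetical.

```python
from difflib import SequenceMatcher

# Variant 1: a preset correspondence table from recognition result to candidate model.
CORRESPONDENCE = {"cat": "cat_keypoint_model", "dog": "dog_keypoint_model"}

def select_by_table(recognition_result):
    return CORRESPONDENCE.get(recognition_result)

# Variant 2: compare the recognition result with the category information preset
# for each candidate model and keep the first one whose similarity calculation
# result reaches a preset threshold.
def select_by_similarity(recognition_result, category_info, threshold=0.8):
    for model_name, category in category_info.items():
        if SequenceMatcher(None, recognition_result, category).ratio() >= threshold:
            return model_name
    return None
```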
  • the candidate recognition model may be used to characterize a correspondence between a face image and keypoint information corresponding to the face image.
  • the keypoint information can be used to characterize the position of keypoints in the face image in the face image.
  • the key points of the face image may be points with obvious semantic discrimination, such as points used to characterize the nose, points used to characterize the eyes, and so on.
  • the candidate recognition model may be a model obtained by training an initial model (such as a convolutional neural network, a residual network, etc.) based on training samples and using a machine learning method.
  • the execution subject used to train a candidate recognition model may obtain the candidate recognition model by using a training method similar to that of the third recognition model, and the specific training steps are not described herein again.
  • it should be noted that, for a candidate recognition model, the training samples in the corresponding training sample set may include sample face images and sample keypoint information pre-labeled for the sample face images, where the categories of the faces corresponding to the sample face images may be the same (for example, all cat faces or all dog faces).
  • the sample keypoint information can be used to characterize the position of keypoints in the sample face image in the sample face image.
  • the execution subject may extract at least one face image, and for each face image in the extracted face image, the execution subject may obtain a recognition result, and further, for the obtained For each recognition result in the recognition result, based on step 203, the execution subject may select a candidate recognition model as the second recognition model corresponding to the recognition result.
  • Step 204 Input the extracted face image into the selected second recognition model to obtain key point information corresponding to the extracted face image.
  • the execution subject may input the extracted face image into the selected second recognition model to obtain key point information corresponding to the extracted face image. It should be noted that, for the face image in the extracted face image, the execution subject may input the face image into a second recognition model corresponding to the face image to obtain key point information corresponding to the face image. It can be understood that the correspondence between the face image and the second recognition model can be determined by the correspondence between the recognition result corresponding to the face image and the second recognition model.
  • the above-mentioned executing subject may also determine the keypoint information corresponding to the image to be identified in a backward inference manner, where the keypoint information corresponding to the image to be identified may be used It is used to characterize the positions of key points in the image to be identified (that is, key points included in the face image in the image to be identified) in the image to be identified.
  • the candidate recognition model may be used to characterize the correspondence between the face image and the keypoint information and the matching information corresponding to the face image
  • the matching information may include, but is not limited to, at least one of the following: numbers, text, symbols, images, and audio.
  • the matching information may include a matching index used to characterize the degree of matching between the category of the face corresponding to the input face image and the category of the face corresponding to the second recognition model.
  • the size of the matching index and the level of matching may have a corresponding relationship.
  • the corresponding relationship may be that the larger the matching index, the higher the matching degree; or the smaller the matching index, the higher the matching degree.
  • in this case, the training samples in the corresponding training sample set may include a sample face image, as well as sample keypoint information and sample matching information pre-labeled for the sample face image, where the sample keypoint information can be used to characterize the positions of the keypoints in the sample face image.
  • the sample matching information may include a sample matching index.
  • the sample matching index can be used to characterize the degree of sample matching between the category of the face corresponding to the input sample face image and the category of the face predetermined for the candidate recognition model.
  • the correspondence between the sample matching index and the degree of sample matching can be set in advance by a technician. For example, it can be set that the larger the sample matching index, the higher the degree of sample matching.
  • the execution subject may input the extracted face image into the second recognition model to obtain keypoint information and matching information corresponding to the extracted face image. Furthermore, through the above-mentioned second recognition model, the degree of matching between the input face image and the second recognition model can be determined and matching information can be generated, so that subsequent operations based on the matching information (such as re-selecting the second recognition model) can further improve the accuracy of information processing.
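  • A candidate recognition model that outputs both keypoint information and a matching index can be sketched as a two-headed network, as below; the backbone, the number of keypoints and the sigmoid-normalized matching index are illustrative assumptions.

```python
import torch
from torch import nn

class CandidateRecognitionModel(nn.Module):
    """Sketch of a candidate (second) recognition model with two outputs:
    keypoint information and a matching index (larger = better match)."""

    def __init__(self, num_keypoints=9):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.keypoint_head = nn.Linear(16, num_keypoints * 2)  # (x, y) per keypoint
        self.match_head = nn.Linear(16, 1)                     # matching index

    def forward(self, face_image):
        features = self.backbone(face_image)
        keypoints = self.keypoint_head(features).view(face_image.size(0), -1, 2)
        match_index = torch.sigmoid(self.match_head(features))
        return keypoints, match_index

model = CandidateRecognitionModel()
keypoints, match_index = model(torch.rand(1, 3, 128, 128))
```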
  • in some optional implementations, when the image to be identified is an image selected from the image sequence corresponding to the target video, after the extracted face image is input into the selected second recognition model to obtain keypoint information and matching information, the execution subject may further perform the following information generation steps:
  • first, an image that is located after the image to be identified and adjacent to the image to be identified is selected from the image sequence as a candidate image to be identified.
  • a face image is extracted from the candidate to-be-recognized image as a candidate face image, and the extracted face image in the to-be-recognized image is determined as a reference face image.
  • the execution subject may use the above-mentioned method for extracting a face image for an image to be identified to extract a face image from a candidate image to be identified as a candidate face image, and details are not described herein again.
  • the preset condition may be used to limit the degree of matching between the category of the face corresponding to the reference face image and the category of the face corresponding to the second recognition model input to the reference face image.
  • a technician can set a matching threshold in advance.
  • when the corresponding relationship is that the larger the matching index, the higher the degree of matching, the above preset condition may be that the matching index is greater than or equal to the matching threshold; when the corresponding relationship is that the smaller the matching index, the higher the degree of matching, the above preset condition may be that the matching index is less than or equal to the matching threshold.
  • in response to determining that the preset condition is met, the extracted candidate face image is input into the second recognition model into which the determined reference face image was input, to obtain keypoint information and matching information corresponding to the extracted candidate face image.
  • in this way, for a subsequent image in the image sequence, the matching information and the second recognition model corresponding to the image located before it can be used to generate keypoint information for that image by the method described in this implementation.
  • the specific steps can be referred to the above information generation steps, which will not be repeated here.
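  • The frame-by-frame procedure described in this implementation can be sketched as follows; all helper functions are hypothetical stubs, and the threshold value and the convention that a larger matching index means a better match are assumptions made for the sketch.

```python
# Frame-by-frame sketch for a video: keep using the previously selected second
# recognition model for adjacent images while its matching index meets the
# preset condition, and reselect a model otherwise.

def extract_face(frame):                  # stands in for face image extraction
    return frame

def classify_face(face):                  # stands in for the first recognition model
    return "cat"

def cat_model(face):                      # stands in for one second recognition model
    return [(0.0, 0.0)] * 9, 0.9          # (keypoint information, matching index)

KEYPOINT_MODELS = {"cat": cat_model}
MATCH_THRESHOLD = 0.5                     # preset matching threshold (larger = better match)

def process_video(frames):
    results, second_model = [], None
    for frame in frames:
        face = extract_face(frame)
        if second_model is None:
            second_model = KEYPOINT_MODELS[classify_face(face)]  # select a second model
        keypoints, match_index = second_model(face)
        results.append(keypoints)
        if match_index < MATCH_THRESHOLD:  # preset condition not met
            second_model = None            # reselect a model for the next image
    return results

print(len(process_video([None, None, None])))  # key point information for 3 frames
```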
  • FIG. 3 is a schematic diagram of an application scenario of the method for generating information according to this embodiment.
  • the server 301 first obtains an image to be identified 303 sent by the terminal device 302, where the image to be identified 303 includes face images 3031 and 3032. Then, the server 301 extracts the face image 3031 and the face image 3032 from the image to be identified 303, inputs the face image 3031 and the face image 3032 into the pre-trained first recognition model 304 respectively, and obtains the recognition result "cat" 3051 corresponding to the face image 3031 and the recognition result "dog" 3052 corresponding to the face image 3032.
  • the server 301 may obtain a candidate recognition model set 306, where the candidate recognition model set 306 includes candidate recognition models 3061, 3062, and 3063.
  • the technician presets the correspondence between the candidate recognition model and the recognition result as follows: the candidate recognition model 3061 corresponds to the recognition result "cat”; the candidate recognition model 3062 corresponds to the recognition result "dog”; the candidate recognition model 3063 corresponds to the recognition result " Person ".
  • the server 301 may select a candidate recognition model 3061 that matches the obtained recognition result "cat" 3051 from the candidate recognition model set 306 as the second recognition model 3071 corresponding to the face image 3031; from the candidate recognition model set 306, A candidate recognition model 3062 that matches the obtained recognition result "dog” 3052 is selected as the second recognition model 3072 corresponding to the face image 3032.
  • the server 301 may input the face image 3031 into the second recognition model 3071 to obtain the key point information 3081 corresponding to the face image 3031, and input the face image 3032 into the second recognition model 3072 to obtain the key point information 3082 corresponding to the face image 3032.
  • the key point information can be used to characterize the position of key points in the face image in the face image.
  • the method provided by the foregoing embodiments of the present application acquires an image to be identified, extracts a face image from the image to be identified, and inputs the extracted face image into a pre-trained first recognition model to obtain a recognition result corresponding to the extracted face image, wherein the recognition result is used to characterize the category of the face corresponding to the face image; a candidate recognition model matching the obtained recognition result is then selected from the candidate recognition model set as the second recognition model, and finally the extracted face image is input into the selected second recognition model to obtain the key point information corresponding to the extracted face image, where the key point information is used to characterize the positions, in the face image, of the key points in the face image.
  • in this way, pre-trained candidate recognition models for identifying different categories of faces can be used to recognize face images and generate key point information, so that face images of different categories can be recognized, which improves the comprehensiveness of information generation; moreover, recognition using a candidate recognition model that matches the category corresponding to the face image can improve the accuracy of information generation.
  • a flowchart 400 of yet another embodiment of a method for generating information is shown.
  • the process 400 of the method for generating information includes the following steps:
  • Step 401 Obtain an image to be identified.
  • an execution subject of the method for generating information (for example, the server shown in FIG. 1) may obtain an image to be identified.
  • the image to be identified may include a face image.
  • the face image included in the image to be identified may include an animal face image, and may also include a human face image.
  • the animal face corresponding to the animal face image may be various types of animal faces, such as a dog face, a cat face, and the like.
  • step 401 may be implemented in a manner similar to step 201 in the foregoing embodiment.
  • the above description of step 201 is also applicable to step 401 of this embodiment, and details are not described herein again.
  • Step 402 Extract a face image from the image to be recognized, and input the extracted face image into a pre-trained first recognition model to obtain a recognition result and reference keypoint information corresponding to the extracted face image.
  • in this embodiment, the execution subject may first extract a face image from the image to be recognized, and then input the extracted face image into the pre-trained first recognition model to obtain the recognition result and reference keypoint information corresponding to the extracted face image.
  • the recognition result may include, but is not limited to, at least one of the following: text, numbers, symbols, images, and audio; the recognition result can be used to characterize the category of the face corresponding to the face image.
  • the reference keypoint information may include, but is not limited to, at least one of the following: text, numbers, symbols, and images; the reference keypoint information can be used to characterize the positions of the reference keypoints in the face image.
  • the reference key point may be a point used to determine a key point in the face image, for example, the point where the tip of the nose is located, the point where the corner of the mouth is located, and the like.
  • the above-mentioned execution subject may use the face image extraction method in the embodiment corresponding to FIG. 2 to extract the face image from the image to be identified, and details are not described herein again.
  • the first recognition model in this embodiment may be used to characterize the correspondence between the recognition result corresponding to the face image and the face image and the reference keypoint information.
  • the training samples in the corresponding training sample set may include sample face images, sample recognition results and sample reference keypoint information pre-labeled for the sample face images.
  • the reference keypoint information of the sample can be used to characterize the position of the reference keypoint in the sample face image in the sample face image.
  • Step 403 Select a candidate recognition model that matches the obtained recognition result from the candidate recognition model set as the second recognition model.
  • the execution entity may select a candidate recognition model that matches the obtained recognition result from the candidate recognition model set as the second recognition model.
  • the candidate recognition model in the candidate recognition model set may be a pre-trained model for recognizing faces of different classes to generate key point information.
  • the candidate recognition model can be used to characterize the correspondence between the face image and the reference keypoint information of the face image and the keypoint information of the face image.
  • the candidate recognition model may be a model obtained by training an initial model (such as a convolutional neural network, a residual network, etc.) based on training samples and using a machine learning method.
  • the candidate recognition model can be obtained by training as follows. First, a training sample set is obtained, where a training sample may include a sample face image and sample keypoint information pre-labeled for the sample face image. Then, a training sample can be selected from the training sample set and the following model training steps are performed: extracting the sample reference keypoint information corresponding to the sample face image of the selected training sample, wherein the sample reference keypoint information can be used to characterize the positions of the reference keypoints in the sample face image; inputting the sample face image of the selected training sample and the extracted sample reference keypoint information into the initial model to obtain keypoint information; taking the sample keypoint information corresponding to the input sample face image as the expected output of the initial model and adjusting the parameters of the initial model based on the obtained keypoint information and the sample keypoint information; determining whether there are unselected training samples in the training sample set; and in response to there being no unselected training samples, determining the adjusted initial model as the candidate recognition model.
  • the execution subject used to perform the above model training step can extract sample reference keypoint information corresponding to the sample face image in various ways.
  • for example, the sample face image can be input into the first recognition model of step 402 of this embodiment to obtain the sample reference keypoint information corresponding to the sample face image; or the sample face image can be output, and the sample reference keypoint information labeled by a user for the sample face image can be received.
  • the method may further include the following step: in response to determining that there are unselected training samples, reselecting a training sample from the unselected training samples, using the most recently adjusted initial model as the new initial model, and continuing to perform the model training steps described above.
  • the selection of the second recognition model in this embodiment may be implemented in a manner similar to the method for selecting the second recognition model in the embodiment corresponding to FIG. 2, and is not described herein again.
  • Step 404 Input the extracted face image and the obtained reference keypoint information into the selected second recognition model to obtain keypoint information corresponding to the extracted face image.
  • in this embodiment, the execution subject may input the extracted face image and the obtained reference keypoint information into the selected second recognition model to obtain the keypoint information corresponding to the extracted face image.
  • it should be noted that, for a face image among the extracted face images, the execution subject may input the face image and the reference keypoint information corresponding to the face image into the second recognition model corresponding to the face image to obtain the keypoint information corresponding to the face image. It can be understood that the correspondence between the face image and the second recognition model can be determined by the correspondence between the recognition result corresponding to the face image and the second recognition model.
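  • One possible way for the second recognition model to consume both the face image and the reference keypoint information is sketched below; concatenating image features with the flattened reference-keypoint coordinates is an illustrative assumption, since the application does not fix how the two inputs are combined.

```python
import torch
from torch import nn

class RefinedKeypointModel(nn.Module):
    """Sketch of a candidate (second) recognition model that takes a face image
    and reference keypoint information as input and outputs keypoint information."""

    def __init__(self, num_reference=5, num_keypoints=9):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(16 + num_reference * 2, num_keypoints * 2)

    def forward(self, face_image, reference_keypoints):
        features = self.backbone(face_image)                        # image features
        combined = torch.cat([features, reference_keypoints.flatten(1)], dim=1)
        return self.head(combined).view(face_image.size(0), -1, 2)  # (x, y) per keypoint

model = RefinedKeypointModel()
keypoints = model(torch.rand(1, 3, 128, 128), torch.rand(1, 5, 2))
```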
  • compared with the embodiment corresponding to FIG. 2, the process 400 of the method for generating information in this embodiment highlights the step of generating the reference keypoint information corresponding to the face image and the step of generating the keypoint information corresponding to the face image based on the generated reference keypoint information. Therefore, the solution described in this embodiment can use the reference keypoint information as a reference to generate more accurate keypoint information, which further improves the accuracy of information generation.
  • this application provides an embodiment of an apparatus for generating information.
  • the apparatus embodiment corresponds to the method embodiment shown in FIG. 2.
  • the device can be specifically applied to various electronic devices.
  • the apparatus 500 for generating information in this embodiment includes an image acquisition unit 501, a first input unit 502, a model selection unit 503, and a second input unit 504.
  • the image acquisition unit 501 is configured to acquire an image to be identified, where the image to be identified includes a face image;
  • the first input unit 502 is configured to extract a face image from the image to be identified, and input the extracted face image into a pre-trained first recognition model to obtain a recognition result corresponding to the extracted face image, wherein the recognition result is used to characterize the category of the face corresponding to the face image;
  • the model selection unit 503 is configured to select, from the candidate recognition model set, a candidate recognition model that matches the obtained recognition result as the second recognition model, wherein the candidate recognition models in the candidate recognition model set are pre-trained models for identifying different categories of faces to generate key point information;
  • the second input unit 504 is configured to input the extracted face image into the selected second recognition model to obtain keypoint information corresponding to the extracted face image, wherein the keypoint information is used to characterize the positions, in the face image, of the keypoints in the face image.
  • the image acquisition unit 501 of the apparatus 500 for generating information may acquire an image to be identified in a wired connection manner or a wireless connection manner.
  • the image to be identified may include a face image.
  • the face image included in the image to be identified may include an animal face image, and may also include a human face image.
  • the animal face corresponding to the animal face image may be various types of animal faces, such as a dog face, a cat face, and the like.
  • the first input unit 502 may first extract a face image from the to-be-recognized image, and then input the extracted face image into a pre-trained first recognition model to obtain Recognition result corresponding to the extracted face image.
  • the recognition result may include, but is not limited to, at least one of the following: text, numbers, symbols, images, and audio.
  • the recognition result can be used to characterize the category of the face corresponding to the face image.
  • the image to be recognized may include at least one face image.
  • for a face image in the image to be recognized, the first input unit 502 may input the face image into the pre-trained first recognition model to obtain a recognition result corresponding to the face image.
  • the first recognition model may be used to characterize a correspondence between a face image and a recognition result corresponding to the face image.
  • the model selection unit 503 may select a candidate recognition model that matches the obtained recognition result from the candidate recognition model set as the second recognition model.
  • the candidate recognition model in the candidate recognition model set may be a pre-trained model for recognizing faces of different classes to generate key point information.
  • the candidate recognition model may be used to characterize a correspondence between a face image and keypoint information corresponding to the face image.
  • the keypoint information can be used to characterize the position of keypoints in the face image in the face image.
  • the key points of the face image may be points with obvious semantic discrimination, such as points used to characterize the nose, points used to characterize the eyes, and so on.
  • in this embodiment, the second input unit 504 may input the extracted face image into the selected second recognition model to obtain keypoint information corresponding to the extracted face image.
  • it should be noted that, for a face image among the extracted face images, the second input unit 504 may input the face image into the second recognition model corresponding to the face image to obtain the keypoint information corresponding to the face image. It can be understood that the correspondence between the face image and the second recognition model can be determined by the correspondence between the recognition result corresponding to the face image and the second recognition model.
  • the first input unit 502 may be further configured to: input the extracted face image into a pre-trained first recognition model to obtain a recognition result corresponding to the extracted face image And reference keypoint information, where the reference keypoint information may be used to characterize the position of the reference keypoint in the face image in the face image; and the second input unit 504 may be further configured to: compare the extracted face image with the The obtained reference keypoint information is input into the selected second recognition model, and keypoint information corresponding to the extracted face image is obtained.
  • in some optional implementations of this embodiment, the first input unit 502 may include: a first input module (not shown in the figure) configured to input the image to be identified into a pre-trained third recognition model to obtain position information for characterizing the position of the face image in the image to be identified; and an image extraction module (not shown in the figure) configured to extract a face image from the image to be identified based on the obtained position information.
  • the second input unit 504 may be further configured to: input the extracted face image into the selected second recognition model to obtain the key points corresponding to the extracted face image Information and matching information, where the matching information may include a matching index used to characterize the degree of matching between the category of the face corresponding to the input face image and the category of the face corresponding to the second recognition model.
  • in some optional implementations of this embodiment, the image acquisition unit 501 may be further configured to: select an image to be identified from the image sequence corresponding to a target video, where the target video may be a video obtained by photographing a face.
  • in some optional implementations of this embodiment, the apparatus 500 may further include: an image selection unit (not shown in the figure) configured to select, from the image sequence, an image located after the image to be identified and adjacent to the image to be identified as a candidate image to be identified; an image determination unit (not shown in the figure) configured to extract a face image from the candidate image to be identified as a candidate face image and determine the extracted face image in the image to be identified as a reference face image; a condition determination unit (not shown in the figure) configured to determine whether the matching index in the matching information corresponding to the determined reference face image meets a preset condition; and a third input unit (not shown in the figure) configured to, in response to determining that it does, input the extracted candidate face image into the second recognition model into which the determined reference face image was input, to obtain keypoint information and matching information corresponding to the extracted candidate face image.
  • the apparatus 500 provided by the foregoing embodiment of the present application acquires an image to be identified through the image acquisition unit 501; the first input unit 502 then extracts a face image from the image to be identified and inputs the extracted face image into a pre-trained first recognition model to obtain a recognition result corresponding to the extracted face image;
  • the model selection unit 503 selects, from the candidate recognition model set, a candidate recognition model that matches the obtained recognition result as the second recognition model;
  • the second input unit 504 inputs the extracted face image into the selected second recognition model to obtain key point information corresponding to the extracted face image, where the key point information is used to characterize the positions, in the face image, of the key points in the face image.
  • in this way, face images can be recognized using candidate recognition models that are pre-trained to identify different categories of faces to generate key point information, so that face images of different categories can be recognized, which improves the comprehensiveness of information generation; moreover, recognition using a candidate recognition model that matches the category corresponding to the face image can improve the accuracy of the generated information.
  • FIG. 6 illustrates a schematic structural diagram of a computer system 600 suitable for implementing an electronic device (such as the terminal device / server shown in FIG. 1) in the embodiment of the present application.
  • the electronic device shown in FIG. 6 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
  • the computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage portion 608 into a random access memory (RAM) 603.
  • in the RAM 603, various programs and data required for the operation of the system 600 are also stored.
  • the CPU 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604.
  • An input / output (I / O) interface 605 is also connected to the bus 604.
  • the following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage portion 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, and the like.
  • the communication section 609 performs communication processing via a network such as the Internet.
  • a drive 610 is also connected to the I/O interface 605 as necessary.
  • a removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is mounted on the drive 610 as needed, so that a computer program read therefrom is installed into the storage portion 608 as needed.
  • the process described above with reference to the flowchart may be implemented as a computer software program.
  • embodiments of the present disclosure include a computer program product including a computer program carried on a computer-readable medium, the computer program containing program code for performing a method shown in a flowchart.
  • the computer program may be downloaded and installed from a network through the communication portion 609, and / or installed from a removable medium 611.
  • the computer-readable medium described in this application may be a computer-readable signal medium or a computer-readable storage medium or any combination of the foregoing.
  • the computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and the computer-readable medium may send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • each block in the flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function.
  • the functions noted in the blocks may also occur in an order different from that marked in the drawings. For example, two blocks shown in succession may actually be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved.
  • each block in the block diagrams and / or flowcharts, and combinations of blocks in the block diagrams and / or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or can be implemented by a combination of dedicated hardware and computer instructions.
  • the units described in the embodiments of the present application may be implemented by software or hardware.
  • the described units may also be provided in a processor; for example, they may be described as: a processor includes an image acquisition unit, a first input unit, a model selection unit, and a second input unit.
  • the names of these units do not, in some cases, constitute a limitation on the units themselves; for example, the image acquisition unit may also be described as a “unit for acquiring an image to be identified”.
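  • As a purely illustrative sketch of such a processor-hosted arrangement, the four units named above could be grouped on one object, each realized as a method. The class name, the `source` object, and the model objects passed in are assumptions made for the sketch, not the claimed apparatus.

```python
# Illustrative sketch only: the unit names mirror the description above,
# while the face detector and recognition models are assumed to be
# pre-trained objects supplied from elsewhere.

class InformationGenerationApparatus:
    def __init__(self, face_detector, first_model, candidate_models):
        self.face_detector = face_detector        # crops the face image
        self.first_model = first_model            # predicts the face category
        self.candidate_models = candidate_models  # per-category key point models

    # Image acquisition unit: obtain the image to be identified.
    # `source` is assumed to be any object exposing read() that yields an image.
    def acquire_image(self, source):
        return source.read()

    # First input unit: extract the face image and input it into the first
    # recognition model to obtain the recognition result (face category).
    def first_input(self, image):
        face_image = self.face_detector(image)
        return face_image, self.first_model(face_image)

    # Model selection unit: select the candidate recognition model matching
    # the recognition result as the second recognition model.
    def select_model(self, recognition_result):
        return self.candidate_models[recognition_result]

    # Second input unit: input the face image into the second recognition
    # model to obtain the key point information.
    def second_input(self, face_image, second_model):
        return second_model(face_image)
```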
  • the present application also provides a computer-readable medium, which may be included in the electronic device described in the foregoing embodiments, or may exist alone without being assembled into the electronic device.
  • the computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: obtain an image to be identified, where the image to be identified includes a face image; extract a face image from the image to be identified, and input the extracted face image into a pre-trained first recognition model to obtain a recognition result corresponding to the extracted face image, where the recognition result is used to characterize the category of the face corresponding to the face image; select, from a candidate recognition model set, a candidate recognition model that matches the obtained recognition result as the second recognition model, where the candidate recognition models in the candidate recognition model set are pre-trained models for recognizing faces of different categories to generate key point information; and input the extracted face image into the selected second recognition model to obtain key point information corresponding to the extracted face image, where the key point information is used to characterize the positions of the key points of the face image within the face image.
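  • A short usage sketch of these steps is given below. The lambda stubs merely stand in for the pre-trained models of the disclosure, the category label and image path are hypothetical, and the OpenCV read is only an assumed way of obtaining the image to be identified.

```python
# Illustrative end-to-end usage; stubs replace the pre-trained models and
# the image path is hypothetical.
import cv2  # assumed available for reading the image to be identified

detect_face = lambda img: img        # stub: would crop the face region
first_model = lambda face: "adult"   # stub: would predict the face category
candidate_models = {                 # stub: per-category key point models
    "adult": lambda face: [(120, 80), (180, 80), (150, 130)],
}

image = cv2.imread("photo.jpg")            # obtain the image to be identified
face_image = detect_face(image)            # extract the face image
category = first_model(face_image)         # recognition result: category of the face
second_model = candidate_models[category]  # matching candidate model as second model
keypoints = second_model(face_image)       # key point information
print(keypoints)                           # positions of key points in the face image
```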

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

According to various embodiments, the present invention relates to a method and a device for generating information. One specific way of implementing the method consists in: acquiring an image to be identified; extracting a face image from the image to be identified, and inputting the extracted face image into a pre-trained first recognition model to obtain a recognition result corresponding to the extracted face image, the recognition result being used to characterize the category of face corresponding to the face image; selecting, from a set of candidate recognition models, a candidate recognition model that matches the obtained recognition result to serve as a second recognition model, the candidate recognition models in the set being pre-trained models used to recognize different categories of faces so as to generate key point information; and inputting the extracted face image into the second recognition model to obtain the key point information corresponding to the extracted face image, the key point information being used to characterize the positions, within the face image, of key points of the face image. This embodiment increases the comprehensiveness and accuracy of information generation.
PCT/CN2018/116182 2018-07-27 2018-11-19 Procédé et dispositif utilisés pour la génération d'informations WO2020019591A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810846313.7 2018-07-27
CN201810846313.7A CN109034069B (zh) 2018-07-27 2018-07-27 用于生成信息的方法和装置

Publications (1)

Publication Number Publication Date
WO2020019591A1 true WO2020019591A1 (fr) 2020-01-30

Family

ID=64647253

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/116182 WO2020019591A1 (fr) 2018-07-27 2018-11-19 Procédé et dispositif utilisés pour la génération d'informations

Country Status (2)

Country Link
CN (1) CN109034069B (fr)
WO (1) WO2020019591A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111461352A (zh) * 2020-04-17 2020-07-28 支付宝(杭州)信息技术有限公司 模型训练、业务节点识别方法、装置及电子设备
CN112241709A (zh) * 2020-10-21 2021-01-19 北京字跳网络技术有限公司 图像处理方法、胡子变换网络的训练方法、装置
CN112926479A (zh) * 2021-03-08 2021-06-08 新疆爱华盈通信息技术有限公司 猫脸识别方法、系统、电子装置及存储介质
CN113221767A (zh) * 2021-05-18 2021-08-06 北京百度网讯科技有限公司 训练活体人脸识别模型、识别活体人脸的方法及相关装置
CN113808044A (zh) * 2021-09-17 2021-12-17 北京百度网讯科技有限公司 加密掩膜确定方法、装置、设备以及存储介质

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109740567A (zh) * 2019-01-18 2019-05-10 北京旷视科技有限公司 关键点定位模型训练方法、定位方法、装置及设备
CN109919244B (zh) * 2019-03-18 2021-09-07 北京字节跳动网络技术有限公司 用于生成场景识别模型的方法和装置
CN110347134A (zh) * 2019-07-29 2019-10-18 南京图玩智能科技有限公司 一种ai智能水产养殖样本识别方法及养殖系统
CN110688894B (zh) * 2019-08-22 2024-05-10 平安科技(深圳)有限公司 一种手掌关键点提取方法和装置
CN115240230A (zh) * 2022-09-19 2022-10-25 星宠王国(北京)科技有限公司 一种犬类脸部检测模型训练方法、装置、检测方法及装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070183665A1 (en) * 2006-02-06 2007-08-09 Mayumi Yuasa Face feature point detecting device and method
CN104715227A (zh) * 2013-12-13 2015-06-17 北京三星通信技术研究有限公司 人脸关键点的定位方法和装置
CN105512627A (zh) * 2015-12-03 2016-04-20 腾讯科技(深圳)有限公司 一种关键点的定位方法及终端
CN105760836A (zh) * 2016-02-17 2016-07-13 厦门美图之家科技有限公司 基于深度学习的多角度人脸对齐方法、系统及拍摄终端
CN106295591A (zh) * 2016-08-17 2017-01-04 乐视控股(北京)有限公司 基于人脸图像的性别识别方法及装置

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1266642C (zh) * 2003-10-09 2006-07-26 重庆大学 基于多类别的人脸分类识别方法
WO2008139093A2 (fr) * 2007-04-06 2008-11-20 France Telecom Determination d'un modele de categorie d'images
CN107103269A (zh) * 2016-02-23 2017-08-29 芋头科技(杭州)有限公司 一种表情反馈方法及智能机器人
CN107491726B (zh) * 2017-07-04 2020-08-04 重庆邮电大学 一种基于多通道并行卷积神经网络的实时表情识别方法
CN108197644A (zh) * 2017-12-27 2018-06-22 深圳市大熊动漫文化有限公司 一种图像识别方法和装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070183665A1 (en) * 2006-02-06 2007-08-09 Mayumi Yuasa Face feature point detecting device and method
CN104715227A (zh) * 2013-12-13 2015-06-17 北京三星通信技术研究有限公司 人脸关键点的定位方法和装置
CN105512627A (zh) * 2015-12-03 2016-04-20 腾讯科技(深圳)有限公司 一种关键点的定位方法及终端
CN105760836A (zh) * 2016-02-17 2016-07-13 厦门美图之家科技有限公司 基于深度学习的多角度人脸对齐方法、系统及拍摄终端
CN106295591A (zh) * 2016-08-17 2017-01-04 乐视控股(北京)有限公司 基于人脸图像的性别识别方法及装置

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111461352A (zh) * 2020-04-17 2020-07-28 支付宝(杭州)信息技术有限公司 模型训练、业务节点识别方法、装置及电子设备
CN111461352B (zh) * 2020-04-17 2023-05-09 蚂蚁胜信(上海)信息技术有限公司 模型训练、业务节点识别方法、装置及电子设备
CN112241709A (zh) * 2020-10-21 2021-01-19 北京字跳网络技术有限公司 图像处理方法、胡子变换网络的训练方法、装置
CN112926479A (zh) * 2021-03-08 2021-06-08 新疆爱华盈通信息技术有限公司 猫脸识别方法、系统、电子装置及存储介质
CN113221767A (zh) * 2021-05-18 2021-08-06 北京百度网讯科技有限公司 训练活体人脸识别模型、识别活体人脸的方法及相关装置
CN113221767B (zh) * 2021-05-18 2023-08-04 北京百度网讯科技有限公司 训练活体人脸识别模型、识别活体人脸的方法及相关装置
CN113808044A (zh) * 2021-09-17 2021-12-17 北京百度网讯科技有限公司 加密掩膜确定方法、装置、设备以及存储介质
CN113808044B (zh) * 2021-09-17 2022-11-01 北京百度网讯科技有限公司 加密掩膜确定方法、装置、设备以及存储介质

Also Published As

Publication number Publication date
CN109034069A (zh) 2018-12-18
CN109034069B (zh) 2021-04-09

Similar Documents

Publication Publication Date Title
WO2020019591A1 (fr) Procédé et dispositif utilisés pour la génération d'informations
CN109726624B (zh) 身份认证方法、终端设备和计算机可读存储介质
CN107492379B (zh) 一种声纹创建与注册方法及装置
WO2020006961A1 (fr) Procédé et dispositif d'extraction d'image
CN108509915B (zh) 人脸识别模型的生成方法和装置
WO2019242222A1 (fr) Procédé et dispositif à utiliser lors de la génération d'informations
WO2020000879A1 (fr) Procédé et appareil de reconnaissance d'image
WO2020024484A1 (fr) Procédé et dispositif de production de données
KR20210053825A (ko) 비디오를 처리하기 위한 방법 및 장치
WO2020000876A1 (fr) Procédé et dispositif de génération de modèle
CN110740389B (zh) 视频定位方法、装置、计算机可读介质及电子设备
WO2020029466A1 (fr) Procédé et appareil de traitement d'image
CN108549848B (zh) 用于输出信息的方法和装置
WO2021083069A1 (fr) Procédé et dispositif de formation de modèle d'échange de visages
US11126827B2 (en) Method and system for image identification
WO2020006964A1 (fr) Procédé et dispositif de détection d'image
JP7394809B2 (ja) ビデオを処理するための方法、装置、電子機器、媒体及びコンピュータプログラム
CN112507090B (zh) 用于输出信息的方法、装置、设备和存储介质
CN110363220B (zh) 行为类别检测方法、装置、电子设备和计算机可读介质
CN109582825B (zh) 用于生成信息的方法和装置
KR20200109239A (ko) 이미지를 처리하는 방법, 장치, 서버 및 저장 매체
CN108399401B (zh) 用于检测人脸图像的方法和装置
WO2022193911A1 (fr) Procédé et appareil d'acquisition d'informations d'instruction, support de stockage lisible et dispositif électronique
WO2020007191A1 (fr) Procédé et appareil de reconnaissance et de détection de corps vivant, support et dispositif électronique
CN109101956B (zh) 用于处理图像的方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18927263

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 18.05.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18927263

Country of ref document: EP

Kind code of ref document: A1