CN108985257A - Method and apparatus for generating information - Google Patents

Method and apparatus for generating information

Info

Publication number
CN108985257A
CN108985257A (application CN201810877023.9A)
Authority
CN
China
Prior art keywords
facial image
sample
information
key point
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810877023.9A
Other languages
Chinese (zh)
Inventor
邓启力
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201810877023.9A priority Critical patent/CN108985257A/en
Publication of CN108985257A publication Critical patent/CN108985257A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions

Landscapes

  • Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present application disclose a method and apparatus for generating information. One specific embodiment of the method includes: obtaining a target facial image; and inputting the target facial image into a pre-trained face recognition model to obtain face key point information and head pose information corresponding to the target facial image, where the face key point information characterizes the positions of face key points in a facial image, the head pose information characterizes the pose of the head corresponding to the facial image, and the face recognition model characterizes the correspondence between a facial image and the face key point information and head pose information corresponding to the facial image. This embodiment improves the efficiency of information generation.

Description

Method and apparatus for generating information
Technical field
Embodiments of the present application relate to the field of computer technology, and in particular to a method and apparatus for generating information.
Background art
Face key points are points in a face with clear semantic meaning, such as the point corresponding to the tip of the nose or the points corresponding to the corners of the eyes. Head pose refers to the orientation of a person's head in a three-dimensional coordinate system.
At present, with the development of face key point detection technology, head pose estimation is widely applied. In the prior art, head pose estimation for a facial image usually first detects the face key points corresponding to the facial image, and then estimates the head pose corresponding to the facial image from the detected face key points.
Summary of the invention
Embodiments of the present application propose a method and apparatus for generating information.
In a first aspect, an embodiment of the present application provides a method for generating information, the method comprising: obtaining a target facial image; and inputting the target facial image into a pre-trained face recognition model to obtain face key point information and head pose information corresponding to the target facial image, where the face key point information characterizes the positions of face key points in a facial image, the head pose information characterizes the pose of the head corresponding to the facial image, and the face recognition model characterizes the correspondence between a facial image and its face key point information and head pose information.
In some embodiments, the face recognition model includes a feature extraction model, a first information generation model and a second information generation model; and inputting the target facial image into the pre-trained face recognition model to obtain the face key point information and head pose information corresponding to the target facial image comprises: inputting the target facial image into the feature extraction model to obtain and output image features of the target facial image; inputting the image features into the first information generation model to obtain the face key point information corresponding to the target facial image; and inputting the image features into the second information generation model to obtain the head pose information corresponding to the target facial image.
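The two-branch structure described above can be sketched as follows. This is a minimal illustration only: the feature extractor and the two toy "information generation" heads are placeholder functions standing in for the trained sub-models, not the networks disclosed in the patent.

```python
def extract_features(image):
    """Stand-in feature extraction model: flatten the image into a feature vector."""
    return [px for row in image for px in row]

def first_info_model(features):
    """Toy first information generation model: emit (x, y) pairs for two key points."""
    mean = sum(features) / len(features)
    return [(mean, mean), (mean + 1, mean + 1)]  # two dummy key points

def second_info_model(features):
    """Toy second information generation model: emit (x, y, z) rotation angles."""
    mean = sum(features) / len(features)
    return (mean, mean / 2, mean / 4)

def face_recognition_model(image):
    features = extract_features(image)        # shared features, computed once
    key_points = first_info_model(features)   # branch 1: face key point information
    head_pose = second_info_model(features)   # branch 2: head pose information
    return key_points, head_pose

image = [[1, 2], [3, 4]]
key_points, head_pose = face_recognition_model(image)
# key_points -> [(2.5, 2.5), (3.5, 3.5)]; head_pose -> (2.5, 1.25, 0.625)
```

The point of the structure is that the (expensive) feature extraction runs once and both outputs are read from the same features, which is what lets the model produce key points and pose in a single pass.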
In some embodiments, the face recognition model is trained as follows: obtaining an initial training sample set, where an initial training sample includes a sample facial image and sample face key point information annotated in advance for the sample facial image, the sample face key point information characterizing the positions of sample face key points in the sample facial image; for each initial training sample in the initial training sample set, determining, based on the sample face key point information in the initial training sample, the sample head pose information corresponding to the sample facial image in that initial training sample, and composing a training sample from the determined sample head pose information and the initial training sample, where the sample head pose information characterizes the pose of the sample head corresponding to the sample facial image; and, using a machine learning method, taking the sample facial image of each composed training sample as input and the sample face key point information and sample head pose information corresponding to the input sample facial image as desired output, and training to obtain the face recognition model.
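The sample-composition step above can be sketched as follows: each initial sample carries only an image and annotated key points, and a pose label is derived from the key points and attached. The `derive_pose` function here is a hypothetical placeholder (a centroid, not a real pose solver such as solvePnP), used only to show the data flow.

```python
def derive_pose(key_points):
    """Placeholder for the key-points -> head-pose derivation (e.g. a PnP solver)."""
    cx = sum(x for x, _ in key_points) / len(key_points)
    cy = sum(y for _, y in key_points) / len(key_points)
    return (cx, cy, 0.0)  # dummy (x, y, z) rotation angles

def compose_training_samples(initial_samples):
    training_samples = []
    for image, key_points in initial_samples:
        pose = derive_pose(key_points)                      # determined pose label
        training_samples.append((image, key_points, pose))  # image -> (key points, pose)
    return training_samples

initial = [("img_a", [(10, 40), (30, 40)]), ("img_b", [(12, 38), (28, 42)])]
samples = compose_training_samples(initial)
# each composed sample now pairs the image with BOTH desired outputs
```

The design choice worth noting is that the pose labels are not annotated by hand; they are computed once from the existing key point annotations, so an ordinary landmark dataset suffices to train the joint model.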
In some embodiments, obtaining the target facial image comprises: obtaining a face video captured by shooting a target face; and selecting a facial image from the facial image sequence corresponding to the face video as the target facial image.
In some embodiments, after inputting the target facial image into the pre-trained face recognition model and obtaining the face key point information and head pose information corresponding to the target facial image, the method further comprises: determining a facial image region from the target facial image; rotating the facial image region in the target facial image based on the obtained face key point information corresponding to the target facial image so that the facial image region satisfies a preset condition, and generating a new target facial image based on the rotated facial image region; and inputting the new target facial image into the face recognition model to obtain the face key point information and head pose information corresponding to the new target facial image.
In a second aspect, an embodiment of the present application provides an apparatus for generating information, the apparatus comprising: an image obtaining unit configured to obtain a target facial image; and a first input unit configured to input the target facial image into a pre-trained face recognition model to obtain face key point information and head pose information corresponding to the target facial image, where the face key point information characterizes the positions of face key points in a facial image, the head pose information characterizes the pose of the head corresponding to the facial image, and the face recognition model characterizes the correspondence between a facial image and its face key point information and head pose information.
In some embodiments, the face recognition model includes a feature extraction model, a first information generation model and a second information generation model; and the first input unit comprises: a first input module configured to input the target facial image into the feature extraction model to obtain and output image features of the target facial image; and a second input module configured to input the image features into the first information generation model to obtain the face key point information corresponding to the target facial image, and to input the image features into the second information generation model to obtain the head pose information corresponding to the target facial image.
In some embodiments, the face recognition model is trained as follows: obtaining an initial training sample set, where an initial training sample includes a sample facial image and sample face key point information annotated in advance for the sample facial image, the sample face key point information characterizing the positions of sample face key points in the sample facial image; for each initial training sample in the initial training sample set, determining, based on the sample face key point information in the initial training sample, the sample head pose information corresponding to the sample facial image in that initial training sample, and composing a training sample from the determined sample head pose information and the initial training sample, where the sample head pose information characterizes the pose of the sample head corresponding to the sample facial image; and, using a machine learning method, taking the sample facial image of each composed training sample as input and the sample face key point information and sample head pose information corresponding to the input sample facial image as desired output, and training to obtain the face recognition model.
In some embodiments, the image obtaining unit comprises: a video obtaining module configured to obtain a face video captured by shooting a target face; and an image selecting module configured to select a facial image from the facial image sequence corresponding to the face video as the target facial image.
In some embodiments, the apparatus further comprises: a region determining unit configured to determine a facial image region from the target facial image; a region rotating unit configured to rotate the facial image region in the target facial image based on the obtained face key point information corresponding to the target facial image so that the facial image region satisfies a preset condition, and to generate a new target facial image based on the rotated facial image region; and a second input unit configured to input the new target facial image into the face recognition model to obtain the face key point information and head pose information corresponding to the new target facial image.
In a third aspect, an embodiment of the present application provides an electronic device, comprising: one or more processors; and a storage apparatus on which one or more programs are stored, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any embodiment of the above method for generating information.
In a fourth aspect, an embodiment of the present application provides a computer-readable medium on which a computer program is stored, where the program, when executed by a processor, implements the method of any embodiment of the above method for generating information.
The method and apparatus for generating information provided by the embodiments of the present application obtain a target facial image and then input it into a pre-trained face recognition model to obtain the face key point information and head pose information corresponding to the target facial image, where the face key point information can characterize the positions of face key points in a facial image and the head pose information can characterize the pose of the head corresponding to the facial image. The pre-trained face recognition model can thus generate the face key point information and head pose information corresponding to a facial image at the same time; that is, head pose estimation is performed while face key point detection is performed, which improves the efficiency of information generation.
Brief description of the drawings
Other features, objects and advantages of the present application will become more apparent by reading the following detailed description of non-limiting embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which an embodiment of the present application can be applied;
Fig. 2 is a flowchart of one embodiment of the method for generating information according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the method for generating information according to an embodiment of the present application;
Fig. 4 is a flowchart of another embodiment of the method for generating information according to the present application;
Fig. 5 is a structural schematic diagram of one embodiment of the apparatus for generating information according to the present application;
Fig. 6 is a structural schematic diagram of a computer system adapted to implement an electronic device of an embodiment of the present application.
Detailed description of embodiments
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the related invention, rather than to limit the invention. It should also be noted that, for ease of description, only the parts relevant to the related invention are shown in the drawings.
It should be noted that the embodiments of the present application and the features in the embodiments may be combined with each other as long as there is no conflict. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for generating information or the apparatus for generating information of the present application can be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102 and 103, a network 104 and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102 and 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links or fiber optic cables.
A user may use the terminal devices 101, 102 and 103 to interact with the server 105 through the network 104 to receive or send messages. Various communication client applications, such as image processing applications, photo-beautification software, web browser applications, search applications and social platform software, may be installed on the terminal devices 101, 102 and 103.
The terminal devices 101, 102 and 103 may be hardware or software. When they are hardware, they may be various electronic devices with a display screen, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers and desktop computers. When the terminal devices 101, 102 and 103 are software, they may be installed in the electronic devices listed above; they may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module, which is not specifically limited here.
The server 105 may be a server providing various services, for example an image processing server that processes the facial images sent by the terminal devices 101, 102 and 103. The image processing server may analyze and otherwise process data such as the received facial images, and obtain processing results (for example, face key point information and head pose information).
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module, which is not specifically limited here.
It should be understood that the numbers of terminal devices, networks and servers in Fig. 1 are merely illustrative; there may be any number of each according to implementation needs. When the target facial image, or the data used in generating the face key point information and head pose information, does not need to be obtained remotely, the above system architecture may include no network and only a terminal device or a server.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for generating information according to the present application is shown. The method for generating information comprises the following steps:
Step 201: obtaining a target facial image.
In this embodiment, the execution body of the method for generating information (for example, the server shown in Fig. 1) may obtain the target facial image through a wired or wireless connection. The target facial image may be a facial image for which the corresponding face key points and head pose are to be determined. A facial image may be an image obtained by photographing a face.
It should be noted that, here, a face key point may be a point with clear semantic meaning that can be used to characterize a constituent part of a face; for example, a face key point may be a point characterizing the nose or a point characterizing an eye. Head pose refers to the orientation of the head in a three-dimensional coordinate system. Here, the orientation of the head in the three-dimensional coordinate system may be characterized by the angles through which the head rotates around the X, Y and Z axes of the coordinate system.
It should also be noted that the above execution body may obtain a target facial image pre-stored locally, or may obtain a target facial image sent by an electronic device (for example, a terminal device shown in Fig. 1) with which it is in communication connection.
In some optional implementations of this embodiment, the execution body may obtain the target facial image through the following steps: first, the execution body may obtain a face video captured by shooting a target face; then, the execution body may select a facial image from the facial image sequence corresponding to the face video as the target facial image. The target face may be a face whose corresponding face key points and head pose are to be determined.
It should be noted that a video is essentially an image sequence arranged in chronological order, so the face video may correspond to a facial image sequence. Here, the execution body may select a facial image from the facial image sequence as the target facial image in various ways, for example randomly, or by preferentially selecting a facial image with better sharpness.
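The "prefer the sharpest frame" strategy above can be sketched as follows. The sharpness score here (total horizontal pixel difference of a grayscale frame) is a deliberately simple stand-in assumption; a real system would likely use something like the variance of a Laplacian.

```python
def sharpness(frame):
    """Crude sharpness score: sum of absolute horizontal pixel differences."""
    return sum(abs(row[i + 1] - row[i])
               for row in frame for i in range(len(row) - 1))

def pick_target_frame(frames):
    """Select the frame with the highest sharpness score as the target facial image."""
    return max(frames, key=sharpness)

blurry = [[10, 11, 10], [11, 10, 11]]   # small pixel differences, low score
sharp = [[0, 200, 0], [200, 0, 200]]    # large pixel differences, high score
chosen = pick_target_frame([blurry, sharp])
```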
Step 202: inputting the target facial image into a pre-trained face recognition model to obtain the face key point information and head pose information corresponding to the target facial image.
In this embodiment, based on the target facial image obtained in step 201, the execution body may input the target facial image into the pre-trained face recognition model to obtain the face key point information and head pose information corresponding to the target facial image.
The face key point information may include but is not limited to at least one of: numbers, text, symbols and images, and may be used to characterize the positions of face key points in the facial image. For example, the face key point information may be key point coordinates characterizing the positions of the face key points in the facial image. Here, the key point coordinates may be coordinates in a coordinate system pre-established based on the facial image.
The head pose information may include but is not limited to at least one of: numbers, text, symbols and images, and may be used to characterize the pose of the head corresponding to the facial image. For example, with respect to a three-dimensional world coordinate system, the head pose information may include the angle by which the head corresponding to the facial image is rotated about the X axis, the angle by which it is rotated about the Y axis, and the angle by which it is rotated about the Z axis.
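As a worked illustration of this representation, the three angles can be assembled into a single 3x3 rotation matrix. The Z * Y * X composition order used below is an assumption for illustration; rotation conventions vary between systems.

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def head_pose_matrix(x_deg, y_deg, z_deg):
    """(x, y, z) rotation angles in degrees -> rotation matrix, Z * Y * X order."""
    rx, ry, rz = (math.radians(v) for v in (x_deg, y_deg, z_deg))
    return matmul(rot_z(rz), matmul(rot_y(ry), rot_x(rx)))

R0 = head_pose_matrix(0.0, 0.0, 0.0)   # no rotation -> identity matrix
R = head_pose_matrix(8.0, 20.0, 3.0)   # the pose triple used in the Fig. 3 scenario
```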
The face recognition model may be used to characterize the correspondence between a facial image and the face key point information and head pose information corresponding to the facial image. Specifically, the face recognition model may be a model obtained by training an initial model (for example, a convolutional neural network (CNN) or a residual network (ResNet)) on training samples using a machine learning method.
In some optional implementations of this embodiment, the face recognition model may be trained as follows:
First, an initial training sample set is obtained. An initial training sample may include a sample facial image and sample face key point information annotated in advance for the sample facial image. The sample face key point information may be used to characterize the positions of sample face key points in the sample facial image.
Then, for each initial training sample in the initial training sample set, sample head pose information corresponding to the sample facial image in that initial training sample is determined based on the sample face key point information in the initial training sample, and a training sample is composed from the determined sample head pose information and the initial training sample. The sample head pose information may be used to characterize the pose of the sample head corresponding to the sample facial image.
Here, various methods may be used to determine the sample head pose information from the sample face key point information. For example, based on the sample face key point information, the solvePnP function of OpenCV (Open Source Computer Vision Library) may be used to determine the sample head pose information.
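OpenCV's solvePnP returns the pose as a rotation vector; a stdlib-only sketch of the post-processing step (rotation vector to rotation matrix via Rodrigues' formula, then to the (x, y, z) rotation angles used as pose labels) is shown below. The Z * Y * X angle extraction convention is an assumption for illustration.

```python
import math

def rodrigues(rvec):
    """Rotation vector -> 3x3 rotation matrix (Rodrigues' formula)."""
    theta = math.sqrt(sum(v * v for v in rvec))
    if theta < 1e-12:
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    kx, ky, kz = (v / theta for v in rvec)
    c, s, t = math.cos(theta), math.sin(theta), 1.0 - math.cos(theta)
    return [
        [c + kx * kx * t,      kx * ky * t - kz * s, kx * kz * t + ky * s],
        [ky * kx * t + kz * s, c + ky * ky * t,      ky * kz * t - kx * s],
        [kz * kx * t - ky * s, kz * ky * t + kx * s, c + kz * kz * t],
    ]

def euler_angles(r):
    """Rotation matrix -> (x, y, z) rotation angles in degrees (Z * Y * X order)."""
    x = math.atan2(r[2][1], r[2][2])
    y = math.atan2(-r[2][0], math.hypot(r[2][1], r[2][2]))
    z = math.atan2(r[1][0], r[0][0])
    return tuple(math.degrees(v) for v in (x, y, z))

# A rotation of 30 degrees about the X axis, expressed as a rotation vector:
rvec = (math.radians(30.0), 0.0, 0.0)
angles = euler_angles(rodrigues(rvec))
# angles -> approximately (30.0, 0.0, 0.0)
```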
Finally, using a machine learning method, the sample facial image of each composed training sample is taken as input, the sample face key point information and sample head pose information corresponding to the input sample facial image are taken as desired output, and a predetermined initial model is trained to obtain the face recognition model.
In practice, after the training samples are composed, the initial model may be trained in various ways to obtain the face recognition model. As an example, a training sample may be selected from the composed training samples and the following training step executed: inputting the sample facial image of the selected training sample into the initial model to obtain the face key point information and head pose information corresponding to the sample facial image; taking the sample face key point information and sample head pose information corresponding to the input sample facial image as the desired output of the initial model, and determining a loss value of the obtained face key point information relative to the sample face key point information and a loss value of the obtained head pose information relative to the sample head pose information; using the determined loss values to adjust the parameters of the initial model by back-propagation; determining whether there is an unselected training sample among the composed training samples; and, in response to determining that there is no unselected training sample, determining the adjusted initial model as the face recognition model.
It should be noted that the way in which training samples are selected is not limited in the present application; for example, they may be selected randomly, or training samples whose sample facial images have better sharpness may be selected preferentially. It should also be noted that various preset loss functions may be used here to determine the loss value of the obtained face key point information relative to the sample face key point information and the loss value of the obtained head pose information relative to the sample head pose information; for example, the loss values may be calculated using a Euclidean distance loss function.
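A hedged sketch of the two loss values described above, both computed with a Euclidean distance loss. Summing the two branch losses into one training signal is an illustrative choice; the text does not specify how the two loss values are combined before back-propagation.

```python
import math

def euclidean_loss(pred, target):
    """Euclidean distance between a predicted and a target vector."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, target)))

def joint_loss(pred_kp, true_kp, pred_pose, true_pose):
    # flatten (x, y) key points so both branches use the same loss function
    kp_loss = euclidean_loss([v for pt in pred_kp for v in pt],
                             [v for pt in true_kp for v in pt])
    pose_loss = euclidean_loss(pred_pose, true_pose)
    return kp_loss + pose_loss

loss = joint_loss(pred_kp=[(10.0, 40.0)], true_kp=[(13.0, 44.0)],
                  pred_pose=(8.0, 20.0, 3.0), true_pose=(8.0, 20.0, 9.0))
# kp_loss = sqrt(3^2 + 4^2) = 5.0, pose_loss = 6.0, loss = 11.0
```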
In this example, the following step may also be included: in response to determining that there are unselected training samples, selecting a training sample again from the unselected training samples, and continuing the above training step using the most recently adjusted initial model as the new initial model.
It should be noted that, in practice, the execution body of the steps for generating the face recognition model may be the same as or different from the execution body of the method for generating information. If they are the same, the execution body of the steps for generating the face recognition model may store the trained face recognition model locally after training. If they are different, the execution body of the steps for generating the face recognition model may send the trained face recognition model to the execution body of the method for generating information after training.
In some optional implementations of this embodiment, after inputting the target facial image into the above face recognition model and obtaining the face key point information and head pose information corresponding to the target facial image, the execution body may further execute the following steps:
First, the execution body may determine a facial image region from the target facial image.
Here, the execution body may determine the facial image region in various ways. For example, the facial image region may be determined by a face recognition technique; alternatively, the target facial image may be output and a manually annotated facial image region obtained.
Then, based on the obtained face key point information corresponding to the target facial image, the execution body may rotate the facial image region in the target facial image so that the facial image region satisfies a preset condition, and generate a new target facial image based on the rotated facial image region. The facial image region included in the new target facial image is the rotated facial image region. In addition, it should be noted that the execution body may select any point in the plane of the facial image region as the rotation center for rotating the facial image region, which is not limited here.
Here, the preset condition may be a condition pre-set by a technician for making the facial image region meet the technician's standard. Specifically, the preset condition may constrain face key points among those indicated by the obtained face key point information, so that the facial image region meets the technician's standard.
As an example, the obtained face key point information corresponding to the target facial image includes the coordinates of the two key points corresponding to the eyes in the face. The preset condition may then be that the line connecting the key points corresponding to the eyes in the facial image region is parallel to the horizontal direction. Here, the coordinates of the key points corresponding to the eyes may be coordinates in a coordinate system whose horizontal axis is the horizontal direction and whose vertical axis is the vertical direction. The execution body may then rotate the facial image region in the target facial image so that the ordinates of the two key points corresponding to the eyes are the same (that is, the line connecting the two key points corresponding to the eyes is parallel to the horizontal direction, satisfying the preset condition).
Here, it can be understood that the line connecting the key points corresponding to the eyes is usually perpendicular to the head center line. Thus, by the rotation processing of the facial image region in the above example, the head center line of the head corresponding to the facial image region can be made perpendicular to the horizontal direction. Here, the technician's standard may be that the head center line of the head corresponding to the facial image region is perpendicular to the horizontal direction. Of course, the technician's standard may also be another standard, for example that the head center line of the head corresponding to the facial image region is parallel to the horizontal direction. It should be noted that the head center line may be part of the center line corresponding to the human body; considering only the appearance of the human body, the center line corresponding to the human body can divide the human body into two symmetrical parts.
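The alignment rotation in this example can be sketched as follows: compute the angle of the line through the two eye key points and rotate the key points about the region center so the line becomes horizontal (the preset condition). Rotating the image pixels themselves would use the same angle; the midpoint rotation center is an illustrative choice, since the text leaves the center unrestricted.

```python
import math

def rotate_point(p, center, angle):
    """Rotate point p about center by angle (radians, counter-clockwise)."""
    x, y = p[0] - center[0], p[1] - center[1]
    c, s = math.cos(angle), math.sin(angle)
    return (center[0] + c * x - s * y, center[1] + s * x + c * y)

def align_eyes(left_eye, right_eye):
    """Rotate the two eye key points so their connecting line is horizontal."""
    # angle of the eye line above the horizontal
    angle = math.atan2(right_eye[1] - left_eye[1], right_eye[0] - left_eye[0])
    center = ((left_eye[0] + right_eye[0]) / 2, (left_eye[1] + right_eye[1]) / 2)
    # rotate by -angle so both ordinates become equal after rotation
    return (rotate_point(left_eye, center, -angle),
            rotate_point(right_eye, center, -angle))

left, right = align_eyes((30.0, 50.0), (70.0, 58.0))
# after rotation both eye key points have ordinate 54.0
```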
Finally, the above-described execution body may input the new target facial image into the above face recognition model, and obtain the face key point information and head pose information corresponding to the new target facial image.
It may be understood that, since training samples are limited, the head poses corresponding to the sample facial images in the training sample set cannot cover all possible head poses. In general, therefore, the trained face recognition model can more accurately recognize facial images corresponding to common head poses (for example, the head pose in which the central axis of the head is perpendicular to the horizontal direction). Hence, in this implementation, recognizing the new target facial image, which is based on the rotated facial image region that satisfies the preset condition, can yield more accurate face key point information and head pose information, thereby improving the accuracy of information generation.
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for generating information according to the present embodiment. In the application scenario of Fig. 3, the server 301 first obtains the target facial image 303 sent by the terminal device 302. Then, the server 301 may input the target facial image 303 into the pre-trained face recognition model 304, and obtain the face key point information 305 "(10, 40)" and the head pose information 306 "(8°, 20°, 3°)" corresponding to the target facial image 303.
Here, the face key point information "(10, 40)" may be the coordinates of the face key point corresponding to the nose in a plane rectangular coordinate system established in advance based on the target facial image 303. Specifically, "10" may be the x-axis coordinate, and "40" the y-axis coordinate. The head pose information "(8°, 20°, 3°)" may be used to characterize the rotation angles of the head corresponding to the target facial image relative to the X-axis, Y-axis, and Z-axis of a three-dimensional world coordinate system. Specifically, "8°" may be the angle of rotation relative to the X-axis, "20°" the angle relative to the Y-axis, and "3°" the angle relative to the Z-axis.
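The three angles of the head pose information can be composed into a single rotation matrix. The sketch below assumes one common convention (rotate about X, then Y, then Z); the patent only fixes the meaning of the three angles, not the composition order:

```python
import math

def _rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def _rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def _rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def _matmul(a, b):
    # Plain 3x3 matrix product.
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def head_pose_to_matrix(x_deg, y_deg, z_deg):
    # Compose the rotations about the X-, Y- and Z-axes of the world
    # coordinate system into one 3x3 rotation matrix (X applied first).
    x, y, z = (math.radians(v) for v in (x_deg, y_deg, z_deg))
    return _matmul(_rot_z(z), _matmul(_rot_y(y), _rot_x(x)))

R = head_pose_to_matrix(8.0, 20.0, 3.0)  # the pose from the example
```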
In the method provided by the above embodiment of the present application, a target facial image is obtained, and the target facial image is then input into a pre-trained face recognition model to obtain the face key point information and head pose information corresponding to the target facial image, wherein the face key point information may be used to characterize the positions of face key points in the facial image, and the head pose information may be used to characterize the pose of the head corresponding to the facial image. In this way, the pre-trained face recognition model can be used to generate the face key point information and the head pose information corresponding to a facial image simultaneously; that is, head pose estimation is achieved while performing face key point detection, which improves the efficiency of information generation.
With further reference to Fig. 4, a flow 400 of another embodiment of the method for generating information is illustrated. The flow 400 of the method for generating information includes the following steps:
Step 401: obtain a target facial image.
In the present embodiment, the execution body of the method for generating information (for example, the server shown in Fig. 1) may obtain the target facial image through a wired or wireless connection. Here, the target facial image may be a facial image whose corresponding face key points and head pose are to be determined, and a facial image may be an image obtained by photographing a human face.
It should be noted that step 401 may be implemented in a manner similar to step 201 in the previous embodiment. Accordingly, the description above regarding step 201 is also applicable to step 401 of the present embodiment, and will not be repeated here.
Step 402: input the target facial image into the feature extraction model of a pre-trained face recognition model, and obtain and output the image features of the target facial image.
In the present embodiment, the pre-trained face recognition model may include a feature extraction model. Thus, based on the target facial image obtained in step 401, the above-described execution body may input the target facial image into the feature extraction model of the pre-trained face recognition model, and obtain and output the image features of the target facial image.
In the present embodiment, the feature extraction model may be used to extract and output the image features of the target facial image. Specifically, the feature extraction model may include a structure for extracting image features (for example, a convolutional layer), and of course may also include other structures (for example, a pooling layer); no limitation is imposed here.
Step 403: input the image features into the first information generation model of the face recognition model to obtain the face key point information corresponding to the target facial image, and input the image features into the second information generation model of the face recognition model to obtain the head pose information corresponding to the target facial image.
In the present embodiment, the face recognition model may further include a first information generation model and a second information generation model, where the first information generation model may be used to generate face key point information, and the second information generation model may be used to generate head pose information. The above-described execution body may then input the image features output by the feature extraction model into the first information generation model and the second information generation model respectively, and obtain the face key point information and head pose information corresponding to the target facial image.
Here, the face key point information may include, but is not limited to, at least one of the following: numbers, text, symbols, and images; it may be used to characterize the positions of face key points in the facial image. The head pose information may likewise include, but is not limited to, at least one of the following: numbers, text, symbols, and images; it may be used to characterize the pose of the head corresponding to the facial image.
In the present embodiment, the first information generation model and the second information generation model may each be connected to the above feature extraction model, so that the first information generation model may generate the face key point information based on the image features output by the feature extraction model, and the second information generation model may generate the head pose information based on the same image features.
It should be noted that, here, both the first information generation model and the second information generation model may include a structure for generating information (for example, a fully connected layer), and of course may also include other structures (for example, an output layer); no limitation is imposed here.
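The shared-feature design of steps 402-403 can be sketched with plain functions. Everything below (the toy "features", the linear maps in the two heads) is an illustrative stand-in, not the patent's actual network:

```python
def extract_features(image):
    # Stand-in for the feature extraction model: reduce the "image"
    # (a flat list of pixel values) to a small shared feature tuple.
    n = len(image)
    mean = sum(image) / n
    var = sum((p - mean) ** 2 for p in image) / n
    return (mean, var)

def keypoint_head(features):
    # Stand-in for the first information generation model: maps the
    # shared features to an (x, y) key point (arbitrary linear map).
    mean, var = features
    return (0.5 * mean, 0.25 * var)

def pose_head(features):
    # Stand-in for the second information generation model: maps the
    # same shared features to (x, y, z) rotation angles.
    mean, var = features
    return (0.1 * mean, 0.2 * var, 0.05 * (mean + var))

image = [10.0, 20.0, 30.0, 40.0]
shared = extract_features(image)   # backbone runs once...
keypoints = keypoint_head(shared)  # ...and both heads reuse its output,
pose = pose_head(shared)           # which is the claimed efficiency gain.
```

In a real network the two heads would be trained jointly against the key-point and pose labels, but the dataflow is exactly this: one backbone pass, two cheap heads.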
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the method for generating information in the present embodiment highlights the steps of inputting the target facial image into the feature extraction model to obtain image features, and then inputting the obtained image features, as shared features, into the first information generation model and the second information generation model respectively to obtain the face key point information and the head pose information. The scheme described in the present embodiment can thus use shared features to generate the face key point information and the head pose information simultaneously, which reduces the complexity of the model and improves the efficiency of information generation.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an apparatus for generating information. The apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may specifically be applied to various electronic devices.
As shown in Fig. 5, the apparatus 500 for generating information of the present embodiment includes an image acquisition unit 501 and a first input unit 502. The image acquisition unit 501 is configured to obtain a target facial image. The first input unit 502 is configured to input the target facial image into a pre-trained face recognition model, and obtain the face key point information and head pose information corresponding to the target facial image, wherein the face key point information is used to characterize the positions of face key points in the facial image, the head pose information is used to characterize the pose of the head corresponding to the facial image, and the face recognition model is used to characterize the correspondence between a facial image and the face key point information and head pose information corresponding to the facial image.
In the present embodiment, the image acquisition unit 501 of the apparatus for generating information may obtain the target facial image through a wired or wireless connection. Here, the target facial image may be a facial image whose corresponding face key points and head pose are to be determined, and a facial image may be an image obtained by photographing a human face.
It should be noted that a face key point may be a point with clear semantics, which may be used to characterize a constituent part of the human face; for example, a face key point may be a point characterizing the nose, a point characterizing an eye, and so on. The head pose refers to the orientation of the head in a three-dimensional coordinate system. Here, the orientation of the head in the three-dimensional coordinate system may be characterized by the angles by which the head rotates about the X-axis, Y-axis, and Z-axis of the coordinate system.
In the present embodiment, based on the target facial image obtained by the image acquisition unit 501, the first input unit 502 may input the target facial image into the pre-trained face recognition model, and obtain the face key point information and head pose information corresponding to the target facial image.
Here, the face key point information may include, but is not limited to, at least one of the following: numbers, text, symbols, and images; it may be used to characterize the positions of face key points in the facial image. The head pose information may likewise include, but is not limited to, at least one of the following: numbers, text, symbols, and images; it may be used to characterize the pose of the head corresponding to the facial image.
The face recognition model may be used to characterize the correspondence between a facial image and the face key point information and head pose information corresponding to the facial image. Specifically, the face recognition model may be a model obtained by training an initial model (for example, a convolutional neural network, a residual network, etc.) based on training samples using a machine learning method.
In some optional implementations of the present embodiment, the face recognition model may include a feature extraction model, a first information generation model, and a second information generation model; and the first input unit 502 may include: a first input module (not shown in the figure), configured to input the target facial image into the feature extraction model, and obtain and output the image features of the target facial image; and a second input module (not shown in the figure), configured to input the image features into the first information generation model to obtain the face key point information corresponding to the target facial image, and to input the image features into the second information generation model to obtain the head pose information corresponding to the target facial image.
In some optional implementations of the present embodiment, the face recognition model may be obtained by training as follows: obtain an initial training sample set, wherein an initial training sample includes a sample facial image and sample face key point information annotated in advance for the sample facial image, the sample face key point information being used to characterize the positions of sample face key points in the sample facial image; for each initial training sample in the initial training sample set, determine, based on the sample face key point information in the initial training sample, the sample head pose information corresponding to the sample facial image in the initial training sample, and form a training sample from the determined sample head pose information and the initial training sample, wherein the sample head pose information may be used to characterize the pose of the sample head corresponding to the sample facial image; and, using a machine learning method, take the sample facial images of the formed training samples as input and the sample face key point information and sample head pose information corresponding to the input sample facial images as desired output, and train to obtain the face recognition model.
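One way the training step above could derive sample head pose information from annotated key points is to recover the in-plane (Z-axis) angle from the eye key points. This is a hypothetical illustration: the patent does not specify the derivation formula, and `roll_label_from_eyes` and the zero X/Y placeholders are our assumptions:

```python
import math

def roll_label_from_eyes(left_eye, right_eye):
    # In-plane rotation of the face: angle of the eye line relative
    # to the horizontal, in degrees.
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

def build_training_sample(sample_image, sample_keypoints):
    # Turn an initial training sample (image + annotated key points)
    # into a full training sample by attaching a derived pose label.
    roll = roll_label_from_eyes(sample_keypoints["left_eye"],
                                sample_keypoints["right_eye"])
    return {"image": sample_image,
            "keypoints": sample_keypoints,
            "pose": (0.0, 0.0, roll)}  # X/Y angles left at 0 for brevity

sample = build_training_sample("img_0001",
                               {"left_eye": (30, 50), "right_eye": (70, 50)})
```

A production pipeline would estimate all three angles (for example, by fitting the 2D key points to a 3D face template), but the principle of deriving the pose label from the key-point annotations is the same.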
In some optional implementations of the present embodiment, the image acquisition unit 501 may include: a video acquisition module (not shown in the figure), configured to obtain a face video obtained by photographing a target face; and an image selection module (not shown in the figure), configured to select a facial image from the facial image sequence corresponding to the face video as the target facial image.
In some optional implementations of the present embodiment, the apparatus 500 may further include: a region determination unit (not shown in the figure), configured to determine a facial image region from the target facial image; a region rotation unit (not shown in the figure), configured to rotate the facial image region in the target facial image based on the obtained face key point information corresponding to the target facial image, so that the facial image region meets a preset condition, and to generate a new target facial image based on the rotated facial image region; and a second input unit (not shown in the figure), configured to input the new target facial image into the face recognition model, and obtain the face key point information and head pose information corresponding to the new target facial image.
It may be understood that the units recorded in the apparatus 500 correspond to the steps of the method described with reference to Fig. 2. Accordingly, the operations, features, and beneficial effects described above for the method are equally applicable to the apparatus 500 and the units included therein, and will not be repeated here.
In the apparatus 500 provided by the above embodiment of the present application, the image acquisition unit 501 obtains a target facial image, and the first input unit 502 then inputs the target facial image into a pre-trained face recognition model to obtain the face key point information and head pose information corresponding to the target facial image, wherein the face key point information may be used to characterize the positions of face key points in the facial image, and the head pose information may be used to characterize the pose of the head corresponding to the facial image. In this way, the pre-trained face recognition model can be used to generate the face key point information and head pose information corresponding to a facial image simultaneously; that is, head pose estimation is achieved while performing face key point detection, which improves the efficiency of information generation.
Referring now to Fig. 6, a structural schematic diagram of a computer system 600 of an electronic device (for example, the terminal device/server shown in Fig. 1) suitable for implementing the embodiments of the present application is illustrated. The electronic device shown in Fig. 6 is merely an example, and should not impose any limitation on the functions or scope of use of the embodiments of the present application.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the system 600. The CPU 601, ROM 602, and RAM 603 are connected to each other through a bus 604, and an input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A driver 610 is also connected to the I/O interface 605 as needed, and a removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the driver 610 as needed, so that a computer program read therefrom can be installed into the storage section 608 as needed.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above-described functions defined in the method of the present application are executed. It should be noted that the computer-readable medium described herein may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above.
More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program, which may be used by or in connection with an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code contained on a computer-readable medium may be transmitted by any suitable medium, including but not limited to wireless, wire, optical cable, RF, or any suitable combination of the above.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each box in a flowchart or block diagram may represent a module, program segment, or part of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the boxes may occur in an order different from that noted in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should further be noted that each box in a block diagram and/or flowchart, and combinations of boxes therein, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented in software or in hardware. The described units may also be provided in a processor; for example, a processor may be described as including an image acquisition unit and a first input unit. The names of these units do not, under certain circumstances, constitute a limitation on the units themselves; for example, the image acquisition unit may also be described as "a unit for obtaining a target facial image".
As another aspect, the present application also provides a computer-readable medium, which may be included in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device. The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: obtain a target facial image; and input the target facial image into a pre-trained face recognition model to obtain the face key point information and head pose information corresponding to the target facial image, wherein the face key point information is used to characterize the positions of face key points in the facial image, the head pose information is used to characterize the pose of the head corresponding to the facial image, and the face recognition model is used to characterize the correspondence between a facial image and the face key point information and head pose information corresponding to the facial image.
The above description is only a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example, a technical solution formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present application.

Claims (12)

1. A method for generating information, comprising:
obtaining a target facial image;
inputting the target facial image into a pre-trained face recognition model, and obtaining face key point information and head pose information corresponding to the target facial image, wherein the face key point information is used to characterize positions of face key points in a facial image, the head pose information is used to characterize a pose of a head corresponding to the facial image, and the face recognition model is used to characterize a correspondence between a facial image and the face key point information and head pose information corresponding to the facial image.
2. The method according to claim 1, wherein the face recognition model comprises a feature extraction model, a first information generation model, and a second information generation model; and
the inputting the target facial image into a pre-trained face recognition model, and obtaining face key point information and head pose information corresponding to the target facial image comprises:
inputting the target facial image into the feature extraction model, and obtaining and outputting image features of the target facial image;
inputting the image features into the first information generation model to obtain the face key point information corresponding to the target facial image, and inputting the image features into the second information generation model to obtain the head pose information corresponding to the target facial image.
3. The method according to claim 1, wherein the face recognition model is obtained by training as follows:
obtaining an initial training sample set, wherein an initial training sample comprises a sample facial image and sample face key point information annotated in advance for the sample facial image, the sample face key point information being used to characterize positions of sample face key points in the sample facial image;
for an initial training sample in the initial training sample set, determining, based on the sample face key point information in the initial training sample, sample head pose information corresponding to the sample facial image in the initial training sample, and forming a training sample from the determined sample head pose information and the initial training sample, wherein the sample head pose information is used to characterize a pose of a sample head corresponding to the sample facial image;
using a machine learning method, taking the sample facial images of the formed training samples as input and the sample face key point information and sample head pose information corresponding to the input sample facial images as desired output, and training to obtain the face recognition model.
4. The method according to claim 1, wherein the obtaining a target facial image comprises:
obtaining a face video obtained by photographing a target face;
selecting a facial image from a facial image sequence corresponding to the face video as the target facial image.
5. The method according to one of claims 1-4, wherein after the inputting the target facial image into a pre-trained face recognition model and obtaining face key point information and head pose information corresponding to the target facial image, the method further comprises:
determining a facial image region from the target facial image;
rotating the facial image region in the target facial image based on the obtained face key point information corresponding to the target facial image, so that the facial image region meets a preset condition, and generating a new target facial image based on the rotated facial image region;
inputting the new target facial image into the face recognition model, and obtaining face key point information and head pose information corresponding to the new target facial image.
6. An apparatus for generating information, comprising:
an image acquisition unit, configured to obtain a target facial image;
a first input unit, configured to input the target facial image into a pre-trained face recognition model, and obtain face key point information and head pose information corresponding to the target facial image, wherein the face key point information is used to characterize positions of face key points in a facial image, the head pose information is used to characterize a pose of a head corresponding to the facial image, and the face recognition model is used to characterize a correspondence between a facial image and the face key point information and head pose information corresponding to the facial image.
7. The apparatus according to claim 6, wherein the face recognition model comprises a feature extraction model, a first information generation model, and a second information generation model; and
the first input unit comprises:
a first input module, configured to input the target facial image into the feature extraction model, and obtain and output image features of the target facial image;
a second input module, configured to input the image features into the first information generation model to obtain the face key point information corresponding to the target facial image, and to input the image features into the second information generation model to obtain the head pose information corresponding to the target facial image.
8. The apparatus according to claim 6, wherein the face recognition model is obtained by training as follows:
obtaining an initial training sample set, wherein an initial training sample comprises a sample facial image and sample face key point information annotated in advance for the sample facial image, the sample face key point information being used to characterize positions of sample face key points in the sample facial image;
for an initial training sample in the initial training sample set, determining, based on the sample face key point information in the initial training sample, sample head pose information corresponding to the sample facial image in the initial training sample, and forming a training sample from the determined sample head pose information and the initial training sample, wherein the sample head pose information is used to characterize a pose of a sample head corresponding to the sample facial image;
using a machine learning method, taking the sample facial images of the formed training samples as input and the sample face key point information and sample head pose information corresponding to the input sample facial images as desired output, and training to obtain the face recognition model.
9. The apparatus according to claim 6, wherein the image acquisition unit comprises:
a video acquisition module, configured to obtain a face video obtained by photographing a target face;
an image selection module, configured to select a facial image from a facial image sequence corresponding to the face video as the target facial image.
10. The apparatus according to one of claims 6-9, wherein the apparatus further comprises:
a region determination unit, configured to determine a facial image region from the target facial image;
a region rotation unit, configured to rotate the facial image region in the target facial image based on the obtained face key point information corresponding to the target facial image, so that the facial image region meets a preset condition, and to generate a new target facial image based on the rotated facial image region;
a second input unit, configured to input the new target facial image into the face recognition model, and obtain face key point information and head pose information corresponding to the new target facial image.
11. An electronic device, comprising:
one or more processors;
a storage device on which one or more programs are stored,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-5.
12. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-5.
CN201810877023.9A 2018-08-03 2018-08-03 Method and apparatus for generating information Pending CN108985257A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810877023.9A CN108985257A (en) 2018-08-03 2018-08-03 Method and apparatus for generating information

Publications (1)

Publication Number Publication Date
CN108985257A true CN108985257A (en) 2018-12-11

Family

ID=64555284

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810877023.9A Pending CN108985257A (en) 2018-08-03 2018-08-03 Method and apparatus for generating information

Country Status (1)

Country Link
CN (1) CN108985257A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050265604A1 (en) * 2004-05-27 2005-12-01 Mayumi Yuasa Image processing apparatus and method thereof
CN103400105A (en) * 2013-06-26 2013-11-20 东南大学 Method for recognizing non-frontal facial expressions based on pose normalization
CN105760836A (en) * 2016-02-17 2016-07-13 厦门美图之家科技有限公司 Multi-angle face alignment method based on deep learning and system thereof and photographing terminal
CN107038422A (en) * 2017-04-20 2017-08-11 杭州电子科技大学 Fatigue state recognition method based on spatial-geometry-constrained deep learning
CN107590482A (en) * 2017-09-29 2018-01-16 百度在线网络技术(北京)有限公司 information generating method and device
CN108197604A (en) * 2018-01-31 2018-06-22 上海敏识网络科技有限公司 Fast face positioning and tracing method based on embedded device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Payment & Clearing Association of China: "Network Payment Market Research and Selected Cases (2015-2016)", 31 August 2017, China Financial Publishing House *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111797791A (en) * 2018-12-25 2020-10-20 上海智臻智能网络科技股份有限公司 Human body posture recognition method and device
CN109754464A (en) * 2019-01-31 2019-05-14 北京字节跳动网络技术有限公司 Method and apparatus for generating information
CN109754464B (en) * 2019-01-31 2020-03-27 北京字节跳动网络技术有限公司 Method and apparatus for generating information
CN110046571A (en) * 2019-04-15 2019-07-23 北京字节跳动网络技术有限公司 Method and apparatus for identifying age
CN110197230A (en) * 2019-06-03 2019-09-03 北京字节跳动网络技术有限公司 Method and apparatus for training a model
CN110427864A (en) * 2019-07-29 2019-11-08 腾讯科技(深圳)有限公司 A kind of image processing method, device and electronic equipment
CN110427864B (en) * 2019-07-29 2023-04-21 腾讯科技(深圳)有限公司 Image processing method and device and electronic equipment
CN110517214A (en) * 2019-08-28 2019-11-29 北京百度网讯科技有限公司 Method and apparatus for generating image
CN110517214B (en) * 2019-08-28 2022-04-12 北京百度网讯科技有限公司 Method and apparatus for generating image
JP2022526148A (en) * 2019-09-18 2022-05-23 ベイジン センスタイム テクノロジー デベロップメント カンパニー, リミテッド Video generation methods, devices, electronic devices and computer storage media
WO2021052224A1 (en) * 2019-09-18 2021-03-25 北京市商汤科技开发有限公司 Video generation method and apparatus, electronic device, and computer storage medium
CN111145110B (en) * 2019-12-13 2021-02-19 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111145110A (en) * 2019-12-13 2020-05-12 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113012042A (en) * 2019-12-20 2021-06-22 海信集团有限公司 Display device, virtual photo generation method, and storage medium
CN111241961A (en) * 2020-01-03 2020-06-05 精硕科技(北京)股份有限公司 Face detection method and device and electronic equipment
CN111241961B (en) * 2020-01-03 2023-12-08 北京秒针人工智能科技有限公司 Face detection method and device and electronic equipment
CN111695431A (en) * 2020-05-19 2020-09-22 深圳禾思众成科技有限公司 Face recognition method, face recognition device, terminal equipment and storage medium
CN111814573A (en) * 2020-06-12 2020-10-23 深圳禾思众成科技有限公司 Face information detection method and device, terminal equipment and storage medium
CN111968203A (en) * 2020-06-30 2020-11-20 北京百度网讯科技有限公司 Animation driving method, animation driving device, electronic device, and storage medium
CN111968203B (en) * 2020-06-30 2023-11-14 北京百度网讯科技有限公司 Animation driving method, device, electronic equipment and storage medium
CN112560705A (en) * 2020-12-17 2021-03-26 北京捷通华声科技股份有限公司 Face detection method and device and electronic equipment
CN113034580A (en) * 2021-03-05 2021-06-25 北京字跳网络技术有限公司 Image information detection method and device and electronic equipment
CN113034580B (en) * 2021-03-05 2023-01-17 北京字跳网络技术有限公司 Image information detection method and device and electronic equipment
CN114399803A (en) * 2021-11-30 2022-04-26 际络科技(上海)有限公司 Face key point detection method and device

Similar Documents

Publication Publication Date Title
CN108985257A (en) Method and apparatus for generating information
CN109858445A (en) Method and apparatus for generating model
CN108898185A (en) Method and apparatus for generating image recognition model
CN109101919A (en) Method and apparatus for generating information
CN108830235A (en) Method and apparatus for generating information
CN109086719A (en) Method and apparatus for output data
CN110503703A (en) Method and apparatus for generating image
CN109145783A (en) Method and apparatus for generating information
CN109902659A (en) Method and apparatus for handling human body image
CN108470328A (en) Method and apparatus for handling image
CN108229419A (en) For clustering the method and apparatus of image
CN109829432A (en) Method and apparatus for generating information
CN108491823A (en) Method and apparatus for generating eye recognition model
CN108345387A (en) Method and apparatus for output information
CN108960110A (en) Method and apparatus for generating information
CN109977839A (en) Information processing method and device
CN109934191A (en) Information processing method and device
CN108062544A (en) Method and apparatus for face liveness detection
CN109800730A (en) Method and apparatus for generating a model for generating avatars
CN109241934A (en) Method and apparatus for generating information
CN110446066A (en) Method and apparatus for generating video
CN110009059A (en) Method and apparatus for generating model
CN109117758A (en) Method and apparatus for generating information
CN109544444A (en) Image processing method, device, electronic equipment and computer storage medium
CN108491812A (en) The generation method and device of human face recognition model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination