CN110188711A - Method and apparatus for output information - Google Patents

Method and apparatus for output information

Info

Publication number
CN110188711A
Authority
CN
China
Prior art keywords
side face
image
target
face
location information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910477387.2A
Other languages
Chinese (zh)
Inventor
邓启力 (Deng Qili)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd filed Critical Beijing ByteDance Network Technology Co Ltd
Priority to CN201910477387.2A priority Critical patent/CN110188711A/en
Publication of CN110188711A publication Critical patent/CN110188711A/en
Pending legal-status Critical Current

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation

Abstract

Embodiments of the invention disclose a method and apparatus for outputting information, and a method and apparatus for processing video. A specific embodiment of the method for outputting information includes: acquiring a target face image; generating, based on a pre-trained side-face location model, side-face location information for the side-face region in the target face image, where the side-face location model is used to determine the position of the side-face region in an input image, the side-face region includes the facial pixels corresponding to the cheekbone, and the side-face location information includes the location information of those pixels; and outputting the generated side-face location information. This embodiment can determine the position of the cheekbone object in a face image, which facilitates more refined processing of the face image based on that position and enriches the ways in which face images can be processed.

Description

Method and apparatus for output information
Technical field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to a method and apparatus for outputting information, and a method and apparatus for processing video.
Background
In general, electronic devices such as computers recognize an image by performing semantic analysis of the image. Image semantics are divided into a visual layer, an object layer and a conceptual layer. The visual layer is the commonly understood bottom layer, i.e. color, texture, shape and so on; these features are collectively called low-level image semantics. The object layer typically covers attribute features, for example the state of a certain object at a certain moment. The conceptual layer is what the image expresses that is closest to human understanding. For example, if an image contains human body objects (such as face objects and non-face objects), a computer needs to analyze each image region to determine the positions of the face objects and non-face objects the image contains. In the prior art, there is a demand for locating objects in a face image, such as the cheekbone object.
In addition, existing methods often perform face-thinning processing on a face image based on a user's gestures and movements.
Summary of the invention
The present disclosure proposes a method and apparatus for outputting information, and a method and apparatus for processing video.
In a first aspect, an embodiment of the present disclosure provides a method for outputting information, the method including: acquiring a target face image; generating, based on a pre-trained side-face location model, side-face location information for the side-face region in the target face image, where the side-face location model is used to determine the position of the side-face region in an input image, the side-face region includes the facial pixels corresponding to the cheekbone, and the side-face location information includes the location information of those facial pixels; and outputting the generated side-face location information.
In some embodiments, the method further includes: performing face-thinning processing on the target face image based on the generated side-face location information to obtain a thinned face image.
In some embodiments, the method further includes: presenting the thinned face image in place of the target face image.
In some embodiments, the side-face region further includes facial pixels corresponding to at least one of the following: the temple, the mandible, the trisection points between the cheekbone and the temple, and the trisection points between the cheekbone and the mandible; and the side-face location information further includes the location information of these additional facial pixels.
In some embodiments, performing face-thinning processing on the target face image based on the generated side-face location information to obtain a thinned face image includes: moving each facial pixel included in the side-face region of the target face image in the direction perpendicular to a target central axis and toward that axis, so as to perform the face-thinning processing and obtain the thinned face image; where the target central axis is the straight line through the point indicating the glabella (the area between the eyebrows) and the point indicating the philtrum (Renzhong acupoint) in the face image.
In some embodiments, among the moved facial pixels, two facial pixels that are symmetric about the target central axis are moved by equal distances.
In some embodiments, the distance each moved facial pixel travels toward the target central axis is positively correlated with that pixel's distance from the axis.
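The geometry described in the embodiments above can be sketched as follows: each side-face pixel moves perpendicular to the target central axis (the line through the glabella point and the philtrum point) and toward it, pixels symmetric about the axis move equal distances, and the moved distance grows with the pixel's distance from the axis. This is a minimal NumPy sketch under assumed landmark coordinates; the linear scaling factor `strength` is an illustrative choice, not taken from the patent.

```python
import numpy as np

def thin_face_displacement(points, glabella, philtrum, strength=0.2):
    """Move each side-face pixel toward the target central axis.

    The axis is the line through the glabella point and the philtrum
    point; each point moves perpendicular to that line, by a distance
    positively correlated with its distance from the axis (here a
    simple linear fraction `strength`, an assumed choice).
    """
    p0 = np.asarray(glabella, dtype=float)
    d = np.asarray(philtrum, dtype=float) - p0
    d /= np.linalg.norm(d)                  # unit vector along the axis
    pts = np.asarray(points, dtype=float)
    rel = pts - p0
    # Perpendicular component of each point relative to the axis.
    along = rel @ d
    perp = rel - np.outer(along, d)
    # Move each point toward the axis by strength * its perpendicular offset.
    return pts - strength * perp

# Two pixels symmetric about the vertical axis x = 100 move equal distances
# (the left one right by 5, the right one left by 5).
moved = thin_face_displacement([(80.0, 50.0), (120.0, 50.0)],
                               glabella=(100.0, 0.0), philtrum=(100.0, 200.0),
                               strength=0.25)
```

Because the movement scales with the perpendicular offset, a pixel twice as far from the axis moves twice as far, matching the positive-correlation embodiment.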
In a second aspect, an embodiment of the present disclosure provides a method for processing video, the method including: acquiring the video currently being shot and presented as a target video; selecting a video frame containing a face object from the target video as a target face image, and performing the following processing steps: generating, based on a pre-trained side-face location model, side-face location information for the side-face region in the target face image, where the side-face location model is used to determine the position of the side-face region in an input image, the side-face region includes the facial pixels corresponding to the cheekbone, and the side-face location information includes the location information of those pixels; performing face-thinning processing on the target face image based on the generated side-face location information to obtain a thinned face image; and presenting the thinned face image in place of the target face image.
In some embodiments, the method further includes: in response to the current target face image not being the last frame of the target video, taking the face-containing video frame in the target video that follows the current target face image as a new target face image, and continuing to perform the processing steps based on the new target face image.
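As a rough illustration of the video-processing flow in this aspect, the sketch below iterates over the frames of a target video, applies the side-face location and thinning steps only to frames containing a face object, and substitutes the processed frame for presentation. `detect_face`, `locate_side_face` and `thin_face` are hypothetical stand-ins for the face detector, the pre-trained side-face location model and the face-thinning step; none of them are named in the patent.

```python
def process_video(frames, detect_face, locate_side_face, thin_face):
    """Replace every face-bearing frame of the target video with its
    thinned version; frames without a face object pass through unchanged."""
    presented = []
    for frame in frames:
        if detect_face(frame):
            info = locate_side_face(frame)   # side-face location information
            frame = thin_face(frame, info)   # face-thinning processing
        presented.append(frame)              # substitute for presentation
    return presented
```

Iterating in order until the list is exhausted corresponds to the optional implementation of advancing to the next face-containing frame until the last frame of the target video.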
In a third aspect, an embodiment of the present disclosure provides an apparatus for outputting information, the apparatus including: a first acquisition unit configured to acquire a target face image; a generation unit configured to generate, based on a pre-trained side-face location model, side-face location information for the side-face region in the target face image, where the side-face location model is used to determine the position of the side-face region in an input image, the side-face region includes the facial pixels corresponding to the cheekbone, and the side-face location information includes the location information of those pixels; and an output unit configured to output the generated side-face location information.
In some embodiments, the apparatus further includes: a processing unit configured to perform face-thinning processing on the target face image based on the generated side-face location information to obtain a thinned face image.
In some embodiments, the apparatus further includes: a display unit configured to present the thinned face image in place of the target face image.
In some embodiments, the side-face region further includes facial pixels corresponding to at least one of the following: the temple, the mandible, the trisection points between the cheekbone and the temple, and the trisection points between the cheekbone and the mandible; and the side-face location information further includes the location information of these additional facial pixels.
In some embodiments, the processing unit includes: a moving module configured to move each facial pixel included in the side-face region of the target face image in the direction perpendicular to the target central axis and toward that axis, so as to perform the face-thinning processing and obtain the thinned face image; where the target central axis is the straight line through the point indicating the glabella and the point indicating the philtrum in the face image.
In some embodiments, among the moved facial pixels, two facial pixels that are symmetric about the target central axis are moved by equal distances.
In some embodiments, the distance each moved facial pixel travels toward the target central axis is positively correlated with that pixel's distance from the axis.
In a fourth aspect, an embodiment of the present disclosure provides an apparatus for processing video, the apparatus including: a second acquisition unit configured to acquire the video currently being shot and presented as a target video; and an execution unit configured to select a video frame containing a face object from the target video as a target face image and perform the following processing steps: generating, based on a pre-trained side-face location model, side-face location information for the side-face region in the target face image, where the side-face location model is used to determine the position of the side-face region in an input image, the side-face region includes the facial pixels corresponding to the cheekbone, and the side-face location information includes the location information of those pixels; performing face-thinning processing on the target face image based on the generated side-face location information to obtain a thinned face image; and presenting the thinned face image in place of the target face image.
In some embodiments, the apparatus further includes: a continuation unit configured to, in response to the current target face image not being the last frame of the target video, take the face-containing video frame in the target video that follows the current target face image as a new target face image and continue to perform the processing steps based on the new target face image.
In a fifth aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; and a storage device on which one or more programs are stored, where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method for outputting information of any embodiment of the first aspect, or the method for processing video of any embodiment of the second aspect.
In a sixth aspect, an embodiment of the present disclosure provides a computer-readable medium on which a computer program is stored, where the program, when executed by a processor, implements the method for outputting information of any embodiment of the first aspect, or the method for processing video of any embodiment of the second aspect.
The method and apparatus for outputting information and the method and apparatus for processing video provided by embodiments of the present disclosure acquire a target face image; then generate, based on a pre-trained side-face location model, side-face location information for the side-face region in the target face image, where the side-face location model is used to determine the position of the side-face region in an input image, the side-face region includes the facial pixels corresponding to the cheekbone, and the side-face location information includes the location information of those pixels; and finally output the generated side-face location information. The position of the cheekbone object in the face image is thereby determined, which facilitates more refined processing of the face image based on that position and enriches the ways in which face images can be processed.
Brief description of the drawings
Other features, objects and advantages of the present disclosure will become more apparent upon reading the following detailed description of non-restrictive embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which an embodiment of the present disclosure may be applied;
Fig. 2 is a flowchart of one embodiment of the method for outputting information according to the present disclosure;
Fig. 3 is a schematic diagram of an application scenario of the method for outputting information according to the present disclosure;
Fig. 4 is a flowchart of another embodiment of the method for outputting information according to the present disclosure;
Fig. 5 is a flowchart of one embodiment of the method for processing video according to the present disclosure;
Fig. 6 is a structural schematic diagram of one embodiment of the apparatus for outputting information according to the present disclosure;
Fig. 7 is a structural schematic diagram of one embodiment of the apparatus for processing video according to the present disclosure;
Fig. 8 is a structural schematic diagram of a computer system adapted to implement an electronic device of an embodiment of the present disclosure.
Detailed description of embodiments
The present disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the related invention, not to limit it. It should also be noted that, for ease of description, only the parts relevant to the related invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments in the present disclosure and the features in the embodiments may be combined with one another. The present disclosure is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for outputting information or the apparatus for outputting information, or of the method for processing video or the apparatus for processing video, of the present disclosure may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102 and 103, a network 104 and a server 105. The network 104 serves as the medium providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send data (such as face images). Various client applications may be installed on the terminal devices 101, 102, 103, such as video playback software, news applications, image processing applications, web browsers, shopping applications, search applications, instant messaging tools, email clients and social platform software.
The terminal devices 101, 102, 103 may be hardware or software. When they are hardware, they may be various electronic devices, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers and desktop computers. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is imposed here.
The server 105 may be a server providing various services, such as a background server that processes face images obtained from the terminal devices 101, 102, 103. The background server may analyze and otherwise process data such as the received face image (for example, a target face image), for instance to determine the side-face location information of the side-face region in the face image, and output the processing result (such as the side-face location information). As an example, the server 105 may be a cloud server or a physical server.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is imposed here.
It should also be noted that the method for outputting information provided by embodiments of the present disclosure may be executed by the server, by the terminal device, or by the server and the terminal device cooperating with each other. Correspondingly, the parts included in the apparatus for outputting information (such as the units, sub-units, modules and sub-modules) may all be set in the server, may all be set in the terminal device, or may be set in the server and the terminal device respectively. Likewise, the method for processing video provided by embodiments of the present disclosure may be executed by the server, by the terminal device, or by the server and the terminal device cooperating with each other; correspondingly, the parts included in the apparatus for processing video (such as the units, sub-units, modules and sub-modules) may all be set in the server, may all be set in the terminal device, or may be set in the server and the terminal device respectively.
It should be understood that the numbers of terminal devices, networks and servers in Fig. 1 are merely illustrative. Any number of terminal devices, networks and servers may be provided according to implementation needs. For example, when the electronic device on which the method for outputting information runs does not need to transmit data to or from other electronic devices during execution of the method, the system architecture may include only that electronic device (such as a server or a terminal device).
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for outputting information according to the present disclosure is shown. The method for outputting information includes the following steps:
Step 201: acquire a target face image.
In this embodiment, the executing body of the method for outputting information (such as the server or terminal device shown in Fig. 1) may acquire the target face image from another electronic device or locally, through a wired or wireless connection.
The target face image may be any face image for which the side-face location information of its side-face region is to be determined. The target face image may be an image of a human face or an image of an animal's face.
Here, a technician may set the specific location of the side-face region according to actual needs. For example, the side-face region may be the facial area that light reaches when light is shone on the left and right sides of a person's face. As another example, the side-face region may be the quadrilateral area formed by the point characterizing the temple, the point characterizing the cheekbone, the point characterizing the mandible and the point characterizing the Tinggong acupoint in the face image.
The side-face location information may be information characterizing the position of the side-face region in the face image.
Step 202: generate, based on a pre-trained side-face location model, the side-face location information of the side-face region in the target face image.
In this embodiment, the above executing body may generate the side-face location information of the side-face region in the target face image based on the pre-trained side-face location model. The side-face location model is used to determine the position of the side-face region in an input image. The side-face region includes the facial pixels corresponding to the cheekbone, and the side-face location information includes the location information of those pixels and may be used to indicate the position of the side-face region in the face image. Here, the cheekbone may correspond to a single facial pixel or to multiple facial pixels.
It should be noted that the cheekbone object described in some embodiments of the present disclosure means the facial pixels corresponding to the cheekbone.
Here, the side-face location model may be a convolutional neural network model obtained by training with a machine learning algorithm, or it may be a two-dimensional table or database that associatively stores images and the side-face location information of the side-face regions in those images. The facial pixels corresponding to the cheekbone may be the image of the raised skin over the cheekbone in the target face image. It should be understood that these pixels lie on the skin object in the face image, not on an image of the cheekbone itself; in fact, because the skin occludes it, the cheekbone is usually not visible in a face image. The skin object may be the image of the skin in the face object. In addition, all facial pixels described in embodiments of the present disclosure lie on the skin object in the face image.
As an example, the above executing body may perform step 202 in the following way:
Input the target face image into the side-face location model to obtain the side-face location information of the side-face region in the target face image. In this example, the side-face location model is used to determine the position of the side-face region in the input face image. The side-face location model may be a convolutional neural network model trained with a machine learning algorithm on training samples that include face images and the side-face location information of the side-face regions in those images, or it may be a two-dimensional table or database that associatively stores face images and the side-face location information of the side-face regions in the images.
As another example, the above executing body may also perform step 202 in the following way:
First, extract the image region between the eyebrow object and the chin object in the target face image. The eyebrow object may be the image of the eyebrows in the face image, and the chin object may be the image of the chin in the face image.
Here, the above executing body may extract the image region between the eyebrow object and the chin object in various ways. For example, the part of the face image below the eyebrow object may be determined as that region. As another example, the target face image may be input into a pre-trained extraction model to obtain the region between the eyebrow object and the chin object; the extraction model is used to extract that region from an input face image and may illustratively be a convolutional neural network trained with a machine learning algorithm.
Then, input the extracted image region into the side-face location model to obtain the side-face location information of the side-face region in the extracted image region. In this example, the side-face location model is used to determine the position of the side-face region in an image region between an eyebrow object and a chin object. The side-face location model may be a convolutional neural network model trained with a machine learning algorithm on training samples that include such image regions and the side-face location information of the side-face regions in them, or it may be a two-dimensional table or database that associatively stores such image regions and the corresponding side-face location information.
Finally, determine the side-face location information of the side-face region in the target face image based on the side-face location information of the side-face region in the extracted image region.
It can be appreciated that since the extracted image region comes from the target face image, the side-face location information of the side-face region in the target face image can be determined from that of the side-face region in the extracted image region; details are not repeated here.
It can also be appreciated that since the target face image may contain much useless information (such as pixels outside the side-face region), first extracting the image region containing the side-face region from the face image, as in this example, can reduce the computing resources the executing body consumes when using the side-face location model.
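The crop-first variant above can be sketched as: run the locator only on the eyebrow-to-chin sub-image, then shift the resulting coordinates back by the crop offset to obtain positions in the full image. `extract_region` and `locate_side_face` are hypothetical stand-ins for the extraction model and the side-face location model described in the text.

```python
import numpy as np

def locate_in_full_image(image, extract_region, locate_side_face):
    """Run the side-face locator on the eyebrow-to-chin crop only.

    `extract_region` returns (crop, (x0, y0)): the sub-image and its
    top-left offset within the full image. `locate_side_face` returns
    pixel coordinates inside the crop; adding the offset maps them
    back to full-image coordinates.
    """
    crop, (x0, y0) = extract_region(image)
    local = np.asarray(locate_side_face(crop), dtype=float)
    # Map crop-local coordinates back to full-image coordinates.
    return local + np.array([x0, y0], dtype=float)
```

The locator never sees pixels outside the crop, which is the compute saving the example in the text refers to.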
In some optional implementations of this embodiment, the side-face region further includes facial pixels corresponding to at least one of the following: the temple, the mandible, the trisection points between the cheekbone and the temple, and the trisection points between the cheekbone and the mandible; and the side-face location information further includes the location information of these additional facial pixels. Each of the above items may correspond to one facial pixel or to multiple facial pixels.
It can be appreciated that the more facial pixels the side-face location information includes, the more accurately the side-face region in the face image can be located. This helps achieve more refined image processing (such as the subsequent face-thinning processing).
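The trisection points mentioned in this optional implementation (the two points dividing the cheekbone-temple and cheekbone-mandible segments into three equal parts) follow directly from the two endpoint landmarks; the coordinates below are made up for illustration.

```python
import numpy as np

def trisection_points(a, b):
    """Return the two points dividing segment a-b into three equal parts."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return a + (b - a) / 3.0, a + 2.0 * (b - a) / 3.0

# Illustrative (made-up) landmark coordinates for cheekbone and temple.
cheekbone, temple = (90.0, 120.0), (120.0, 60.0)
p1, p2 = trisection_points(cheekbone, temple)
# p1 -> (100.0, 100.0), p2 -> (110.0, 80.0)
```

The same helper applied to the cheekbone and mandible landmarks yields the other pair of trisection points.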
Step 203: output the generated side-face location information.
In this embodiment, the above executing body may output the generated side-face location information.
Here, the above executing body may output the side-face location information by sending it to an electronic device communicatively connected to it, or by displaying it on a local display.
In some optional implementations of this embodiment, the above executing body may also perform the following step: perform face-thinning processing on the target face image based on the generated side-face location information to obtain a thinned face image.
As an example, the above executing body may, based on GPUImage, change the coordinates and colors of each pixel in the side-face region indicated by the side-face location information, thereby performing the face-thinning processing of the target face image and obtaining the thinned face image. GPUImage is an open-source framework for processing images or video on a graphics processing unit.
In some optional implementations of this embodiment, the above executing body may also perform the following step: present the thinned face image in place of the target face image.
Here, before the above executing body presents the thinned face image, it may or may not have presented the target face image; this embodiment does not limit this.
It should be understood that this optional implementation can present the thinned face image in place of the target face image, thereby enriching the ways in which images are presented.
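GPUImage itself is an iOS/Objective-C framework; as a framework-agnostic analogue of the per-pixel coordinate change described above, the sketch below warps an image with an inverse mapping that pulls columns toward a vertical central axis. The axis position and pull strength are illustrative assumptions, not parameters from the patent.

```python
import numpy as np

def thin_face_warp(image, axis_x, strength=0.2):
    """Horizontally pull image content toward the vertical line x = axis_x.

    Backward warping: output column x samples the source at
    x + strength * (x - axis_x), so content appears squeezed toward
    the axis (an approximation of the forward shrink for small strength).
    """
    h, w = image.shape[:2]
    xs = np.arange(w, dtype=float)
    src_x = np.clip(np.round(xs + strength * (xs - axis_x)).astype(int), 0, w - 1)
    return image[:, src_x]
```

A production implementation would interpolate between source pixels and restrict the warp to the located side-face region rather than whole columns; this sketch only shows the coordinate change itself.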
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for outputting information according to this embodiment. In the application scenario of Fig. 3, the server 301 first acquires a target face image 3011. Then, based on a pre-trained side-face location model 3012, the server 301 generates the side-face location information 3013 of the side-face region in the target face image 3011, where the side-face location model is used to determine the position of the side-face region in an input image, the side-face region includes the facial pixels corresponding to the cheekbone, and the side-face location information includes the location information of those pixels. Finally, the server 301 outputs the generated side-face location information 3014 (in the figure, the server 301 sends it to the terminal device 302 communicatively connected to it; the side-face location information includes the location information 3013 of the facial pixels corresponding to the cheekbone).
The method for outputting information provided by the above embodiment of the present disclosure acquires a target face image; then generates, based on a pre-trained side-face location model, the side-face location information of the side-face region in the target face image, where the side-face location model is used to determine the position of the side-face region in an input image, the side-face region includes the facial pixels corresponding to the cheekbone, and the side-face location information includes the location information of those pixels; and finally outputs the generated side-face location information. The position of the cheekbone object in the face image is thereby determined, which facilitates more refined processing of the face image based on that position and enriches the ways in which face images can be processed.
With further reference to Fig. 4, a process 400 of another embodiment of the method for outputting information is illustrated. The process 400 of the method for outputting information includes the following steps:

Step 401: obtain a target facial image.

In the present embodiment, step 401 is substantially the same as step 201 in the embodiment corresponding to Fig. 2, and is not described again here.

Step 402: based on a pre-trained side face location model, generate side face location information for the side face region in the target facial image.

In the present embodiment, step 402 is substantially the same as step 202 in the embodiment corresponding to Fig. 2, and is not described again here.

Step 403: output the generated side face location information.

In the present embodiment, step 403 is substantially the same as step 203 in the embodiment corresponding to Fig. 2, and is not described again here.

Step 404: move each facial pixel included in the side face region of the target facial image in a direction perpendicular to the target central axis and toward the target central axis, so as to perform face-thinning processing and obtain a thinned face image.

In the present embodiment, the executing subject of the method for outputting information (for example, the server or terminal device shown in Fig. 1) may move each facial pixel included in the side face region of the target facial image obtained in step 401 in a direction perpendicular to the target central axis and toward the target central axis, thereby performing face-thinning processing and obtaining a thinned face image. Here, the target central axis is the straight line through the point indicating the glabella (the area between the eyebrows) and the point indicating the philtrum (the renzhong acupoint) in the facial image.
It will be appreciated that, in performing step 404, the executing subject need not determine the movement path of each facial pixel; it need only attend to each pixel's displacement. As long as the displacement of a facial pixel indicates movement perpendicular to the target central axis and toward it, the pixel's movement path, whether a straight line, a curve, or a polyline, falls within the scope claimed by the present embodiment.
It should be understood that, in addition to moving facial pixels during face-thinning, the executing subject usually also needs to perform liquefaction processing in order to determine the position of the facial contour in the facial image. In addition, the executing subject usually also needs to apply image deformation (for example, a similarity-preserving image transformation and/or a local scaling transformation) to the face object in the facial image, so as to avoid excessive deformation of the face object and to reduce the number of pixels the face object occupies in the thinned image. Furthermore, the executing subject usually also needs to apply interpolation to the facial image so as to fill in the positions of pixels left vacant after movement. In practice, these processing effects (pixel movement, liquefaction, image deformation, and interpolation) can be realized by multiplying the image matrix of the target facial image by one or more predetermined matrices. It should be understood that the one or more predetermined matrices can be used to implement the mapping from the target facial image to the thinned face image.
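As a concrete illustration of movement plus hole-filling, the following is a minimal sketch, not the patent's matrix-multiplication implementation: it pulls pixels in an assumed side face region horizontally toward a vertical central axis using an inverse mapping with nearest-neighbour sampling, which also fills the positions vacated by the moved pixels. The `strength` parameter and the choice of region mask are assumptions for illustration.

```python
import numpy as np

def thin_face(image, region_mask, axis_col, strength=0.5):
    # Pull pixels inside `region_mask` horizontally toward the vertical
    # central axis at column `axis_col`. Implemented as an inverse mapping
    # with nearest-neighbour sampling, which also fills the positions
    # vacated by the moved pixels (the role the text above assigns to
    # interpolation after movement).
    h, w = image.shape[:2]
    out = image.copy()
    for y in range(h):
        for x in range(w):
            if region_mask[y, x]:
                # Each output pixel samples a source pixel farther from the
                # axis, so image content appears shifted toward the axis.
                src_x = int(round(axis_col + (x - axis_col) * (1 + strength)))
                out[y, x] = image[y, min(max(src_x, 0), w - 1)]
    return out

img = np.tile(np.arange(10, dtype=np.float64), (4, 1))  # columns hold 0..9
mask = np.zeros((4, 10), dtype=bool)
mask[:, 7:] = True                # assume the right strip is the side face
thinned = thin_face(img, mask, axis_col=5)
print(thinned[0])  # content from column 8 now appears at column 7
```

Inverse mapping is used here because it guarantees every output pixel gets a value; a forward mapping (moving each source pixel to its destination) would leave holes that a separate interpolation pass would have to fill, which is the arrangement the paragraph above describes.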
As an example, the moving distance of each moved facial pixel may be a predetermined distance value, or a value randomly generated within a predetermined distance range.

In some optional implementations of the present embodiment, among the moved facial pixels, any two facial pixels symmetric about the target central axis are moved by equal distances.

It will be appreciated that, since facial organs are symmetric, when any two facial pixels symmetric about the target central axis are moved by equal distances, the resulting thinned face image remains symmetric. Therefore, compared with a scheme in which two facial pixels symmetric about the target central axis are moved by unequal distances, this optional implementation can perform more accurate face-thinning on the target facial image.
In some optional implementations of the present embodiment, the distance each moved facial pixel is moved toward the target central axis is positively correlated with that pixel's distance to the target central axis.

It will be appreciated that, within the side face region, pixels farther from the target central axis usually need to be moved a greater distance to achieve a good thinning effect. Therefore, when the distance each moved facial pixel is moved toward the target central axis is positively correlated with the pixel's distance to the axis, this optional implementation can, compared with a scheme lacking such positive correlation, perform more accurate face-thinning on the target facial image and achieve a better thinning effect.
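The positive-correlation and symmetry constraints above can be expressed as a simple displacement schedule. The proportional form and the constant `k` below are assumptions for illustration; the patent requires only positive correlation, not strict proportionality.

```python
def move_distance(x, axis_col, k=0.5):
    # Displacement toward the axis, positively correlated with the pixel's
    # horizontal distance to the axis (k is an assumed constant).
    return k * abs(x - axis_col)

axis = 50
# Positive correlation: the farther pixel moves the greater distance.
d_near, d_far = move_distance(55, axis), move_distance(80, axis)
# Symmetry: pixels mirrored about the axis move equal distances.
d_left, d_right = move_distance(30, axis), move_distance(70, axis)
print(d_near, d_far, d_left, d_right)  # → 2.5 15.0 10.0 10.0
```

Because the schedule depends only on `abs(x - axis_col)`, the equal-distance property for mirrored pixel pairs falls out automatically, which is exactly why the symmetric-movement implementation keeps the thinned face symmetric.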
It should be noted that, in addition to what is recorded above, the present embodiment may also include features and effects identical or similar to those of the embodiment corresponding to Fig. 2, which are not described again here.

As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the process 400 of the method for outputting information in the present embodiment highlights the step of performing face-thinning on the target facial image. The scheme described in the present embodiment can therefore realize face-thinning of a facial image based on the generated side face location information, thereby enriching the ways of processing images.
With continued reference to Fig. 5, a process 500 of an embodiment of a method for processing video according to the present disclosure is shown. The method for processing video includes the following steps:

Step 501: obtain the video currently being shot and presented as the target video. Then execute step 502.

In the present embodiment, the executing subject of the method for processing video (for example, the terminal device shown in Fig. 1) may obtain, from a local image acquisition device (for example, a camera), the video that device is currently shooting, and may present that video using a local display device (for example, a display screen). Here, the target video is the video currently being shot by the image acquisition device and presented by the display device.

The target video may be any of various videos. For example, the target video may be a video obtained by shooting the face of a person or an animal.

It will be appreciated that, when the target video is obtained by shooting a person's face, all or some of its video frames may contain a face object. Here, a face object may be the image of a person's face presented in a video frame.

It should be understood that the video frames in a video (including the target video) are images.
Step 502: select a video frame containing a face object from the target video as the target facial image. Then execute step 503.

In the present embodiment, the executing subject may select a video frame containing a face object from the target video obtained in step 501 as the target facial image.

Here, the executing subject may select, as the target facial image, a currently presented video frame of the target video that contains a face object, or it may select a video frame of the target video that contains a face object and has not yet been presented.
Step 503: based on a pre-trained side face location model, generate side face location information for the side face region in the target facial image. Then execute step 504.

In the present embodiment, the executing subject may, based on a pre-trained side face location model, generate side face location information for the side face region in the target facial image. Here, the side face location model is used to determine the position of the side face region in an input image; the side face region includes the facial pixels corresponding to the cheekbones, and the side face location information includes the location information of those facial pixels.

In the present embodiment, step 503 is substantially the same as step 202 in the embodiment corresponding to Fig. 2, and is not described again here.
Step 504: based on the generated side face location information, perform face-thinning processing on the target facial image to obtain a thinned face image. Then execute step 505.

In the present embodiment, the executing subject may, based on the generated side face location information, perform face-thinning processing on the target facial image to obtain a thinned face image.

As an example, the executing subject may use GPUImage to change the coordinates and colors of each pixel in the side face region indicated by the side face location information, thereby realizing face-thinning of the target facial image and obtaining a thinned face image. GPUImage is an open-source framework for processing images and video on the graphics processing unit (GPU).
In some optional implementations of the present embodiment, the executing subject may also perform step 504 as follows:

Move each facial pixel included in the side face region of the target facial image in a direction perpendicular to the target central axis and toward the target central axis, so as to perform face-thinning processing and obtain a thinned face image. Here, the target central axis is the straight line through the point indicating the glabella and the point indicating the philtrum (the renzhong acupoint) in the facial image.

Here, the target image region may be an image region that includes facial pixels. It may be the image region obtained after applying liquefaction to each facial pixel included in the side face region, or it may be an image region derived from a preset algorithm and centered on a facial pixel, for example a circular image region whose center is the facial pixel and whose radius is a predetermined distance.

It should be understood that, during pixel movement, the pixels on the facial contour of the face object in the target facial image generally all need to be moved; face-thinning of the target facial image is thereby achieved.
As an example, the moving distance of each moved facial pixel may be a predetermined distance value, or a value randomly generated within a predetermined distance range.

In some optional implementations of the present embodiment, among the moved facial pixels, any two facial pixels symmetric about the target central axis are moved by equal distances.

It will be appreciated that, since facial organs are symmetric, when any two facial pixels symmetric about the target central axis are moved by equal distances, the resulting thinned face image remains symmetric. Therefore, compared with a scheme in which two facial pixels symmetric about the target central axis are moved by unequal distances, this optional implementation can perform more accurate face-thinning on the target facial image.

In some optional implementations of the present embodiment, the distance each moved facial pixel is moved toward the target central axis is positively correlated with that facial pixel's distance to the target central axis.

It will be appreciated that, within the side face region, pixels farther from the target central axis usually need to be moved a greater distance to achieve a good thinning effect. Therefore, when the distance each moved facial pixel is moved toward the target central axis is positively correlated with the facial pixel's distance to the axis, this optional implementation can, compared with a scheme lacking such positive correlation, perform more accurate face-thinning on the target facial image and achieve a better thinning effect.
Step 505: present the thinned face image in place of the target facial image.

In the present embodiment, the executing subject may present the thinned face image in place of the target facial image.

Here, before the executing subject presents the thinned face image, it may or may not have presented the target facial image; the present embodiment does not limit this.

It should be understood that the executing subject may shoot video in real time through a video capture device mounted on it and present that video. Because a certain amount of time elapses between the executing subject obtaining the video and presenting it, and likewise between obtaining the video and completing step 505, the executing subject may or may not present the target facial image before presenting the processed image. In addition, a technician or a user may also configure, according to actual needs, whether the target facial image is presented before the thinned face image is presented.
In some optional implementations of the present embodiment, after performing step 505, the executing subject may further perform the following step 506: determine whether the current target facial image is the last frame in the target video. If not, perform the following step 507: take a facial video frame of the target video subsequent to the current target facial image as the new target facial image, and continue performing steps 503 to 505 based on the new target facial image.

Here, the subsequent facial video frame of the target facial image may be the facial video frame immediately following the target facial image, or it may be a facial video frame located after the target facial image and separated from it by a predetermined number (for example, 1 or 2) of facial video frames.
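The frame-by-frame loop of steps 502 to 507 can be sketched as follows; `contains_face` and `thin_frame` are hypothetical stand-ins for the face selection of step 502 and the location-plus-thinning pipeline of steps 503 and 504, and the string frames are placeholders for real video frames.

```python
def process_video(frames, contains_face, thin_frame):
    # For each frame of the target video: if it contains a face object,
    # present the thinned frame in its place (steps 503-505); otherwise
    # present the frame unchanged. The loop continuing to the last frame
    # plays the role of steps 506 and 507.
    presented = []
    for frame in frames:
        if contains_face(frame):
            presented.append(thin_frame(frame))
        else:
            presented.append(frame)
    return presented

frames = ["face1", "scenery", "face2"]
out = process_video(frames,
                    contains_face=lambda f: f.startswith("face"),
                    thin_frame=lambda f: f + "_thinned")
print(out)  # → ['face1_thinned', 'scenery', 'face2_thinned']
```

Frames without a face object pass through untouched, matching the method's selection of only face-containing frames as target facial images.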
The method for processing video provided by the above embodiment of the present disclosure obtains the video currently being shot and presented as the target video; then selects from the target video a video frame containing a face object as the target facial image and performs the following processing steps: based on a pre-trained side face location model, generating side face location information for the side face region in the target facial image, wherein the side face location model is used to determine the position of the side face region in an input image, the side face region includes the facial pixels corresponding to the cheekbones, and the side face location information includes the location information of those facial pixels; based on the generated side face location information, performing face-thinning processing on the target facial image to obtain a thinned face image; and presenting the thinned face image in place of the target facial image. Real-time face-thinning of the facial images contained in the currently shot and presented video is thereby achieved, with the thinned facial images presented to the user in place of the target facial images, which enriches the ways in which video can be processed and presented.
With further reference to Fig. 6, as an implementation of the method shown in Fig. 2, the present disclosure provides an embodiment of an apparatus for outputting information. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2. In addition to the features recorded below, the apparatus embodiment may also include features identical or corresponding to those of the method embodiment shown in Fig. 2, and it produces effects identical or corresponding to those of that method embodiment. The apparatus may specifically be applied in various electronic devices.

As shown in Fig. 6, the apparatus 600 for outputting information of the present embodiment includes: a first acquisition unit 601 configured to obtain a target facial image; a generation unit 602 configured to generate, based on a pre-trained side face location model, side face location information for the side face region in the target facial image, wherein the side face location model is used to determine the position of the side face region in an input image, the side face region includes the facial pixels corresponding to the cheekbones, and the side face location information includes the location information of those facial pixels; and an output unit 603 configured to output the generated side face location information.

In the present embodiment, the first acquisition unit 601 of the apparatus 600 for outputting information may obtain the target facial image from another electronic device, or locally, through a wired or wireless connection. Here, the target facial image may be any facial image for which the side face location information of the side face region is to be determined. The target facial image may be an image of a human face or an image of an animal's face. The side face location information may be information used to characterize the position of the side face region in the facial image.

In the present embodiment, the generation unit 602 may generate, based on a pre-trained side face location model, side face location information for the side face region in the target facial image. Here, the side face location model is used to determine the position of the side face region in an input image. The side face region includes the facial pixels corresponding to the cheekbones, and the side face location information includes the location information of those facial pixels. The side face location information may be used to indicate the position of the side face region in the facial image.

In the present embodiment, the output unit 603 may output the generated side face location information.
In some optional implementations of the present embodiment, the apparatus 600 further includes: a processing unit (not shown) configured to perform face-thinning processing on the target facial image based on the generated side face location information, obtaining a thinned face image.

In some optional implementations of the present embodiment, the apparatus 600 further includes: a display unit (not shown) configured to present the thinned face image in place of the target facial image.

In some optional implementations of the present embodiment, the side face region further includes the facial pixels corresponding to at least one of the following: the temples, the mandible, the trisection points between the cheekbones and the temples, and the trisection points between the cheekbones and the mandible; and the side face location information further includes the location information of the additional facial pixels included in the side face region.

In some optional implementations of the present embodiment, the processing unit includes: a moving module (not shown) configured to move each facial pixel included in the side face region of the target facial image in a direction perpendicular to the target central axis and toward the target central axis, so as to perform face-thinning processing and obtain a thinned face image. Here, the target central axis is the straight line through the point indicating the glabella and the point indicating the philtrum (the renzhong acupoint) in the facial image.

In some optional implementations of the present embodiment, among the moved facial pixels, any two facial pixels symmetric about the target central axis are moved by equal distances.

In some optional implementations of the present embodiment, the distance each moved facial pixel is moved toward the target central axis is positively correlated with that facial pixel's distance to the target central axis.
In the apparatus provided by the above embodiment of the present disclosure for outputting information, the first acquisition unit 601 obtains a target facial image; then the generation unit 602 generates, based on a pre-trained side face location model, side face location information for the side face region in the target facial image, wherein the side face location model is used to determine the position of the side face region in an input image, the side face region includes the facial pixels corresponding to the cheekbones, and the side face location information includes the location information of those facial pixels; and finally the output unit 603 outputs the generated side face location information. The position of the cheekbone object in the facial image is thereby determined, which facilitates finer-grained processing of the facial image based on the cheekbone position and enriches the ways in which facial images can be processed.
With further reference to Fig. 7, as an implementation of the method shown in Fig. 5, the present disclosure provides an embodiment of an apparatus for processing video. This apparatus embodiment corresponds to the method embodiment shown in Fig. 5. In addition to the features recorded below, the apparatus embodiment may also include features identical or corresponding to those of the method embodiment shown in Fig. 5, and it produces effects identical or corresponding to those of that method embodiment. The apparatus may specifically be applied in various electronic devices.

As shown in Fig. 7, the apparatus 700 for processing video of the present embodiment includes: a second acquisition unit 701 configured to obtain the video currently being shot and presented as the target video; and an execution unit 702 configured to select from the target video a video frame containing a face object as the target facial image and to perform the following processing steps: based on a pre-trained side face location model, generating side face location information for the side face region in the target facial image, wherein the side face location model is used to determine the position of the side face region in an input image, the side face region includes the facial pixels corresponding to the cheekbones, and the side face location information includes the location information of those facial pixels; based on the generated side face location information, performing face-thinning processing on the target facial image to obtain a thinned face image; and presenting the thinned face image in place of the target facial image.

In the present embodiment, the second acquisition unit 701 of the apparatus 700 for processing video may obtain the target video from another electronic device, or locally, through a wired or wireless connection.

In the present embodiment, the execution unit 702 may select from the target video a video frame containing a face object as the target facial image and perform the following processing steps. First, based on a pre-trained side face location model, it generates side face location information for the side face region in the target facial image, wherein the side face location model is used to determine the position of the side face region in an input image, the side face region includes the facial pixels corresponding to the cheekbones, and the side face location information includes the location information of those facial pixels. Then, based on the generated side face location information, it performs face-thinning processing on the target facial image, obtaining a thinned face image. Finally, it presents the thinned face image in place of the target facial image.

In some optional implementations of the present embodiment, the apparatus 700 further includes: a continuation unit (not shown) configured, in response to the current target facial image not being the last frame in the target video, to take a facial video frame of the target video subsequent to the current target facial image as the new target facial image and to continue performing the processing steps based on the new target facial image.

In the apparatus provided by the above embodiment of the present disclosure for processing video, the second acquisition unit 701 obtains the video currently being shot and presented as the target video; then the execution unit 702 selects from the target video a video frame containing a face object as the target facial image and performs the following processing steps: based on a pre-trained side face location model, generating side face location information for the side face region in the target facial image, wherein the side face location model is used to determine the position of the side face region in an input image, the side face region includes the facial pixels corresponding to the cheekbones, and the side face location information includes the location information of those facial pixels; based on the generated side face location information, performing face-thinning processing on the target facial image to obtain a thinned face image; and presenting the thinned face image in place of the target facial image. Real-time face-thinning of the facial images contained in the currently shot and presented video is thereby achieved, with the thinned facial images presented to the user in place of the target facial images, which enriches the ways in which video can be processed and presented.
Referring now to Fig. 8, a structural schematic diagram of an electronic device (for example, the server or terminal device shown in Fig. 1) 800 suitable for implementing embodiments of the present disclosure is shown. Terminal devices in embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, laptop computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (for example, vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The terminal device/server shown in Fig. 8 is merely an example and should not impose any limitation on the functionality or scope of use of the embodiments of the present disclosure.

As shown in Fig. 8, the electronic device 800 may include a processing device (for example, a central processing unit or a graphics processor) 801, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage device 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data required for the operation of the electronic device 800. The processing device 801, the ROM 802, and the RAM 803 are connected to one another through a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.

In general, the following devices may be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output devices 807 including, for example, a liquid crystal display (LCD), a loudspeaker, and a vibrator; storage devices 808 including, for example, a magnetic tape and a hard disk; and a communication device 809. The communication device 809 may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 8 shows an electronic device 800 with various devices, it should be understood that not all of the illustrated devices are required to be implemented or present; more or fewer devices may alternatively be implemented or present. Each box shown in Fig. 8 may represent one device or, as needed, multiple devices.

In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 809, installed from the storage device 808, or installed from the ROM 802. When the computer program is executed by the processing device 801, the above-described functions defined in the methods of the embodiments of the present disclosure are performed.
It should be noted that the computer-readable medium described in the embodiments of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the embodiments of the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program, where the program may be used by, or in connection with, an instruction execution system, apparatus, or device. In the embodiments of the present disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; it can send, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device. The program code contained on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to: an electric wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.

The above-described computer-readable medium may be included in the above-described electronic device, or it may exist separately without being assembled into the electronic device. The above-described computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: obtain a target facial image; based on a pre-trained side face location model, generate side face location information for the side face region in the target facial image, wherein the side face location model is used to determine the position of the side face region in an input image, the side face region includes the facial pixels corresponding to the cheekbones, and the side face location information includes the location information of those facial pixels; and output the generated side face location information. Alternatively, the programs cause the electronic device to: obtain the video currently being shot and presented as the target video; select from the target video a video frame containing a face object as the target facial image; and perform the following processing steps: based on a pre-trained side face location model, generating side face location information for the side face region in the target facial image, wherein the side face location model is used to determine the position of the side face region in an input image, the side face region includes the facial pixels corresponding to the cheekbones, and the side face location information includes the location information of those facial pixels; based on the generated side face location information, performing face-thinning processing on the target facial image to obtain a thinned face image; and presenting the thinned face image in place of the target facial image.
The computer program code for performing operations of embodiments of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that shown in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in embodiments of the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor; for example, a processor may be described as including a first acquisition unit, a generation unit, and an output unit. Alternatively, a processor may be described as including a second acquisition unit and an execution unit. The names of these units do not, under certain circumstances, constitute a limitation on the units themselves; for example, the first acquisition unit may also be described as "a unit for obtaining a target face image".
The above description is only of preferred embodiments of the present disclosure and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.

Claims (17)

1. A method for outputting information, comprising:
obtaining a target face image;
generating, based on a pre-trained side face location model, side face location information of a side face region in the target face image, wherein the side face location model is used to determine the position of a side face region in an input image, the side face region comprises face pixels corresponding to the cheekbone, and the side face location information comprises location information of the face pixels corresponding to the cheekbone; and
outputting the generated side face location information.
2. The method according to claim 1, wherein the method further comprises:
performing face-thinning processing on the target face image based on the generated side face location information to obtain a post-thinning face image.
3. The method according to claim 2, wherein the method further comprises:
presenting the post-thinning face image in place of the target face image.
4. The method according to claim 2, wherein the side face region further comprises face pixels corresponding to at least one of the following: the temple, the mandible, trisection points between the cheekbone and the temple, and trisection points between the cheekbone and the mandible; and
the side face location information further comprises location information of the further face pixels included in the side face region.
5. The method according to any one of claims 2-4, wherein performing face-thinning processing on the target face image based on the generated side face location information to obtain a post-thinning face image comprises:
moving each face pixel included in the side face region of the target face image in a direction perpendicular to a target central axis and toward the target central axis, so as to perform the face-thinning processing and obtain the post-thinning face image;
wherein the target central axis is the straight line passing through a point indicating the glabella (the point between the eyebrows) and a point indicating the philtrum in the face image.
6. The method according to claim 5, wherein, among the moved face pixels, two face pixels symmetric about the target central axis are moved by equal distances.
7. The method according to claim 5, wherein the distance by which each moved face pixel is moved toward the target central axis is positively correlated with the distance from that face pixel to the target central axis.
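The pixel movement of claims 5-7 amounts to shrinking each pixel's perpendicular offset from the target central axis. The sketch below is illustrative only (the function name `thin_point` and the `strength` parameter are assumptions, not from the patent): each point moves perpendicular to and toward the axis by a distance proportional to its distance from the axis (claim 7), which automatically moves two points symmetric about the axis by equal distances (claim 6).

```python
import math

def thin_point(p, glabella, philtrum, strength=0.2):
    """Move point p perpendicular to the target central axis (the line
    through the glabella and philtrum points) and toward that axis, by
    `strength` times p's distance from the axis."""
    ax, ay = glabella
    bx, by = philtrum
    # Unit vector along the axis.
    dx, dy = bx - ax, by - ay
    norm = math.hypot(dx, dy)
    ux, uy = dx / norm, dy / norm
    # Component of (p - glabella) perpendicular to the axis.
    px, py = p[0] - ax, p[1] - ay
    along = px * ux + py * uy
    perp_x = px - along * ux
    perp_y = py - along * uy
    # Shrink the perpendicular component toward the axis; the shift grows
    # with the pixel's distance from the axis, and symmetric pixels on
    # opposite sides shift by equal distances.
    return (p[0] - strength * perp_x, p[1] - strength * perp_y)

# Vertical axis x = 0: points at x = +10 and x = -10 each move 2 toward it.
axis_a, axis_b = (0.0, 0.0), (0.0, 100.0)
print(thin_point((10.0, 50.0), axis_a, axis_b))   # (8.0, 50.0)
print(thin_point((-10.0, 50.0), axis_a, axis_b))  # (-8.0, 50.0)
```

A practical implementation would warp the image (e.g. by remapping pixel coordinates) rather than move pixel values one by one, but the geometry is the same.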
8. A method for processing a video, comprising:
obtaining a video currently being captured and presented as a target video;
selecting a video frame containing a face object from the target video as a target face image, and performing the following processing steps:
generating, based on a pre-trained side face location model, side face location information of a side face region in the target face image, wherein the side face location model is used to determine the position of a side face region in an input image, the side face region comprises face pixels corresponding to the cheekbone, and the side face location information comprises location information of the face pixels corresponding to the cheekbone;
performing face-thinning processing on the target face image based on the generated side face location information to obtain a post-thinning face image; and
presenting the post-thinning face image in place of the target face image.
9. The method according to claim 8, wherein the method further comprises:
in response to the current target face image not being the last frame in the target video, taking a face video frame in the target video subsequent to the current target face image as a new target face image, and continuing to perform the processing steps based on the new target face image.
10. An apparatus for outputting information, comprising:
a first acquisition unit, configured to obtain a target face image;
a generation unit, configured to generate, based on a pre-trained side face location model, side face location information of a side face region in the target face image, wherein the side face location model is used to determine the position of a side face region in an input image, the side face region comprises face pixels corresponding to the cheekbone, and the side face location information comprises location information of the face pixels corresponding to the cheekbone; and
an output unit, configured to output the generated side face location information.
11. The apparatus according to claim 10, wherein the apparatus further comprises:
a processing unit, configured to perform face-thinning processing on the target face image based on the generated side face location information to obtain a post-thinning face image.
12. The apparatus according to claim 11, wherein the apparatus further comprises:
a display unit, configured to present the post-thinning face image in place of the target face image.
13. The apparatus according to claim 11, wherein the side face region further comprises face pixels corresponding to at least one of the following: the temple, the mandible, trisection points between the cheekbone and the temple, and trisection points between the cheekbone and the mandible; and
the side face location information further comprises location information of the further face pixels included in the side face region.
14. The apparatus according to any one of claims 11-13, wherein the processing unit comprises:
a moving module, configured to move each face pixel included in the side face region of the target face image in a direction perpendicular to a target central axis and toward the target central axis, so as to perform the face-thinning processing and obtain the post-thinning face image;
wherein the target central axis is the straight line passing through a point indicating the glabella and a point indicating the philtrum in the face image.
15. An apparatus for processing a video, comprising:
a second acquisition unit, configured to obtain a video currently being captured and presented as a target video; and
an execution unit, configured to select a video frame containing a face object from the target video as a target face image, and to perform the following processing steps:
generating, based on a pre-trained side face location model, side face location information of a side face region in the target face image, wherein the side face location model is used to determine the position of a side face region in an input image, the side face region comprises face pixels corresponding to the cheekbone, and the side face location information comprises location information of the face pixels corresponding to the cheekbone;
performing face-thinning processing on the target face image based on the generated side face location information to obtain a post-thinning face image; and
presenting the post-thinning face image in place of the target face image.
16. An electronic device, comprising:
one or more processors; and
a storage device having one or more programs stored thereon,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method according to any one of claims 1-9.
17. A computer-readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1-9.
CN201910477387.2A 2019-06-03 2019-06-03 Method and apparatus for output information Pending CN110188711A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910477387.2A CN110188711A (en) 2019-06-03 2019-06-03 Method and apparatus for output information

Publications (1)

Publication Number Publication Date
CN110188711A true CN110188711A (en) 2019-08-30

Family

ID=67719969

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910477387.2A Pending CN110188711A (en) 2019-06-03 2019-06-03 Method and apparatus for output information

Country Status (1)

Country Link
CN (1) CN110188711A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104850847A (en) * 2015-06-02 2015-08-19 上海斐讯数据通信技术有限公司 Image optimization system and method with automatic face thinning function
CN105447823A (en) * 2014-08-07 2016-03-30 联想(北京)有限公司 Image processing method and electronic device
CN107705248A (en) * 2017-10-31 2018-02-16 广东欧珀移动通信有限公司 Image processing method, device, electronic equipment and computer-readable recording medium
CN108470322A (en) * 2018-03-09 2018-08-31 北京小米移动软件有限公司 Handle the method, apparatus and readable storage medium storing program for executing of facial image
CN108491780A (en) * 2018-03-16 2018-09-04 广东欧珀移动通信有限公司 Image landscaping treatment method, apparatus, storage medium and terminal device
CN109299714A (en) * 2017-07-25 2019-02-01 上海中科顶信医学影像科技有限公司 ROI template generation method, ROI extracting method and system, equipment, medium
CN109614902A (en) * 2018-11-30 2019-04-12 深圳市脸萌科技有限公司 Face image processing process, device, electronic equipment and computer storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111145110A (en) * 2019-12-13 2020-05-12 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111145110B (en) * 2019-12-13 2021-02-19 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination