CN107820027A - Video character dress-up method, apparatus, computing device and computer storage medium - Google Patents
Video character dress-up method, apparatus, computing device and computer storage medium Download PDF Info
- Publication number
- CN107820027A CN107820027A CN201711066571.5A CN201711066571A CN107820027A CN 107820027 A CN107820027 A CN 107820027A CN 201711066571 A CN201711066571 A CN 201711066571A CN 107820027 A CN107820027 A CN 107820027A
- Authority
- CN
- China
- Prior art keywords
- image
- processing
- video
- dressed
- effect textures
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/60—Rotation of whole images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/272—Means for inserting a foreground image in a background image, i.e. inlay, outlay
- H04N2005/2726—Means for inserting a foreground image in a background image, i.e. inlay, outlay for simulating a person's appearance, e.g. hair style, glasses, clothes
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a video character dress-up method, apparatus, computing device and computer storage medium. The method includes: acquiring in real time a current frame image containing a specific object in a video; performing scene segmentation on the current frame image to obtain a foreground image for the specific object and a facial region image of the specific object to be made up; drawing a makeup effect texture corresponding to that facial region image; fusing the makeup effect texture with the foreground image to obtain a processed frame image; overwriting the current frame image with the processed frame image to obtain processed video data; and displaying the processed video data. By adopting a deep learning method, the invention performs scene segmentation efficiently and with high accuracy, so that a beautification effect can be added precisely and quickly to the facial region of a character in a video, improving video data processing efficiency, optimizing the video data processing mode, and beautifying the display effect of the video data.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a video character dress-up method, apparatus, computing device and computer storage medium.
Background technology
With the development of science and technology, image capture devices have steadily improved: captured images are clearer, and resolution and display quality have increased greatly. However, an existing recorded video is merely dull recorded material in itself and may not meet user needs; users wish to apply personalized processing to a video, for example, to add a beautification effect to the facial region of a character in the video. In the prior art, the frame images of a video are mostly processed by means of adaptive blurring so that the characters in them appear beautified, but the display effect of the video obtained in this way is poor: the video is not clear enough, the background is blurred, and the result lacks authenticity.
The content of the invention
In view of the above problems, the present invention is proposed in order to provide a video character dress-up method, apparatus, computing device and computer storage medium that overcome, or at least partially solve, the above problems.
According to one aspect of the invention, a video character dress-up method is provided, the method including:
acquiring in real time a current frame image containing a specific object in a video being shot and/or recorded by an image capture device; or acquiring in real time a current frame image containing a specific object in a video currently being played;
performing scene segmentation processing on the current frame image to obtain a foreground image for the specific object and a facial region image of the specific object to be made up;
drawing a makeup effect texture corresponding to the facial region image to be made up;
fusing the makeup effect texture with the foreground image to obtain a processed frame image;
overwriting the current frame image with the processed frame image to obtain processed video data;
displaying the processed video data.
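In outline, the claimed method is a per-frame pipeline: segment, draw, fuse, overwrite, display. The following is a minimal sketch of that control flow only; every helper function is a hypothetical stand-in, not the invention's actual segmentation or rendering.

```python
# Minimal sketch of the claimed per-frame pipeline. All helpers are
# hypothetical stand-ins, not the patent's actual implementation.

def segment_scene(frame):
    # Pretend segmentation: the whole frame is foreground and also
    # doubles as the "facial region image to be made up".
    return frame, frame  # (foreground_image, facial_region_image)

def draw_makeup_texture(facial_region):
    # A real system would draw e.g. an eyebrow or lip-gloss texture here.
    return {"texture_for": len(facial_region)}

def fuse(texture, foreground):
    # A real system would blend the texture pixels into the foreground.
    return {"fused": True, "texture": texture, "foreground": foreground}

def process_frame(frame):
    foreground, facial_region = segment_scene(frame)
    texture = draw_makeup_texture(facial_region)
    return fuse(texture, foreground)  # overwrites the original frame

video = [[0, 1, 2], [3, 4, 5]]          # two toy "frames"
processed_video = [process_frame(f) for f in video]
```

Each processed frame then replaces the original in the video data before display, as the claims describe.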
Further, drawing a makeup effect texture corresponding to the facial region image to be made up further comprises:
extracting key information of the facial region to be made up from the facial region image, and drawing the makeup effect texture according to the key information.
Further, extracting the key information of the facial region to be made up from the facial region image further comprises:
the key information being key point information;
extracting key point information of the edge of the facial region to be made up from the facial region image.
Further, drawing the makeup effect texture according to the key information further comprises:
the key information being key point information;
looking up a base makeup effect texture corresponding to the key point information; or obtaining a base makeup effect texture corresponding to a user selection operation;
calculating, according to the key point information, position information between at least two key points having a symmetric relation;
processing the base makeup effect texture according to the position information to obtain the makeup effect texture.
Further, processing the base makeup effect texture according to the position information to obtain the makeup effect texture further comprises:
scaling the base makeup effect texture according to distance information in the position information.
Further, processing the base makeup effect texture according to the position information to obtain the makeup effect texture further comprises:
rotating the base makeup effect texture according to rotation angle information in the position information.
Further, fusing the makeup effect texture with the foreground image to obtain a processed frame image further comprises:
fusing the background image obtained by performing scene segmentation on the current frame image, or a preset background image, together with the makeup effect texture and the foreground image, to obtain the processed frame image.
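The patent does not fix a fusion formula; one common way to realize this step is per-pixel alpha compositing, with the makeup texture composited over the foreground and the result composited over the background. A toy sketch on grayscale pixel lists, purely illustrative:

```python
def fuse_pixel(bg, fg, fg_alpha, tex, tex_alpha):
    """Composite: makeup texture over foreground, result over background.
    All values are grayscale floats; alphas are in [0, 1]."""
    made_up_fg = tex * tex_alpha + fg * (1.0 - tex_alpha)
    return made_up_fg * fg_alpha + bg * (1.0 - fg_alpha)

def fuse_images(background, foreground, fg_mask, texture, tex_mask):
    # Flat lists of pixels stand in for images; zip walks them in lockstep.
    return [
        fuse_pixel(bg, fg, fa, tx, ta)
        for bg, fg, fa, tx, ta in zip(background, foreground,
                                      fg_mask, texture, tex_mask)
    ]

# 4-pixel toy frame: pixels 1-2 are foreground, pixel 2 also carries texture.
background = [10.0, 10.0, 10.0, 10.0]
foreground = [0.0, 200.0, 200.0, 0.0]
fg_mask    = [0.0, 1.0, 1.0, 0.0]
texture    = [0.0, 0.0, 255.0, 0.0]
tex_mask   = [0.0, 0.0, 1.0, 0.0]
result = fuse_images(background, foreground, fg_mask, texture, tex_mask)
```

Substituting a preset background simply means supplying a different `background` list; the compositing order is unchanged.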
Further, after the processed frame image is obtained, the method also includes:
performing tone processing, illumination processing and/or brightness processing on the processed frame image.
Further, displaying the processed video data further comprises: displaying the processed video data in real time;
the method also includes: uploading the processed video data to a cloud server.
Further, uploading the processed video data to a cloud server further comprises:
uploading the processed video data to a cloud video platform server, so that the cloud video platform server displays the video data on a cloud video platform.
Further, uploading the processed video data to a cloud server further comprises:
uploading the processed video data to a cloud live-broadcast server, so that the cloud live-broadcast server pushes the video data in real time to viewing subscriber clients.
Further, uploading the processed video data to a cloud server further comprises:
uploading the processed video data to a cloud public-account server, so that the cloud public-account server pushes the video data to clients following the public account.
According to another aspect of the present invention, a video character dress-up apparatus is provided, the apparatus including:
an acquisition module, adapted to acquire in real time a current frame image containing a specific object in a video being shot and/or recorded by an image capture device, or to acquire in real time a current frame image containing a specific object in a video currently being played;
a segmentation module, adapted to perform scene segmentation on the current frame image to obtain a foreground image for the specific object and a facial region image of the specific object to be made up;
a texture processing module, adapted to draw a makeup effect texture corresponding to the facial region image to be made up;
a fusion module, adapted to fuse the makeup effect texture with the foreground image to obtain a processed frame image;
an overlay module, adapted to overwrite the current frame image with the processed frame image to obtain processed video data;
a display module, adapted to display the processed video data.
Further, the texture processing module is further adapted to:
extract key information of the facial region to be made up from the facial region image, and draw the makeup effect texture according to the key information.
Further, the key information is key point information; the texture processing module is further adapted to:
extract key point information of the edge of the facial region to be made up from the facial region image.
Further, the key information is key point information; the texture processing module is further adapted to:
look up a base makeup effect texture corresponding to the key point information; or obtain a base makeup effect texture corresponding to a user selection operation;
calculate, according to the key point information, position information between at least two key points having a symmetric relation;
process the base makeup effect texture according to the position information to obtain the makeup effect texture.
Further, the texture processing module is further adapted to:
scale the base makeup effect texture according to distance information in the position information.
Further, the texture processing module is further adapted to:
rotate the base makeup effect texture according to rotation angle information in the position information.
Further, the fusion module is further adapted to:
fuse the background image obtained by performing scene segmentation on the current frame image, or a preset background image, together with the makeup effect texture and the foreground image, to obtain the processed frame image.
Further, the apparatus also includes:
an image processing module, adapted to perform tone processing, illumination processing and/or brightness processing on the processed frame image.
Further, the display module is further adapted to display the processed video data in real time;
the apparatus also includes:
an upload module, adapted to upload the processed video data to a cloud server.
Further, the upload module is further adapted to:
upload the processed video data to a cloud video platform server, so that the cloud video platform server displays the video data on a cloud video platform.
Further, the upload module is further adapted to:
upload the processed video data to a cloud live-broadcast server, so that the cloud live-broadcast server pushes the video data in real time to viewing subscriber clients.
Further, the upload module is further adapted to:
upload the processed video data to a cloud public-account server, so that the cloud public-account server pushes the video data to clients following the public account.
According to another aspect of the invention, a computing device is provided, including: a processor, a memory, a communication interface and a communication bus, the processor, the memory and the communication interface communicating with one another through the communication bus;
the memory is used to store at least one executable instruction, and the executable instruction causes the processor to perform the operations corresponding to the above video character dress-up method.
In accordance with a further aspect of the present invention, a computer storage medium is provided, the storage medium storing at least one executable instruction, the executable instruction causing a processor to perform the operations corresponding to the above video character dress-up method.
According to the technical solution provided by the invention, a current frame image containing a specific object in a video being shot and/or recorded by an image capture device, or in a video currently being played, is acquired in real time; scene segmentation is performed on the current frame image to obtain a foreground image for the specific object and a facial region image of the specific object to be made up; a makeup effect texture corresponding to that facial region image is then drawn; the makeup effect texture is fused with the foreground image to obtain a processed frame image; the current frame image is overwritten with the processed frame image to obtain processed video data, and the processed video data is displayed. By adopting a deep learning method, the invention performs scene segmentation efficiently and with high accuracy; based on the resulting segmentation, the technical solution provided by the invention can add a beautification effect precisely and quickly to the facial region of a character in the video, improving video data processing efficiency, optimizing the video data processing mode, and beautifying the display effect of the video data.
The above is only a general outline of the technical solution of the present invention. In order that the technical means of the present invention may be better understood and practiced according to the content of the specification, and in order that the above and other objects, features and advantages of the present invention may become more apparent, embodiments of the present invention are particularly set forth below.
Brief description of the drawings
Various other advantages and benefits will become clear to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The accompanying drawings are only for the purpose of showing the preferred embodiments and are not to be considered a limitation of the present invention. Throughout the drawings, identical parts are denoted by the same reference numerals. In the drawings:
Fig. 1 shows a schematic flow chart of a video character dress-up method according to an embodiment of the invention;
Fig. 2 shows a schematic flow chart of a video character dress-up method according to another embodiment of the invention;
Fig. 3 shows a structural block diagram of a video character dress-up apparatus according to an embodiment of the invention;
Fig. 4 shows a structural block diagram of a video character dress-up apparatus according to another embodiment of the invention;
Fig. 5 shows a structural schematic diagram of a computing device according to an embodiment of the invention.
Embodiments
Exemplary embodiments of the disclosure are described more fully below with reference to the accompanying drawings. Although exemplary embodiments of the disclosure are shown in the drawings, it should be understood that the disclosure may be embodied in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the present invention will be understood more thoroughly, and so that the scope of the disclosure can be fully conveyed to those skilled in the art.
Fig. 1 shows a schematic flow chart of a video character dress-up method according to an embodiment of the invention. As shown in Fig. 1, the method comprises the following steps:
Step S100: acquire in real time a current frame image containing a specific object in a video being shot and/or recorded by an image capture device; or acquire in real time a current frame image containing a specific object in a video currently being played.
In the present embodiment the image capture device is illustrated by taking a mobile terminal as an example. The current frame image of a video being recorded, or of a video being shot, by the camera of the mobile terminal is obtained in real time. Since the present invention processes a specific object, only current frame images containing the specific object are obtained. Besides the current frame image containing the specific object in the video being shot and/or recorded by the image capture device, the current frame image containing the specific object in a currently played video may also be obtained in real time. The specific object may be, for example, a human body.
Step S101: perform scene segmentation on the current frame image to obtain a foreground image for the specific object and a facial region image of the specific object to be made up.
A deep learning method can be used when performing scene segmentation on the current frame image. Deep learning is a branch of machine learning based on representation learning of data. An observation (such as an image) can be represented in many ways, for example as a vector of per-pixel intensity values, or more abstractly as a series of edges, regions of particular shapes, and so on; certain specific representations make it easier to learn tasks from examples (for example, face recognition or facial expression recognition). Scene segmentation is performed on the current frame image using a deep learning segmentation method, yielding a scene segmentation result corresponding to the current frame image. Specifically, a scene segmentation network obtained by a deep learning method is used to perform scene segmentation on the current frame image, obtaining the foreground image for the specific object, the background image, and the facial region image of the specific object to be made up, where the foreground image may contain only the specific object and the background image is the part of the current frame image other than the foreground image. Taking a human body as the specific object, the facial region image to be made up may include images of the facial features region of the human body as well as of corresponding regions such as the forehead, chin and cheeks, where the facial features region refers to the regions of the individual facial features such as the eyebrows; specifically, the facial features region may include the regions corresponding to the eyebrows, eyes, ears, nose, mouth and the like.
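The segmentation output described above (a foreground image plus per-feature facial regions) can be pictured as splitting a per-pixel label map into masks. The label names below are invented for this sketch; a real scene segmentation network defines its own classes.

```python
# Toy illustration of splitting a per-pixel label map into the foreground
# mask and per-feature facial region masks. Labels are invented here.
FACIAL_LABELS = {"eyebrow", "eye", "ear", "nose", "mouth", "cheek"}

def split_segmentation(label_map):
    # Every non-background pixel belongs to the foreground (the person).
    foreground_mask = [[lab != "bg" for lab in row] for row in label_map]
    # One boolean mask per facial feature region.
    facial_masks = {
        name: [[lab == name for lab in row] for row in label_map]
        for name in FACIAL_LABELS
    }
    return foreground_mask, facial_masks

labels = [
    ["bg", "eyebrow", "bg"],
    ["bg", "eye",     "bg"],
    ["bg", "mouth",   "bg"],
]
fg, masks = split_segmentation(labels)
```

Each facial mask then selects the pixels of the corresponding facial region image to be made up.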
Step S102: draw a makeup effect texture corresponding to the facial region image to be made up.
After the facial region image to be made up has been obtained, a corresponding makeup effect texture is drawn for it. Those skilled in the art may set the makeup effect texture for the facial region image according to actual needs; no limitation is imposed here. For example, if the facial region image to be made up is an eyebrow region image, an eyebrow-shape effect texture corresponding to the eyebrow region image is drawn; if it is an eye region image, an eye-shadow effect texture corresponding to the eye region image is drawn; if it is a mouth region image, a lip-gloss effect texture corresponding to the mouth region image is drawn; if it is a cheek region image, a blusher effect texture corresponding to the cheek region image is drawn.
Step S103: fuse the makeup effect texture with the foreground image to obtain a processed frame image.
After the makeup effect texture has been drawn, it is fused with the foreground image so that the makeup effect texture can be merged realistically and accurately with the specific object in the foreground image, thereby obtaining the processed frame image.
Step S104: overwrite the current frame image with the processed frame image to obtain processed video data.
The original current frame image is directly overwritten with the processed frame image, so that the processed video data is obtained directly. Meanwhile, the recording user can immediately see the processed frame image.
When the processed frame image is obtained, it can directly cover the original current frame image. The covering is fast, typically completed within 1/24 second. Because the covering time is relatively short, the human eye does not obviously perceive it; that is, the human eye does not perceive the process in which the original current frame image in the video data is covered. Thus, when the processed video data is subsequently displayed, it is as if the processed video data were being displayed in real time while the video data is shot and/or recorded and/or played, and the user does not perceive the display effect of frame images in the video data being covered.
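The overwrite step can be pictured as an in-place buffer swap that must complete within one frame interval so that the eye cannot perceive it. A toy sketch; the buffer and the 1/24-second budget here are illustrative, not the patent's implementation:

```python
import time

def overwrite_frame(video_buffer, index, processed_frame, budget_s=1/24):
    """Replace a frame in place and report whether the swap finished
    within one frame interval (so the covering is imperceptible)."""
    start = time.perf_counter()
    video_buffer[index] = processed_frame
    elapsed = time.perf_counter() - start
    return elapsed <= budget_s

video_buffer = ["frame0", "frame1", "frame2"]
ok = overwrite_frame(video_buffer, 1, "frame1_processed")
```

A plain list assignment takes microseconds, far inside the budget; in a real player the budget would also cover segmentation, drawing and fusion for the frame.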
Step S105: display the processed video data.
After the processed video data is obtained, it can be displayed in real time, and the user can directly see the display effect of the processed video data.
According to the video character dress-up method provided by this embodiment, a current frame image containing a specific object in a video being shot and/or recorded by an image capture device, or in a video currently being played, is acquired in real time; scene segmentation is performed on the current frame image to obtain a foreground image for the specific object and a facial region image of the specific object to be made up; a makeup effect texture corresponding to that facial region image is then drawn; the makeup effect texture is fused with the foreground image to obtain a processed frame image; the current frame image is overwritten with the processed frame image to obtain processed video data, and the processed video data is displayed. By adopting a deep learning method, scene segmentation is performed efficiently and with high accuracy; based on the resulting segmentation, a beautification effect can be added precisely and quickly to the facial region of a character in the video, improving video data processing efficiency, optimizing the video data processing mode, and beautifying the display effect of the video data.
Fig. 2 shows a schematic flow chart of a video character dress-up method according to another embodiment of the invention. As shown in Fig. 2, the method comprises the following steps:
Step S200: acquire in real time a current frame image containing a specific object in a video being shot and/or recorded by an image capture device; or acquire in real time a current frame image containing a specific object in a video currently being played.
Step S201: perform scene segmentation on the current frame image to obtain a foreground image for the specific object and a facial region image of the specific object to be made up.
Step S202: extract key information of the facial region to be made up from the facial region image, and draw a makeup effect texture according to the key information.
To facilitate drawing the makeup effect texture, it is necessary to extract key information of the facial region to be made up from the facial region image. The key information may specifically be key point information, key region information and/or key line information. The embodiments of the invention are illustrated taking key point information as the key information, but the key information of the invention is not limited to key point information. Using key point information can improve the processing speed and efficiency of drawing makeup effect textures: the makeup effect texture can be drawn directly according to the key point information, without subsequent complicated operations such as further calculation and analysis of the key information. Meanwhile, key point information is easy to extract and is extracted accurately, so that the drawn makeup effect texture is more precise. Specifically, key point information of the edge of the facial region to be made up can be extracted from the facial region image. Taking a mouth region image as the facial region image to be made up, the key point information of the edge of the mouth region is extracted from the mouth region image.
In order to draw a makeup effect texture easily and quickly, many base makeup effect textures can be pre-drawn; then, when drawing the makeup effect texture corresponding to a facial region image to be made up, the corresponding base makeup effect texture is found first and then processed, so that the makeup effect texture is obtained quickly. These base makeup effect textures may include base eyebrow-shape effect textures of different eyebrow shapes, base eye-shadow effect textures of different colors, base lip-gloss effect textures of different colors, base blusher effect textures of different colors, and the like. In addition, for ease of managing these base makeup effect textures, an effect texture library can be established and the base makeup effect textures stored in it.
Specifically, after the key point information of the facial region to be made up has been extracted from the facial region image, the base makeup effect texture corresponding to the key point information can be looked up, or the base makeup effect texture corresponding to a user selection operation can be obtained; then, according to the key point information, the position information between at least two key points having a symmetric relation is calculated; then, according to the position information, the base makeup effect texture is processed to obtain the makeup effect texture. In this way the makeup effect texture can be drawn accurately.
The method may automatically search the effect sticker library, according to the extracted key point information, for the corresponding basic beautification effect sticker. Taking the case where the facial region image to be dressed up is a mouth region image as an example, the extracted key point information is the key point information of the mouth, and the basic beautification effect sticker corresponding to that key point information is looked up in the effect sticker library, which amounts to looking up the basic lip-gloss effect sticker. In addition, in practical applications, to ease use and better meet users' personalised demands, the basic beautification effect stickers contained in the effect sticker library may be presented to the user, who can select one according to his or her own preference; in that case the method obtains the basic beautification effect sticker corresponding to the user's selection operation.
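The two acquisition paths just described — an automatic lookup in the effect sticker library keyed by the extracted key point information, with a user's explicit selection taking precedence — can be sketched as follows. This is an illustrative sketch, not part of the patent; the library contents, file names and function names are all invented:

```python
# Hypothetical effect sticker library, mapping a facial-region label
# (stand-in for the extracted key point information) to a base sticker.
BASE_STICKER_LIBRARY = {
    "mouth": "base_lip_gloss.png",
    "eye":   "base_eye_shadow.png",
    "brow":  "base_brow_shape.png",
}

def select_base_sticker(region_label, user_choice=None):
    """Return the basic beautification effect sticker for a region.

    A user's explicit selection operation overrides the automatic
    library lookup, matching the personalised-choice path in the text.
    """
    if user_choice is not None:
        return user_choice
    return BASE_STICKER_LIBRARY[region_label]
```

For a mouth region image with no user selection, the lookup yields the basic lip-gloss sticker; a user selection simply replaces it.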
Taking the case where the facial region image to be dressed up is an eye region image as an example, the positional information between the key points at the two eye corners may be calculated; taking the case where it is a mouth region image as an example, the positional information between the key points at the two mouth corners may be calculated. The positional information may include distance information and rotation angle information. Specifically, the basic beautification effect sticker may be scaled according to the distance information in the positional information between the at least two key points having a symmetric relation, and/or rotated according to the rotation angle information in that positional information, so as to obtain the beautification effect sticker corresponding to the facial region image to be dressed up.
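One plausible way to compute this positional information — the distance between two symmetric key points and the rotation angle of the line joining them — is sketched below. The patent does not prescribe these formulas; the reference-distance convention is an assumption made for illustration:

```python
import math

def positional_info(p_left, p_right):
    """Distance and rotation angle between two symmetric key points,
    e.g. the two eye corners or the two mouth corners."""
    dx = p_right[0] - p_left[0]
    dy = p_right[1] - p_left[1]
    distance = math.hypot(dx, dy)
    # 0 degrees when the joining line is horizontal; non-zero when tilted.
    angle_deg = math.degrees(math.atan2(dy, dx))
    return distance, angle_deg

def sticker_scale(distance, reference_distance):
    """Scale factor for the basic sticker, relative to the key-point
    distance the sticker was originally drawn for (an assumed convention)."""
    return distance / reference_distance
```

For mouth corners at (40, 100) and (140, 100) the distance is 100 pixels and the angle 0 degrees; a basic sticker drawn for a reference distance of 200 would then be scaled by 0.5.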
Because the distance between the specific object and the image capture device varies while the video is being recorded, the size of the specific object in the current frame image varies, and so does the size of the facial region image to be dressed up obtained through scene segmentation. Taking a human body as the specific object: when the human body is far from the image capture device during recording, it appears small in the current frame image, and its facial region image to be dressed up is correspondingly small; when it is close, it appears large, and its facial region image to be dressed up is correspondingly large. Scaling the basic beautification effect sticker according to the distance information in the positional information between the at least two symmetric key points makes the resulting beautification effect sticker better suit the size of the specific object in the foreground image. For example, when the facial region image to be dressed up obtained through scene segmentation is small, the basic beautification effect sticker may be reduced so as to better suit the foreground image; when it is large, the basic beautification effect sticker may be enlarged.
Furthermore, considering that the specific object in the current frame image captured by the image capture device may not be squarely facing the camera — for example when the human body appears in the current frame image with the head turned — the basic beautification effect sticker also needs to be rotated so that the beautification effect sticker better suits the foreground image. Taking the case where the facial region image to be dressed up is a mouth region image as an example: if the line connecting the two mouth corners is calculated to be rotated 15 degrees to the left, the corresponding basic beautification effect sticker is rotated 15 degrees to the left so as to better suit the foreground image.
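The corrective rotation could be applied, for instance, with a standard 2-D rotation matrix about the sticker's centre. The sketch below rotates coordinates rather than a full image, as a minimal stand-in for the rotation the text describes; it is an assumption for illustration, not the patent's implementation:

```python
import numpy as np

def rotate_about_center(points, angle_deg, center):
    """Rotate 2-D points counter-clockwise about `center` by `angle_deg`.
    Rotating the sticker's corner coordinates this way mirrors rotating
    the basic beautification effect sticker itself."""
    theta = np.radians(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    pts = np.asarray(points, dtype=float) - center
    return pts @ rot.T + center
```

A point at (1, 0) rotated 90 degrees about the origin lands at (0, 1); in the mouth-corner example, the whole sticker would be rotated by the 15-degree angle measured from the key points.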
Step S203: fuse the background image obtained by performing scene segmentation processing on the current frame image, or a preset background image, with the beautification effect sticker and the foreground image to obtain a frame-processed image.
After the beautification effect sticker has been drawn, the background image obtained by performing scene segmentation processing on the current frame image (i.e. the original background of the current frame image) may be fused with the beautification effect sticker and the foreground image to obtain the frame-processed image; alternatively, a preset background image may be fused with the beautification effect sticker and the foreground image to obtain the frame-processed image. The preset background image may be a dynamic background image or a static background image; those skilled in the art may set it as actually needed, which is not limited here.
From front to back, the layer order of the frame-processed image may be: beautification effect sticker, foreground image, background image. When the frame-processed image contains other layers, the order of those layers does not affect the front-to-back order of the beautification effect sticker, foreground image and background image.
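The back-to-front layer order (background first, then foreground, then the sticker frontmost) amounts to repeated alpha compositing. A minimal numpy sketch follows; the mask representation, shapes and names are assumptions made for illustration, not prescribed by the patent:

```python
import numpy as np

def fuse_layers(background, foreground, sticker, fg_mask, sticker_mask):
    """Composite back-to-front: background, foreground over it, and the
    beautification effect sticker frontmost, per the layer order in the text.
    Masks are floats in [0, 1], broadcastable over the image arrays."""
    frame = background.astype(float)
    frame = fg_mask * foreground + (1.0 - fg_mask) * frame
    frame = sticker_mask * sticker + (1.0 - sticker_mask) * frame
    return np.clip(frame, 0, 255).astype(np.uint8)
```

Swapping `background` for a preset background array gives the preset-background variant of the fusion without changing the compositing itself.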
Step S204: perform tone processing, lighting processing and/or brightness processing on the frame-processed image.
Because the frame-processed image contains the beautification effect sticker, image processing may be applied to it to make its display effect more natural and realistic. The image processing may include tone processing, lighting processing, brightness processing and the like on the frame-processed image. For example, raising the brightness of the frame-processed image makes the character's face appear fairer and the overall effect more natural and attractive.
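The brightness-raising step could take the following simple form — one possible implementation assumed for illustration, not specified by the patent:

```python
import numpy as np

def raise_brightness(frame, offset):
    """Lift every channel by `offset`, clipping to the valid 8-bit range
    so the lift cannot overflow and produce artefacts."""
    return np.clip(frame.astype(int) + offset, 0, 255).astype(np.uint8)
```

Tone and lighting processing would be analogous per-pixel adjustments applied to the same frame-processed image.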
Step S205: cover the current frame image with the frame-processed image to obtain processed video data.
The frame-processed image directly overwrites the original current frame image, so the processed video data is obtained directly. Meanwhile, the recording user can immediately see the frame-processed image.
Step S206: display the processed video data.
After the processed video data has been obtained, it can be displayed in real time, so the user can directly see the display effect of the processed video data.
Step S207: upload the processed video data to a cloud server.
The processed video data may be uploaded directly to a cloud server. Specifically, it may be uploaded to one or more cloud video platform servers, such as those of iQIYI, Youku or Kuai Video, so that the cloud video platform server displays the video data on the cloud video platform. Alternatively, the processed video data may be uploaded to a cloud live-broadcast server: when a user at a live viewing end enters the cloud live-broadcast server to watch, the cloud live-broadcast server pushes the video data in real time to the viewing user's client. Alternatively, the processed video data may be uploaded to a cloud public-account server: when a user follows the public account, the cloud public-account server pushes the video data to the clients of the public account's followers; further, the cloud public-account server may push video data matching a follower's viewing habits to that follower's client.
According to the video character dressing-up method provided by this embodiment, a deep learning method is employed to complete scene segmentation processing with high efficiency and high accuracy; a beautification effect can be added to the facial region of the character in the video conveniently and quickly according to the key information extracted from the facial region to be dressed up, which improves video data processing efficiency and beautifies the video display effect. In addition, the effect sticker can be accurately scaled and rotated according to the key information extracted from the facial region to be dressed up, so that it better suits the specific object, further improving the video display effect. The present invention can obtain the processed video data directly, and can also upload it directly to a cloud server, without requiring the user to process the recorded video further, saving the user's time; the processed video data can be displayed to the user in real time, making it convenient to check the display effect. Moreover, no demand is placed on the user's technical skill, which is convenient for public use.
Fig. 3 shows a structural block diagram of a video character dressing-up apparatus according to an embodiment of the present invention. As shown in Fig. 3, the apparatus includes: an acquisition module 310, a segmentation module 320, a sticker processing module 330, a fusion processing module 340, a covering module 350 and a display module 360.
The acquisition module 310 is adapted to: acquire in real time a current frame image containing a specific object in a video captured and/or recorded by an image capture device; or acquire in real time a current frame image containing a specific object in a currently played video.
The segmentation module 320 is adapted to: perform scene segmentation processing on the current frame image to obtain a foreground image for the specific object and the facial region image to be dressed up of the specific object.
The sticker processing module 330 is adapted to: draw the beautification effect sticker corresponding to the facial region image to be dressed up.
The fusion processing module 340 is adapted to: fuse the beautification effect sticker with the foreground image to obtain a frame-processed image.
The covering module 350 is adapted to: cover the current frame image with the frame-processed image to obtain processed video data. The covering module 350 directly overwrites the original current frame image with the frame-processed image, so the processed video data is obtained directly; meanwhile, the recording user can immediately see the frame-processed image.
The display module 360 is adapted to: display the processed video data. After the processed video data has been obtained, the display module 360 can display it in real time, so the user can directly see the display effect of the processed video data.
According to the video character dressing-up apparatus provided by this embodiment, the acquisition module acquires in real time a current frame image containing a specific object in a video captured and/or recorded by an image capture device, or acquires in real time a current frame image containing a specific object in a currently played video; the segmentation module performs scene segmentation processing on the current frame image to obtain a foreground image for the specific object and the facial region image to be dressed up of the specific object; the sticker processing module draws the beautification effect sticker corresponding to the facial region image to be dressed up; the fusion processing module fuses the beautification effect sticker with the foreground image to obtain a frame-processed image; the covering module covers the current frame image with the frame-processed image to obtain processed video data; and the display module displays the processed video data. The present invention employs a deep learning method to complete scene segmentation processing with high efficiency and high accuracy; based on the obtained scene segmentation result, the technical solution provided by the present invention can precisely and quickly add a beautification effect to the facial region of the character in the video, improving video data processing efficiency, optimising the video data processing mode and beautifying the video display effect.
Fig. 4 shows a structural block diagram of a video character dressing-up apparatus according to another embodiment of the present invention. As shown in Fig. 4, the apparatus includes: an acquisition module 410, a segmentation module 420, a sticker processing module 430, a fusion processing module 440, an image processing module 450, a covering module 460, a display module 470 and an uploading module 480.
The acquisition module 410 is adapted to: acquire in real time a current frame image containing a specific object in a video captured and/or recorded by an image capture device; or acquire in real time a current frame image containing a specific object in a currently played video.
The segmentation module 420 is adapted to: perform scene segmentation processing on the current frame image to obtain a foreground image for the specific object and the facial region image to be dressed up of the specific object.
The sticker processing module 430 is adapted to: extract the key information of the facial region to be dressed up from the facial region image to be dressed up, and draw the beautification effect sticker according to the key information.
The key information may specifically be key point information, key region information and/or key line information. The embodiments of the present invention are illustrated taking the case where the key information is key point information as an example. The sticker processing module 430 is further adapted to: extract the key point information of the edge of the facial region to be dressed up from the facial region image to be dressed up.
The sticker processing module 430 is further adapted to: look up the basic beautification effect sticker corresponding to the key point information, or obtain the basic beautification effect sticker corresponding to a user's selection operation; calculate, according to the key point information, the positional information between at least two key points having a symmetric relation; and process the basic beautification effect sticker according to the positional information to obtain the beautification effect sticker.
Optionally, the sticker processing module 430 may scale the basic beautification effect sticker according to the distance information in the positional information, and may also rotate the basic beautification effect sticker according to the rotation angle information in the positional information.
The fusion processing module 440 is adapted to: fuse the background image obtained by performing scene segmentation processing on the current frame image, or a preset background image, with the beautification effect sticker and the foreground image to obtain a frame-processed image.
The image processing module 450 is adapted to: perform tone processing, lighting processing and/or brightness processing on the frame-processed image.
The covering module 460 is adapted to: cover the current frame image with the frame-processed image to obtain processed video data.
The display module 470 is adapted to: display the processed video data. After the processed video data has been obtained, the display module 470 can display it in real time, so the user can directly see the display effect of the processed video data.
The uploading module 480 is adapted to upload the processed video data to a cloud server. The uploading module 480 may upload the processed video data directly to a cloud server; specifically, it may upload the processed video data to one or more cloud video platform servers, such as those of iQIYI, Youku or Kuai Video, so that the cloud video platform server displays the video data on the cloud video platform. Alternatively, the uploading module 480 may upload the processed video data to a cloud live-broadcast server: when a user at a live viewing end enters the cloud live-broadcast server to watch, the cloud live-broadcast server pushes the video data in real time to the viewing user's client. Alternatively, the uploading module 480 may upload the processed video data to a cloud public-account server: when a user follows the public account, the cloud public-account server pushes the video data to the clients of the public account's followers; further, the cloud public-account server may push video data matching a follower's viewing habits to that follower's client.
According to the video character dressing-up apparatus provided by this embodiment, a deep learning method is employed to complete scene segmentation processing with high efficiency and high accuracy; a beautification effect can be added to the facial region of the character in the video conveniently and quickly according to the key information extracted from the facial region to be dressed up, which improves video data processing efficiency and beautifies the video display effect. In addition, the effect sticker can be accurately scaled and rotated according to the key information extracted from the facial region to be dressed up, so that it better suits the specific object, further improving the video display effect. The present invention can obtain the processed video data directly, and can also upload it directly to a cloud server, without requiring the user to process the recorded video further, saving the user's time; the processed video data can be displayed to the user in real time, making it convenient to check the display effect. Moreover, no demand is placed on the user's technical skill, which is convenient for public use.
The present invention also provides a non-volatile computer storage medium storing at least one executable instruction, the executable instruction being capable of executing the video character dressing-up method in any of the above method embodiments.
Fig. 5 shows a schematic structural diagram of a computing device according to an embodiment of the present invention; the specific embodiments of the present invention do not limit the concrete implementation of the computing device.
As shown in Fig. 5, the computing device may include: a processor 502, a communications interface 504, a memory 506 and a communication bus 508.
Wherein:
The processor 502, the communications interface 504 and the memory 506 communicate with one another via the communication bus 508.
The communications interface 504 is used for communicating with network elements of other devices, such as clients or other servers.
The processor 502 is used for executing a program 510, and may specifically perform the relevant steps in the above video character dressing-up method embodiments.
Specifically, the program 510 may include program code, the program code comprising computer operation instructions.
The processor 502 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The one or more processors included in the computing device may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
The memory 506 is used for storing the program 510. The memory 506 may include a high-speed RAM memory, and may also include a non-volatile memory, for example at least one disk memory.
The program 510 may specifically be used to cause the processor 502 to perform the video character dressing-up method in any of the above method embodiments. For the specific implementation of each step in the program 510, reference may be made to the corresponding description of the corresponding steps and units in the above video character dressing-up embodiments, which will not be repeated here. It is apparent to those skilled in the art that, for convenience and brevity of description, reference may be made, for the specific working processes of the devices and modules described above, to the corresponding process descriptions in the foregoing method embodiments, which will not be repeated here.
The algorithms and displays provided herein are not inherently related to any particular computer, virtual system or other device. Various general-purpose systems may also be used with the teaching based hereon. The structure required to construct such a system is apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It should be understood that the content of the invention described herein may be realised using various programming languages, and the description above of a specific language is given to disclose the best mode of carrying out the invention.
In the specification provided here, numerous specific details are set forth. It is to be understood, however, that embodiments of the present invention may be practised without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects, in the description of exemplary embodiments of the invention above, the features of the invention are sometimes grouped together into a single embodiment, figure or description thereof. However, the disclosed method is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the devices in an embodiment may be adaptively changed and arranged in one or more devices different from the embodiment. The modules, units or components in an embodiment may be combined into one module, unit or component, and may additionally be divided into a plurality of sub-modules, sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose.
Furthermore, those skilled in the art will understand that although some embodiments described herein include some features included in other embodiments rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the present invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components according to embodiments of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the method described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the invention, and that those skilled in the art may design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second and third does not indicate any ordering; these words may be interpreted as names.
Claims (10)
- 1. A video character dressing-up method, the method comprising: acquiring in real time a current frame image containing a specific object in a video captured and/or recorded by an image capture device, or acquiring in real time a current frame image containing a specific object in a currently played video; performing scene segmentation processing on the current frame image to obtain a foreground image for the specific object and a facial region image to be dressed up of the specific object; drawing a beautification effect sticker corresponding to the facial region image to be dressed up; fusing the beautification effect sticker with the foreground image to obtain a frame-processed image; covering the current frame image with the frame-processed image to obtain processed video data; and displaying the processed video data.
- 2. The method according to claim 1, wherein drawing the beautification effect sticker corresponding to the facial region image to be dressed up further comprises: extracting key information of the facial region to be dressed up from the facial region image to be dressed up, and drawing the beautification effect sticker according to the key information.
- 3. The method according to claim 1 or 2, wherein extracting the key information of the facial region to be dressed up from the facial region image to be dressed up further comprises: the key information being key point information; and extracting the key point information of the edge of the facial region to be dressed up from the facial region image to be dressed up.
- 4. The method according to any one of claims 1-3, wherein drawing the beautification effect sticker according to the key information further comprises: the key information being key point information; looking up a basic beautification effect sticker corresponding to the key point information, or obtaining a basic beautification effect sticker corresponding to a user's selection operation; calculating, according to the key point information, positional information between at least two key points having a symmetric relation; and processing the basic beautification effect sticker according to the positional information to obtain the beautification effect sticker.
- 5. The method according to any one of claims 1-4, wherein processing the basic beautification effect sticker according to the positional information to obtain the beautification effect sticker further comprises: scaling the basic beautification effect sticker according to distance information in the positional information.
- 6. The method according to any one of claims 1-5, wherein processing the basic beautification effect sticker according to the positional information to obtain the beautification effect sticker further comprises: rotating the basic beautification effect sticker according to rotation angle information in the positional information.
- 7. The method according to any one of claims 1-6, wherein fusing the beautification effect sticker with the foreground image to obtain the frame-processed image further comprises: fusing a background image obtained by performing scene segmentation processing on the current frame image, or a preset background image, with the beautification effect sticker and the foreground image to obtain the frame-processed image.
- 8. A video character dressing-up apparatus, the apparatus comprising: an acquisition module adapted to acquire in real time a current frame image containing a specific object in a video captured and/or recorded by an image capture device, or to acquire in real time a current frame image containing a specific object in a currently played video; a segmentation module adapted to perform scene segmentation processing on the current frame image to obtain a foreground image for the specific object and a facial region image to be dressed up of the specific object; a sticker processing module adapted to draw a beautification effect sticker corresponding to the facial region image to be dressed up; a fusion processing module adapted to fuse the beautification effect sticker with the foreground image to obtain a frame-processed image; a covering module adapted to cover the current frame image with the frame-processed image to obtain processed video data; and a display module adapted to display the processed video data.
- 9. A computing device, comprising: a processor, a memory, a communications interface and a communication bus, the processor, the memory and the communications interface communicating with one another via the communication bus; the memory being used to store at least one executable instruction, the executable instruction causing the processor to perform operations corresponding to the video character dressing-up method according to any one of claims 1-7.
- 10. A computer storage medium having stored therein at least one executable instruction, the executable instruction causing a processor to perform operations corresponding to the video character dressing-up method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711066571.5A CN107820027A (en) | 2017-11-02 | 2017-11-02 | Video personage dresss up method, apparatus, computing device and computer-readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711066571.5A CN107820027A (en) | 2017-11-02 | 2017-11-02 | Video personage dresss up method, apparatus, computing device and computer-readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107820027A true CN107820027A (en) | 2018-03-20 |
Family
ID=61604087
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711066571.5A Pending CN107820027A (en) | 2017-11-02 | 2017-11-02 | Video personage dresss up method, apparatus, computing device and computer-readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107820027A (en) |
2017
- 2017-11-02 CN CN201711066571.5A patent/CN107820027A/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102436668A (en) * | 2011-09-05 | 2012-05-02 | 上海大学 | Automatic Beijing Opera facial-mask make-up method |
CN106791347A (en) * | 2015-11-20 | 2017-05-31 | 比亚迪股份有限公司 | Image processing method and device, and mobile terminal using the method |
CN105847728A (en) * | 2016-04-13 | 2016-08-10 | 腾讯科技(深圳)有限公司 | Information processing method and terminal |
CN106302124A (en) * | 2016-08-18 | 2017-01-04 | 北京奇虎科技有限公司 | Method for adding special effects, and electronic device |
CN107105310A (en) * | 2017-05-05 | 2017-08-29 | 广州盈可视电子科技有限公司 | Character image replacement method and device in live video streaming, and recording/broadcasting system |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108111911A (en) * | 2017-12-25 | 2018-06-01 | 北京奇虎科技有限公司 | Video data real-time processing method and device based on adaptive tracking-frame segmentation |
CN108111911B (en) * | 2017-12-25 | 2020-07-28 | 北京奇虎科技有限公司 | Video data real-time processing method and device based on adaptive tracking-frame segmentation |
CN113658298A (en) * | 2018-05-02 | 2021-11-16 | 北京市商汤科技开发有限公司 | Method and device for generating special-effect program file package and special effect |
CN109147012A (en) * | 2018-09-20 | 2019-01-04 | 麒麟合盛网络技术股份有限公司 | Image processing method and device |
CN109147012B (en) * | 2018-09-20 | 2023-04-14 | 麒麟合盛网络技术股份有限公司 | Image processing method and device |
CN111241886A (en) * | 2018-11-29 | 2020-06-05 | 北京市商汤科技开发有限公司 | Object key point identification method and device, electronic equipment and storage medium |
CN109873902A (en) * | 2018-12-29 | 2019-06-11 | 努比亚技术有限公司 | Broadcast result display method, device and computer-readable storage medium |
CN110060205A (en) * | 2019-05-08 | 2019-07-26 | 北京迈格威科技有限公司 | Image processing method and device, storage medium and electronic equipment |
CN112991147A (en) * | 2019-12-18 | 2021-06-18 | 北京字节跳动网络技术有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
US11651529B2 (en) | 2019-12-18 | 2023-05-16 | Beijing Bytedance Network Technology Co., Ltd. | Image processing method, apparatus, electronic device and computer readable storage medium |
CN112991147B (en) * | 2019-12-18 | 2023-10-27 | 抖音视界有限公司 | Image processing method, device, electronic equipment and computer readable storage medium |
CN113450367A (en) * | 2020-03-24 | 2021-09-28 | 北京字节跳动网络技术有限公司 | Image processing method and device |
CN111626919B (en) * | 2020-05-08 | 2022-11-15 | 北京字节跳动网络技术有限公司 | Image synthesis method and device, electronic equipment and computer readable storage medium |
CN111626919A (en) * | 2020-05-08 | 2020-09-04 | 北京字节跳动网络技术有限公司 | Image synthesis method and device, electronic equipment and computer-readable storage medium |
CN112132085A (en) * | 2020-09-29 | 2020-12-25 | 联想(北京)有限公司 | Image processing method and electronic equipment |
CN114529445A (en) * | 2020-10-30 | 2022-05-24 | 北京字跳网络技术有限公司 | Method and device for rendering dress-up special effects, electronic device and storage medium |
CN114155569A (en) * | 2021-08-31 | 2022-03-08 | 北京新氧科技有限公司 | Cosmetic progress detection method, device, equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107820027A (en) | Video character dress-up method, apparatus, computing device and computer-readable storage medium | |
CN107945188A (en) | Character dress-up method and device based on scene segmentation, and computing device | |
CN107483892A (en) | Video data real-time processing method and device, and computing device | |
CN107507155A (en) | Video segmentation result edge optimization real-time processing method, device and computing device | |
CN107862277A (en) | Live-streaming clothing dress-up recommendation method, apparatus, computing device and storage medium | |
CN107613360A (en) | Video data real-time processing method and device, and computing device | |
CN108109161A (en) | Video data real-time processing method and device based on adaptive-threshold blurring | |
CN108111911A (en) | Video data real-time processing method and device based on adaptive tracking-frame segmentation | |
CN107665482B (en) | Video data real-time processing method and device for realizing double exposure, and computing device | |
CN107977927A (en) | Body-shape adjustment method and device based on image data, and computing device | |
CN112949605A (en) | Face makeup method and system based on semantic segmentation | |
CN107633228A (en) | Video data processing method and device, and computing device | |
CN107682731A (en) | Video data distortion processing method, device, computing device and storage medium | |
CN107743263B (en) | Video data real-time processing method and device, and computing device | |
CN107770606A (en) | Video data distortion processing method, device, computing device and storage medium | |
CN107808372A (en) | Image penetration processing method, apparatus, computing device and computer-readable storage medium | |
CN107610149A (en) | Image segmentation result edge optimization processing method, device and computing device | |
CN107563357A (en) | Live-streaming clothing dress-up recommendation method, apparatus and computing device based on scene segmentation | |
CN107613161A (en) | Video data processing method and device based on a virtual world, and computing device | |
CN107766803A (en) | Video character dress-up method, apparatus and computing device based on scene segmentation | |
CN107566853A (en) | Video data real-time processing method and device for realizing scene rendering, and computing device | |
CN108171716A (en) | Video character dress-up method and device based on adaptive tracking-frame segmentation | |
CN107563962A (en) | Video data real-time processing method and device, and computing device | |
CN107564085A (en) | Image warping processing method, device, computing device and computer-readable storage medium | |
CN107705279A (en) | Image data real-time processing method and device for realizing double exposure, and computing device | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20180320 ||