CN106937154A - Method and device for processing a virtual image - Google Patents
Method and device for processing a virtual image
- Publication number
- CN106937154A CN106937154A CN201710160405.5A CN201710160405A CN106937154A CN 106937154 A CN106937154 A CN 106937154A CN 201710160405 A CN201710160405 A CN 201710160405A CN 106937154 A CN106937154 A CN 106937154A
- Authority
- CN
- China
- Prior art keywords
- video
- virtual image
- initiator
- data
- client
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H04N21/42653 — Internal components of the client for processing graphics
- H04N21/4302 — Content synchronisation processes, e.g. decoder synchronisation
- H04N21/437 — Interfacing the upstream path of the transmission network, e.g. for transmitting client requests to a VOD server
- H04N21/44008 — Processing of video elementary streams, involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
Abstract
The embodiments of the present application provide a method and device for processing a virtual image. The method includes: determining, according to facial muscle data of a first initiator, a first video that includes a first virtual-image video, where the facial muscle data are used to drive the first virtual image to move; and uploading the first video to a server so that viewer clients obtain the first video. The embodiments of the present application can keep the expressions and movements of the first virtual image consistent with those of the first initiator, so that the first virtual image is more lifelike and the user experience is better.
Description
Technical field
The embodiments of the present application relate to the field of computer technology, and in particular to a method and device for processing a virtual image.
Background technology
Live video streaming uses the internet and streaming-media technology to broadcast in real time. Because video combines rich elements such as images, text and sound, it has become a mainstream form of expression on the internet.
To improve the user experience, virtual images (avatars) are often added to a live stream. In the prior art, the skeleton data of the anchor in the video are captured and sent to a server; after a viewer client obtains the anchor's skeleton data from the server, it drives a local virtual image to move according to those data. With this approach, however, the anchor and the virtual image often move out of sync, which makes for a poor user experience.
Therefore, how to keep the anchor and the virtual image in sync and improve the user experience is a technical problem that urgently needs to be solved in the prior art.
Summary of the invention
In view of the above problems, the present application provides a method and device for processing a virtual image that overcome, or at least partly solve, the above problems.
An embodiment of the present application provides a method for processing a virtual image, applied to a first client, including:
determining, according to facial muscle data of a first initiator, a first video that includes a first virtual-image video, where the facial muscle data are used to drive the first virtual image to move;
uploading the first video to a server so that viewer clients obtain the first video.
Optionally, in this embodiment, before the first video including the first virtual-image video is determined according to the facial muscle data of the first initiator, the method further includes:
parsing a first-initiator video using facial recognition technology to obtain the facial muscle data of the first initiator.
Optionally, in this embodiment, the method further includes:
determining, according to bone action data of the first initiator, the first video including the first virtual-image video, where the bone action data are used to drive the first virtual image to move.
Optionally, in this embodiment, determining the first video including the first virtual-image video according to the bone action data of the first initiator includes:
obtaining an action instruction corresponding to the bone action data of the first initiator, where the action instruction is used to perform at least one action;
driving the first virtual image to execute the action instruction, so as to determine the first video including the first virtual-image video.
Optionally, in this embodiment, after the first video including the first virtual-image video is determined according to the facial muscle data of the first initiator, the method further includes:
obtaining first-initiator audio and adding the first-initiator audio to the first video, so that viewer clients receive a first video that includes the first-initiator audio.
Optionally, in this embodiment, the first video further includes a first-initiator video.
Optionally, in this embodiment, uploading the first video to the server so that viewer clients obtain the first video includes: uploading the first video to the server using the Real-Time Messaging Protocol, so that viewer clients obtain the first video in real time.
An embodiment of the present application provides a method for processing a virtual image, applied to a second client, including:
generating, according to facial muscle data of a second initiator who is mic-linked with the first initiator, a second video that includes a second virtual-image video, where the facial muscle data of the second initiator are used to drive the second virtual image to move;
uploading the second video to the server, where the first video and the second video are merged.
An embodiment of the present application provides a method for processing a virtual image, including:
receiving a first video from a first client and a second video from a second client;
merging the first video and the second video by video fusion technology to generate an interactive video, so that viewer clients obtain the interactive video.
An embodiment of the present application provides a device for processing a virtual image, applied to a first client, including:
a first generation module, configured to determine, according to facial muscle data of a first initiator, a first video that includes a first virtual-image video, where the facial muscle data are used to drive the first virtual image to move;
a first upload module, configured to upload the first video to a server so that viewer clients obtain the first video.
Optionally, in this embodiment, the device further includes:
a parsing module, configured to parse a first-initiator video using facial recognition technology to obtain the facial muscle data of the first initiator.
An embodiment of the present application provides a device for processing a virtual image, applied to a second client, including:
a second generation module, configured to generate, according to facial muscle data of a second initiator who is mic-linked with the first initiator, a second video that includes a second virtual-image video, where the facial muscle data of the second initiator are used to drive the second virtual image to move;
a second upload module, configured to upload the second video to the server, where the first video and the second video are merged.
An embodiment of the present application provides a device for processing a virtual image, including:
a receiving module, configured to receive a first video from a first client and a second video from a second client;
a merging module, configured to merge the first video and the second video by video fusion technology to generate an interactive video, so that viewer clients obtain the interactive video.
It can be seen from the above technical solutions that the embodiments of the present application can determine, according to facial muscle data of a first initiator, a first video that includes a first virtual-image video, where the facial muscle data are used to drive the first virtual image to move, and upload the first video to a server so that viewer clients obtain it. The embodiments of the present application can keep the expressions and movements of the first virtual image consistent with those of the first initiator, so that the first virtual image is more lifelike and the user experience is better.
Brief description of the drawings
To explain the technical solutions of the embodiments of the present application or of the prior art more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those of ordinary skill in the art can derive other drawings from them.
Fig. 1 is a flow chart of an embodiment of a method for processing a virtual image of the present application;
Fig. 2 is a flow chart of another embodiment of a method for processing a virtual image of the present application;
Fig. 3 is a flow chart of another embodiment of a method for processing a virtual image of the present application;
Fig. 4 is a flow chart of another embodiment of a method for processing a virtual image of the present application;
Fig. 5 is a flow chart of another embodiment of a method for processing a virtual image of the present application;
Fig. 6 is a flow chart of another embodiment of a method for processing a virtual image of the present application;
Fig. 7 is a flow chart of another embodiment of a method for processing a virtual image of the present application;
Fig. 8 is a structure chart of an embodiment of a device for processing a virtual image of the present application;
Fig. 9 is a structure chart of another embodiment of a device for processing a virtual image of the present application;
Fig. 10 is a structure chart of another embodiment of a device for processing a virtual image of the present application;
Fig. 11 is a hardware structure diagram of an electronic device of the present application that performs the method for processing a virtual image.
Detailed description of the embodiments
According to the embodiments of the present application, a first video including a first virtual-image video can be determined according to facial muscle data of a first initiator, where the facial muscle data are used to drive the first virtual image to move; the first video is then uploaded to a server so that viewer clients obtain it. The embodiments of the present application can keep the expressions and movements of the first virtual image consistent with those of the first initiator, so that the first virtual image is more lifelike and the user experience is better.
Of course, implementing any technical solution of the embodiments of the present application does not necessarily achieve all of the above advantages at the same time.
To help those skilled in the art better understand the technical solutions in the embodiments of the present application, the technical solutions are described below clearly and completely with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application shall fall within the protection scope of the embodiments of the present application.
The implementation of the embodiments of the present application is further explained below with reference to the accompanying drawings.
Referring to Fig. 1, in one embodiment of the present application, the method for processing a virtual image is applied to a first client and includes:
S101: determining, according to facial muscle data of a first initiator, a first video that includes a first virtual-image video, where the facial muscle data are used to drive the first virtual image to move.
Specifically, in this embodiment, the first client may be an electronic device comprising hardware, software, embedded logic components, or a combination of two or more of such components, and capable of mobile communication. For example, the mobile communication device may be a computer, a smartphone, a tablet, a laptop, a netbook, a smart wearable device, or the like.
Specifically, in this embodiment, the first initiator may include at least one first anchor, and each first anchor may correspond to at least one first virtual image.
Specifically, in this embodiment, the facial muscle data may be data obtained by parsing the facial expression movements of the first initiator; facial expression movements include, but are not limited to, smiling, sadness, anger, blinking and yawning.
Specifically, in this embodiment, the first virtual image may be a virtual image with facial features, for example a cartoon character or a human character.
Specifically, in this embodiment, the source file of the first virtual image may be a source file in PMD or MAX format.
Specifically, in this embodiment, the first video may include only the first virtual-image video.
Specifically, in this embodiment, the first video may further include a first-initiator video. For example, video fusion technology may be used to merge the first-initiator video and the first virtual-image video into the first video, so that the actions in the first-initiator video and the first virtual-image video obtained by the viewer client stay synchronised. In addition, merging the first-initiator video and the first virtual-image video into one first video also reduces the amount of data transmitted over the network; for example, merging a 3 MB first-initiator video with a 1 MB first virtual-image video may yield a first video of about 3 MB.
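The merge described above can be pictured with a minimal frame-compositing sketch: the avatar frame is drawn over the initiator frame wherever the avatar pixel is opaque, so both pictures end up in one frame and only one stream has to be transmitted. Frames here are plain lists of RGBA tuples; this is a toy illustration under that assumption, not the embodiment's actual fusion implementation.

```python
def composite(anchor_frame, avatar_frame):
    """Overlay an RGBA avatar frame onto an anchor frame of the same size.

    Each frame is a list of rows; each row is a list of (r, g, b, a) tuples.
    Avatar pixels with a == 0 are transparent and let the anchor show through.
    """
    return [
        [av if av[3] > 0 else bg for bg, av in zip(bg_row, av_row)]
        for bg_row, av_row in zip(anchor_frame, avatar_frame)
    ]

anchor = [[(10, 10, 10, 255)] * 2] * 2              # dark background frame
avatar = [[(0, 0, 0, 0), (200, 50, 50, 255)]] * 2   # avatar in right column
merged = composite(anchor, avatar)
print(merged[0])
```

The merged frame keeps the anchor's pixel on the left and the avatar's opaque pixel on the right, which is why the combined stream is barely larger than the anchor video alone.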
S102: uploading the first video to a server so that viewer clients obtain the first video.
Specifically, in this embodiment, the server may be understood as a group of service programs that can handle specific business logic; the server can receive a network request sent by a client, perform logical processing according to the network request, and return the resulting data to the client.
Specifically, in this embodiment, the Real-Time Messaging Protocol (RTMP) may be used to upload the first video to the server. Of course, this embodiment may also use other protocols capable of real-time data transmission, so that viewer clients obtain the first video in real time; for example, the Real-Time Streaming Protocol (RTSP) or HTTP Live Streaming (HLS).
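As a concrete illustration of the RTMP upload, the following Python sketch builds an ffmpeg command line that pushes a recorded first video to an RTMP ingest point. The file name and ingest URL are placeholders, and the flags are one common choice for live pushing rather than anything specified by the embodiment.

```python
def rtmp_push_command(input_file: str, ingest_url: str) -> list:
    """Build an ffmpeg argument list that pushes a local video file to an
    RTMP server as an FLV-muxed live stream."""
    return [
        "ffmpeg",
        "-re",               # read the input at its native rate (live pacing)
        "-i", input_file,
        "-c:v", "libx264",   # H.264 video, as used elsewhere in this document
        "-c:a", "aac",       # AAC audio
        "-f", "flv",         # RTMP transports FLV-muxed streams
        ingest_url,
    ]

cmd = rtmp_push_command("first_video.mp4", "rtmp://example.com/live/streamkey")
print(" ".join(cmd))
```

Switching to RTSP or HLS delivery would change only the muxer flag and the destination, not the structure of the command.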
In this embodiment, a first video including a first virtual-image video can be determined according to facial muscle data of a first initiator, where the facial muscle data are used to drive the first virtual image to move, and the first video is uploaded to a server so that viewer clients obtain it. This keeps the expressions and movements of the first virtual image consistent with those of the first initiator, so that the first virtual image is more lifelike and the user experience is better.
Referring to Fig. 2, in another embodiment of the present application, the method includes:
S201: parsing a first-initiator video using facial recognition technology to obtain the facial muscle data of the first initiator.
Specifically, in this embodiment, an image capture device may continuously capture images of the first initiator, and the continuous first-initiator images are converted into the first-initiator video. For example, a camera continuously captures images of the first initiator to obtain the first-initiator video.
Specifically, in this embodiment, facial recognition technology (FRT), also known as face recognition, can pre-process the images of the first initiator, extract facial features and classify expressions, so as to obtain the facial muscle data.
Specifically, in this embodiment, deep learning may be used to optimise the facial recognition. Deep learning combines low-level facial features into more abstract high-level attribute categories or features; that is, it can optimise the facial-feature-extraction algorithm to obtain clearer and more accurate expressions. Facial recognition and deep learning belong to the prior art and are not described further here.
S202: determining, according to the facial muscle data of the first initiator, a first video that includes a first virtual-image video, where the facial muscle data are used to drive the first virtual image to move;
S203: uploading the first video to a server so that viewer clients obtain the first video.
Steps S202 and S203 are similar to steps S101 and S102 in the embodiment corresponding to Fig. 1 and are not described again here.
In this embodiment, parsing the first-initiator video using facial recognition technology can improve the accuracy of face recognition.
Referring to Fig. 3, in another embodiment of the present application, the method includes:
S301: determining, according to facial muscle data of a first initiator, a first video that includes a first virtual-image video, where the facial muscle data are used to drive the first virtual image to move;
S302: determining, according to bone action data of the first initiator, the first video including the first virtual-image video, where the bone action data are used to drive the first virtual image to move.
Specifically, in this embodiment, the bone action data may be determined according to preset skeleton feature nodes. The skeleton feature nodes include, but are not limited to: a head node, a neck node, a chest node, a left-hand node, a left-arm node, a right-hand node, a right-arm node, a left-thigh node, a left-lower-leg node, a left-foot node, a right-thigh node, a right-lower-leg node and a right-foot node. The bone action data may be determined according to changes of the skeleton feature nodes.
Specifically, in this embodiment, the bone action data may be data representing the limb actions of the first anchor; limb actions include, but are not limited to, nodding, shaking the head, waving and turning around.
Specifically, in this embodiment, limb recognition technology may be used to obtain the bone action data; for example, the limb recognition may be implemented using the open-source Kinect framework provided by Microsoft.
Specifically, in this embodiment, deep learning may be used to optimise the limb recognition technology. Limb recognition and deep learning belong to the prior art and are not described further here.
Specifically, in this embodiment, action engine data may be used to drive the limb and facial actions of the virtual 3D image; the action engine data may be files in VMD or MMD format, which contain the data that control the motion of the first virtual image. For example, when the bone action data contain a "nod" feature, the first virtual image can be made to load the "nod" action data in a VMD file, so that the first virtual image moves and the first virtual-image video is generated.
Specifically, in this embodiment, the action engine data may be customised by the user according to actual needs.
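One way to picture the driving step is a lookup from detected skeleton features to action-engine clips. The clip file names and the nod-detection rule below are illustrative assumptions standing in for real VMD/MMD files and real skeleton tracking.

```python
# Hypothetical action library: detected feature -> VMD-style clip to load.
ACTION_CLIPS = {
    "nod": "nod.vmd",
    "shake_head": "shake_head.vmd",
    "wave_right_hand": "wave_right_hand.vmd",
}

def detect_nod(head_node_y: list) -> bool:
    """A nod appears as the head node dipping and rising again, so the
    vertical range over a short window exceeds a small threshold."""
    return bool(head_node_y) and max(head_node_y) - min(head_node_y) > 0.05

def clips_for(features: list) -> list:
    """Map detected skeleton features to the action clips to play."""
    return [ACTION_CLIPS[f] for f in features if f in ACTION_CLIPS]

features = ["nod"] if detect_nod([0.50, 0.44, 0.50]) else []
print(clips_for(features))
```

Because the mapping is data-driven, a user can extend or replace the clip library without changing the driving code, which matches the customisation described above.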
Specifically, in this embodiment, the first video may be output as images in an OpenGL surface format or RGB32 format, and the picture and sound components are encoded into H.264 and AAC files respectively. H.264 is a highly compressed digital video codec standard that can compress the first video without reducing its definition; AAC (Advanced Audio Coding) can compress the audio of the first virtual image. This embodiment can reduce the amount of data during network transmission and improve the data transmission rate.
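By way of illustration, the H.264/AAC encoding step could be delegated to ffmpeg with a command like the one built below. The quality settings (CRF value, audio bitrate) and file names are assumed typical values, not values specified by the embodiment.

```python
def encode_command(raw_input: str, output_file: str) -> list:
    """Build an ffmpeg argument list that encodes a video's picture to
    H.264 and its sound to AAC in a single output file."""
    return [
        "ffmpeg",
        "-i", raw_input,
        "-c:v", "libx264",   # H.264 video encoder
        "-crf", "23",        # constant-quality mode, visually near-transparent
        "-c:a", "aac",       # AAC audio encoder
        "-b:a", "128k",
        output_file,
    ]

print(" ".join(encode_command("first_video_raw.avi", "first_video.mp4")))
```

Encoding before upload is what keeps the merged first video close to the size of the raw anchor video despite carrying the avatar as well.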
S303: uploading the first video to a server so that viewer clients obtain the first video.
Steps S301 and S303 are similar to steps S101 and S102 in the embodiment corresponding to Fig. 1 and are not described again here.
This embodiment uses both facial muscle data and bone action data to generate the first virtual-image video, and drives the first virtual image with action engine data such as VMD files, so that the first virtual image can perform richer actions, improving the user experience.
Referring to Fig. 4, in another embodiment of the present application, the method includes:
S401: determining, according to facial muscle data of a first initiator, a first video that includes a first virtual-image video, where the facial muscle data are used to drive the first virtual image to move;
S402: obtaining an action instruction corresponding to the bone action data of the first initiator, where the action instruction is used to perform at least one action.
Specifically, in this embodiment, the action instruction may serve as the basis of the motion of the first virtual image.
Specifically, in this embodiment, one action instruction may correspond to multiple specific actions. For example, when the bone action data contain a "wave the right hand" feature, "wave the right hand" may be taken as the action instruction of the first virtual image; the action instruction corresponds to at least the specific action "wave the right hand", and on this basis the specific actions may also include "nod".
S403: driving the first virtual image to execute the action instruction, so as to determine the first video including the first virtual-image video.
Specifically, in this embodiment, the first virtual image is driven to perform, one by one, the actions corresponding to the action instruction, so that the first virtual image moves, thereby determining the first video including the first virtual-image video.
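The one-instruction-to-several-actions relationship of S402 and S403 can be sketched as a table lookup that expands each action instruction into the ordered list of specific motions the avatar performs. The instruction and motion names are made up for illustration.

```python
# Hypothetical expansion of action instructions into ordered specific motions.
INSTRUCTION_MOTIONS = {
    "wave_right_hand": ["raise_right_arm", "swing_right_hand", "lower_right_arm"],
    "nod": ["tilt_head_down", "tilt_head_up"],
}

def perform(instructions: list) -> list:
    """Drive the avatar by executing each instruction's motions one by one,
    returning the full motion sequence in execution order."""
    executed = []
    for instruction in instructions:
        executed.extend(INSTRUCTION_MOTIONS.get(instruction, []))
    return executed

print(perform(["wave_right_hand", "nod"]))
```

Keeping the expansion in a table means new composite gestures can be added without touching the driving loop.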
S404: uploading the first video to a server so that viewer clients obtain the first video.
Steps S401 and S404 are similar to steps S101 and S102 in the embodiment corresponding to Fig. 1 and are not described again here.
Referring to Fig. 5, in another embodiment of the present application, the method includes:
S501: determining, according to facial muscle data of a first initiator, a first video that includes a first virtual-image video, where the facial muscle data are used to drive the first virtual image to move;
S502: obtaining first-initiator audio and adding the first-initiator audio to the first video.
Specifically, in this embodiment, the audio may be sound stored on the electronic device.
Specifically, in this embodiment, an audio capture device may be used to capture the sound of the first initiator; for example, a microphone captures the sound of the first initiator to obtain the first-initiator audio.
Specifically, in this embodiment, the audio may be a 3D recording, a binaural (dummy-head) recording, etc., which can give viewers a better sense of immersion.
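Adding the initiator's audio track to the first video can likewise be sketched as an ffmpeg invocation: `-c:v copy` leaves the already-encoded picture untouched while the captured audio is encoded to AAC and muxed in. The file names are placeholders.

```python
def mux_audio_command(video_file: str, audio_file: str, output_file: str) -> list:
    """Build an ffmpeg argument list that adds an audio track to a video
    without re-encoding the picture."""
    return [
        "ffmpeg",
        "-i", video_file,
        "-i", audio_file,
        "-c:v", "copy",      # keep the existing video encoding as-is
        "-c:a", "aac",       # encode the captured audio to AAC
        "-shortest",         # stop at the shorter of the two inputs
        output_file,
    ]

print(" ".join(mux_audio_command(
    "first_video.mp4", "initiator_audio.wav", "first_video_with_audio.mp4")))
```

Copying the video stream keeps this step cheap, so the audio can be attached just before upload without delaying the live stream.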
S503: uploading the first video to a server so that viewer clients obtain the first video.
Steps S501 and S503 are similar to steps S101 and S102 in the embodiment corresponding to Fig. 1 and are not described again here.
Referring to Fig. 6, in one embodiment of the present application, the method for processing a virtual image is applied to a second client and includes:
S601: generating, according to facial muscle data of a second initiator who is mic-linked with the first initiator, a second video that includes a second virtual-image video, where the facial muscle data of the second initiator are used to drive the second virtual image to move;
S602: uploading the second video to the server, where the first video and the second video are merged.
Specifically, in this embodiment, the second initiator may be a co-anchor.
Specifically, in this embodiment, when the second initiator is mic-linked with the first anchor, the second virtual image of the second initiator can interact with the first virtual image of the first initiator.
Specifically, in this embodiment, merging the first video and the second video enables the interaction between the first anchor and the second anchor.
Steps S601 and S602 in this embodiment correspond respectively to S101 and S102 in the embodiment corresponding to Fig. 1 and are not described again here.
Referring to Fig. 7, in the application one is implemented, methods described includes:
Second video of S701, the first video for receiving the first client and the second client;
Specifically, in the present embodiment, can be used RTMP agreements that first video is uploaded into service end.Certainly, this reality
Applying example and it is also possible to use other can realize the agreement of real-time data transmission, so that spectator client obtains first video in real time.
For example:The agreement for being capable of achieving real-time data transmission can be RTSP agreements, HLS protocol.
Specifically, in this embodiment, the first video may include a first virtual avatar video, and the second video may include a second virtual avatar video.
S702: merging the first video and the second video by a video fusion technique to generate an interactive video, so that the viewer client obtains the interactive video.
Specifically, in this embodiment, the video fusion may consist of merging the first video with the second video.
Specifically, this embodiment may be applied at the server.
In this embodiment, the first video from the first client and the second video from the second client are received and merged by a video fusion technique to generate an interactive video. This produces the effect of the first virtual avatar interacting with the second virtual avatar and improves the user experience at the viewer client.
Referring to Fig. 8, in one embodiment of the present application, the device includes:
a parsing module 801, configured to parse a first initiator video using a facial recognition technique to obtain the facial muscle data of the first initiator;
a first generation module 802, configured to determine, according to the facial muscle data of the first initiator, a first video that includes a first virtual avatar video, where the facial muscle data is used to drive the first virtual avatar to move; and
a first uploading module 803, configured to upload the first video to the server so that a viewer client obtains the first video.
Specifically, in this embodiment, the parsing module 801 may be used to perform step S201 in the embodiment corresponding to Fig. 2, the first generation module 802 may be used to perform step S202, and the first uploading module 803 may be used to perform step S203; these are not repeated here.
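The parsing module obtains facial muscle data from a facial recognition step, but the data format is not specified in the patent. One plausible sketch derives coarse muscle activations from 2D face landmarks; the landmark names, the chosen activations, and the normalization by inter-ocular distance are all illustrative assumptions:

```python
import math

def muscle_data_from_landmarks(landmarks):
    """Derive coarse facial-muscle activations from 2D face landmarks.

    landmarks: dict of named (x, y) points produced by any face-recognition
    step. Returns activations clamped to [0, 1].
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    scale = dist(landmarks["eye_l"], landmarks["eye_r"])  # normalize by face size
    jaw_open = dist(landmarks["lip_top"], landmarks["lip_bottom"]) / scale
    smile = dist(landmarks["mouth_l"], landmarks["mouth_r"]) / scale - 1.0
    return {
        "jaw_open": max(0.0, min(1.0, jaw_open)),
        "smile": max(0.0, min(1.0, smile)),
    }

# Hypothetical landmark positions for one frame.
pts = {
    "eye_l": (0.0, 0.0), "eye_r": (10.0, 0.0),
    "lip_top": (5.0, 8.0), "lip_bottom": (5.0, 12.0),
    "mouth_l": (-1.0, 10.0), "mouth_r": (11.0, 10.0),
}
activations = muscle_data_from_landmarks(pts)
```

The resulting activation dictionary is the kind of per-frame facial muscle data the generation module could consume to drive the avatar.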
Referring to Fig. 9, in one embodiment of the present application, the device includes:
a second generation module 901, configured to generate, according to facial muscle data of a second initiator who is co-hosting with the first initiator, a second video that includes a second virtual avatar video, where the facial muscle data of the second initiator is used to drive the second virtual avatar to move; and
a second uploading module 902, configured to upload the second video to the server so that the first video and the second video are merged.
Specifically, in this embodiment, the second generation module 901 may be used to perform step S601 in the embodiment corresponding to Fig. 6, and the second uploading module 902 may be used to perform step S602; these are not repeated here.
Referring to Fig. 10, in one embodiment of the present application, the device includes:
a receiving module 1001, configured to receive a first video from a first client and a second video from a second client; and
a merging module 1002, configured to merge the first video and the second video by a video fusion technique to generate an interactive video, which is obtained by the viewer client.
Specifically, in this embodiment, the receiving module 1001 may be used to perform step S701 in the embodiment corresponding to Fig. 7, and the merging module 1002 may be used to perform step S702; these are not repeated here.
Fig. 11 is a hardware architecture diagram of an electronic device that performs the method for processing a virtual avatar according to the present application. As shown in Fig. 11, the device includes:
one or more processors 1101 and a memory 1102; one processor 1101 is taken as an example in Fig. 11.
The device that performs the method for processing a virtual avatar may further include an input device 1103 and an output device 1104.
The processor 1101, the memory 1102, the input device 1103, and the output device 1104 may be connected by a bus or in another manner; connection by a bus is taken as an example in Fig. 11.
As a non-volatile computer-readable storage medium, the memory 1102 may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the method for processing a virtual avatar in the embodiments of the present application. By running the non-volatile software programs, instructions, and modules stored in the memory 1102, the processor 1101 performs the various functional applications and data processing of the server, that is, implements the method for processing a virtual avatar in the above method embodiments.
The memory 1102 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to use of the virtual avatar processing device, and the like. In addition, the memory 1102 may include a high-speed random access memory and may further include a non-volatile memory, for example at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 1102 optionally includes memories remotely located relative to the processor 1101, and these remote memories may be connected over a network to the virtual avatar processing device. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 1103 may receive input numeric or character information and generate key-signal inputs related to user settings and function control of the virtual avatar processing device. The input device 1103 may include components such as a pressing module.
The one or more modules are stored in the memory 1102 and, when executed by the one or more processors 1101, perform the method for processing a virtual avatar in any of the above method embodiments.
The above product can perform the method provided in the embodiments of the present application and possesses the functional modules and beneficial effects corresponding to performing the method. For technical details not described in detail in this embodiment, refer to the method provided in the embodiments of the present application.
The electronic device of the embodiments of the present application exists in many forms, including but not limited to:
(1) Mobile communication devices: such devices are characterized by mobile communication capability, with voice and data communication as the main goal. This type of terminal includes smartphones (such as the iPhone), multimedia phones, feature phones, low-end phones, and the like.
(2) Ultra-mobile personal computer devices: such devices belong to the category of personal computers, have computing and processing functions, and generally also have mobile Internet access. This type of terminal includes PDA, MID, and UMPC devices, such as the iPad.
(3) Portable entertainment devices: such devices can display and play multimedia content. This type of device includes audio and video players (such as the iPod), handheld devices, e-book readers, smart toys, and portable in-vehicle navigation devices.
(4) Servers: devices that provide computing services. A server comprises a processor, a hard disk, memory, a system bus, and the like; its architecture is similar to that of a general-purpose computer, but because highly reliable services must be provided, the requirements on processing capability, stability, reliability, security, scalability, manageability, and the like are higher.
(5) Other electronic devices with data interaction functions.
The device embodiments described above are merely illustrative. The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the solution without creative effort.
Through the above description of the embodiments, those skilled in the art can clearly understand that each implementation can be realized by software plus a necessary general-purpose hardware platform, and certainly also by hardware. Based on this understanding, the above technical solution, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium, where the computer-readable recording medium includes any mechanism that stores or transmits information in a form readable by a machine (e.g., a computer). For example, a machine-readable medium includes read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash media, and electrical, optical, acoustic, or other forms of propagated signals (for example, carrier waves, infrared signals, and digital signals). The computer software product includes instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments or in some parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the embodiments of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some technical features, and that such modifications or replacements do not depart the essence of the corresponding technical solutions from the spirit and scope of the technical solutions of the embodiments of the present application.
Claims (13)
1. A method for processing a virtual avatar, applied to a first client, characterized by comprising:
determining, according to facial muscle data of a first initiator, a first video that includes a first virtual avatar video, wherein the facial muscle data is used to drive the first virtual avatar to move; and
uploading the first video to a server so that a viewer client obtains the first video.
2. The method according to claim 1, characterized in that, before determining, according to the facial muscle data of the first initiator, the first video that includes the first virtual avatar video, the method further comprises:
parsing a first initiator video using a facial recognition technique to obtain the facial muscle data of the first initiator.
3. The method according to claim 1, characterized by further comprising:
determining, according to bone action data of the first initiator, the first video that includes the first virtual avatar video, wherein the bone action data is used to drive the first virtual avatar to move.
4. The method according to claim 3, characterized in that determining, according to the bone action data of the first initiator, the first video that includes the first virtual avatar video comprises:
obtaining an action instruction corresponding to the bone action data of the first initiator, wherein the action instruction is used to perform at least one action; and
driving the first virtual avatar to execute the action instruction, so as to determine the first video that includes the first virtual avatar video.
5. The method according to claim 1, characterized in that, after determining, according to the facial muscle data of the first initiator, the first video that includes the first virtual avatar video, the method further comprises:
obtaining first initiator audio and adding the first initiator audio to the first video, so that the viewer client receives the first video including the first initiator audio.
6. The method according to claim 1, characterized in that the first video further includes a first initiator video.
7. The method according to claim 1, characterized in that uploading the first video to the server so that the viewer client obtains the first video comprises: uploading the first video to the server using the Real-Time Messaging Protocol, so that the viewer client obtains the first video in real time.
8. A method for processing a virtual avatar, applied to a second client, characterized by comprising:
generating, according to facial muscle data of a second initiator who is co-hosting with a first initiator, a second video that includes a second virtual avatar video, wherein the facial muscle data of the second initiator is used to drive the second virtual avatar to move; and
uploading the second video to a server so that a first video and the second video are merged.
9. A method for processing a virtual avatar, characterized by comprising:
receiving a first video from a first client and a second video from a second client; and
merging the first video and the second video by a video fusion technique to generate an interactive video, so that a viewer client obtains the interactive video.
10. A device for processing a virtual avatar, applied to a first client, characterized by comprising:
a first generation module, configured to determine, according to facial muscle data of a first initiator, a first video that includes a first virtual avatar video, wherein the facial muscle data is used to drive the first virtual avatar to move; and
a first uploading module, configured to upload the first video to a server so that a viewer client obtains the first video.
11. The device according to claim 10, characterized by further comprising:
a parsing module, configured to parse a first initiator video using a facial recognition technique to obtain the facial muscle data of the first initiator.
12. A device for processing a virtual avatar, applied to a second client, characterized by comprising:
a second generation module, configured to generate, according to facial muscle data of a second initiator who is co-hosting with a first initiator, a second video that includes a second virtual avatar video, wherein the facial muscle data of the second initiator is used to drive the second virtual avatar to move; and
a second uploading module, configured to upload the second video to a server so that a first video and the second video are merged.
13. A device for processing a virtual avatar, characterized by comprising:
a receiving module, configured to receive a first video from a first client and a second video from a second client; and
a merging module, configured to merge the first video and the second video by a video fusion technique to generate an interactive video, so that a viewer client obtains the interactive video.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710160405.5A CN106937154A (en) | 2017-03-17 | 2017-03-17 | Process the method and device of virtual image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106937154A true CN106937154A (en) | 2017-07-07 |
Family
ID=59432373
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710160405.5A Pending CN106937154A (en) | 2017-03-17 | 2017-03-17 | Process the method and device of virtual image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106937154A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101930284A (en) * | 2009-06-23 | 2010-12-29 | 腾讯科技(深圳)有限公司 | Method, device and system for implementing interaction between video and virtual network scene |
CN102455898A (en) * | 2010-10-29 | 2012-05-16 | 张明 | Cartoon expression based auxiliary entertainment system for video chatting |
CN103368929A (en) * | 2012-04-11 | 2013-10-23 | 腾讯科技(深圳)有限公司 | Video chatting method and system |
CN103797761A (en) * | 2013-08-22 | 2014-05-14 | 华为技术有限公司 | Communication method, client, and terminal |
WO2014200513A1 (en) * | 2013-06-10 | 2014-12-18 | Thomson Licensing | Method and system for evolving an avatar |
CN104581360A (en) * | 2014-12-15 | 2015-04-29 | 乐视致新电子科技(天津)有限公司 | Television terminal and method for playing television programs |
CN105338370A (en) * | 2015-10-28 | 2016-02-17 | 北京七维视觉科技有限公司 | Method and apparatus for synthetizing animations in videos in real time |
CN105488489A (en) * | 2015-12-17 | 2016-04-13 | 掌赢信息科技(上海)有限公司 | Short video message transmitting method, electronic device and system |
CN105959718A (en) * | 2016-06-24 | 2016-09-21 | 乐视控股(北京)有限公司 | Real-time interaction method and device in video live broadcasting |
CN106303555A (en) * | 2016-08-05 | 2017-01-04 | 深圳市豆娱科技有限公司 | A kind of live broadcasting method based on mixed reality, device and system |
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019057194A1 (en) * | 2017-09-25 | 2019-03-28 | 迈吉客科技(北京)有限公司 | Linked microphone-based live streaming method and system |
CN107750014A (en) * | 2017-09-25 | 2018-03-02 | 迈吉客科技(北京)有限公司 | One kind connects wheat live broadcasting method and system |
CN107750014B (en) * | 2017-09-25 | 2020-10-16 | 迈吉客科技(北京)有限公司 | Live wheat-connecting method and system |
CN109874021B (en) * | 2017-12-04 | 2021-05-11 | 腾讯科技(深圳)有限公司 | Live broadcast interaction method, device and system |
CN109874021A (en) * | 2017-12-04 | 2019-06-11 | 腾讯科技(深圳)有限公司 | Living broadcast interactive method, apparatus and system |
CN108174227B (en) * | 2017-12-27 | 2020-12-22 | 广州酷狗计算机科技有限公司 | Virtual article display method and device and storage medium |
CN108174227A (en) * | 2017-12-27 | 2018-06-15 | 广州酷狗计算机科技有限公司 | Display methods, device and the storage medium of virtual objects |
CN108200446A (en) * | 2018-01-12 | 2018-06-22 | 北京蜜枝科技有限公司 | Multimedia interactive system and method on the line of virtual image |
CN108875539A (en) * | 2018-03-09 | 2018-11-23 | 北京旷视科技有限公司 | Expression matching process, device and system and storage medium |
CN109120985A (en) * | 2018-10-11 | 2019-01-01 | 广州虎牙信息科技有限公司 | Image display method, apparatus and storage medium in live streaming |
CN109525483A (en) * | 2018-11-14 | 2019-03-26 | 惠州Tcl移动通信有限公司 | The generation method of mobile terminal and its interactive animation, computer readable storage medium |
CN111641844B (en) * | 2019-03-29 | 2022-08-19 | 广州虎牙信息科技有限公司 | Live broadcast interaction method and device, live broadcast system and electronic equipment |
CN111641844A (en) * | 2019-03-29 | 2020-09-08 | 广州虎牙信息科技有限公司 | Live broadcast interaction method and device, live broadcast system and electronic equipment |
WO2020221186A1 (en) * | 2019-04-30 | 2020-11-05 | 广州虎牙信息科技有限公司 | Virtual image control method, apparatus, electronic device and storage medium |
CN110433491A (en) * | 2019-07-25 | 2019-11-12 | 天脉聚源(杭州)传媒科技有限公司 | Movement sync response method, system, device and the storage medium of virtual spectators |
CN110312144A (en) * | 2019-08-05 | 2019-10-08 | 广州华多网络科技有限公司 | Method, apparatus, terminal and the storage medium being broadcast live |
CN110312144B (en) * | 2019-08-05 | 2022-05-24 | 广州方硅信息技术有限公司 | Live broadcast method, device, terminal and storage medium |
CN110519612A (en) * | 2019-08-26 | 2019-11-29 | 广州华多网络科技有限公司 | Even wheat interactive approach, live broadcast system, electronic equipment and storage medium |
CN110557625A (en) * | 2019-09-17 | 2019-12-10 | 北京达佳互联信息技术有限公司 | live virtual image broadcasting method, terminal, computer equipment and storage medium |
WO2021164620A1 (en) * | 2020-02-17 | 2021-08-26 | 网易(杭州)网络有限公司 | Motion data processing method, apparatus and device, and storage medium |
CN111325819A (en) * | 2020-02-17 | 2020-06-23 | 网易(杭州)网络有限公司 | Motion data processing method, device, equipment and storage medium |
CN111263178A (en) * | 2020-02-20 | 2020-06-09 | 广州虎牙科技有限公司 | Live broadcast method, device, user side and storage medium |
CN111787417A (en) * | 2020-06-23 | 2020-10-16 | 平安普惠企业管理有限公司 | Audio and video transmission control method based on artificial intelligence AI and related equipment |
CN111787417B (en) * | 2020-06-23 | 2024-05-17 | 刘叶 | Audio and video transmission control method based on artificial intelligence AI and related equipment |
CN113965773A (en) * | 2021-11-03 | 2022-01-21 | 广州繁星互娱信息科技有限公司 | Live broadcast display method and device, storage medium and electronic equipment |
CN114245155A (en) * | 2021-11-30 | 2022-03-25 | 北京百度网讯科技有限公司 | Live broadcast method and device and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106937154A (en) | Process the method and device of virtual image | |
WO2021103698A1 (en) | Face swapping method, device, electronic apparatus, and storage medium | |
US10938725B2 (en) | Load balancing multimedia conferencing system, device, and methods | |
EP3901829A1 (en) | Data processing method and device, storage medium, and electronic device | |
US9210372B2 (en) | Communication method and device for video simulation image | |
US9172979B2 (en) | Experience or “sentio” codecs, and methods and systems for improving QoE and encoding based on QoE experiences | |
US20240312212A1 (en) | Real-time video dimensional transformations of video for presentation in mixed reality-based virtual spaces | |
CN103460250A (en) | Object of interest based image processing | |
CN106576158A (en) | Immersive video | |
US20170195617A1 (en) | Image processing method and electronic device | |
CN103647922A (en) | Virtual video call method and terminals | |
CN110969572B (en) | Face changing model training method, face exchange device and electronic equipment | |
WO2024078243A1 (en) | Training method and apparatus for video generation model, and storage medium and computer device | |
WO2020062998A1 (en) | Image processing method, storage medium, and electronic device | |
WO2012021174A2 (en) | EXPERIENCE OR "SENTIO" CODECS, AND METHODS AND SYSTEMS FOR IMPROVING QoE AND ENCODING BASED ON QoE EXPERIENCES | |
CN111464827A (en) | Data processing method and device, computing equipment and storage medium | |
CN112581635A (en) | Universal quick face changing method and device, electronic equipment and storage medium | |
US20230396735A1 (en) | Providing a 3d representation of a transmitting participant in a virtual meeting | |
CN116229311B (en) | Video processing method, device and storage medium | |
US11734952B1 (en) | Facial image data generation using partial frame data and landmark data | |
CN110413109A (en) | Generation method, device, system, electronic equipment and the storage medium of virtual content | |
CN115526772A (en) | Video processing method, device, equipment and storage medium | |
KR102703662B1 (en) | Method and device for generating image based on object | |
CN114513647B (en) | Method and device for transmitting data in three-dimensional virtual scene | |
US20240185469A1 (en) | Coding of displacements using hierarchical coding at subdivision level for vertex mesh (v-mesh) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20170707 |