CN106713988A - Beautifying method and system for virtual scene live - Google Patents
- Publication number
- CN106713988A (application number CN201611131804.0A)
- Authority
- CN
- China
- Prior art keywords
- virtual scene
- image
- live
- data
- audio
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/475—End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
- H04N21/23412—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs for generating or manipulating the scene composition of objects, e.g. MPEG-4 objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/235—Processing of additional data, e.g. scrambling of additional data or processing content descriptors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/435—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
- H04N21/44012—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/4788—Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention relates to the field of multimedia data processing and discloses a beautifying method and system for virtual-scene live streaming. The method comprises the steps of: acquiring data from a camera device in real time to obtain first image data; converting the first image data into a texture image and updating the texture image into a virtual scene; extracting a first object from the virtual scene by means of a GPU; and beautifying the first object and rendering the virtual scene to obtain a rendered image. According to this technical scheme, the camera data are converted into a texture image, the texture image is updated into the virtual scene, and at the rendering stage of the virtual scene the first object is extracted by the GPU and beautified to obtain the rendered image. Because rendering and beautification are carried out in parallel, the delay introduced by beautification in virtual-scene live streaming is greatly reduced.
Description
Technical field
The present invention relates to the field of multimedia data processing, and more particularly to techniques for processing virtual-scene live-streaming data.
Background technology
Owing to falling equipment costs and the growing maturity of live-streaming technology, live entertainment performances delivered over the network have gradually become well established and popular. Virtual-scene synthesis is a multimedia data-processing technique applied in television studio recording and broadcasting and in film production, for example in weather-forecast programs.
In the prior art, virtual-scene synthesis generally extracts the person from the solid-color background captured by a camera device, superimposes the extracted person on a rendered virtual-scene background, and then outputs the synthesized image data for broadcasting or recording.
Existing virtual-scene technology cannot provide high-quality interaction between the broadcaster and the audience. Specifically, in the field of network live streaming, existing platforms allow viewers only to watch the picture from the broadcaster's camera; viewers can give virtual gifts to the broadcaster, but such gifts can only be crudely superimposed on the existing scene. Likewise, existing music-video production is usually recorded after the director communicates with the performer; the recording process lacks interest and the results are monotonous. Furthermore, in existing live-streaming techniques, in order for a client to see the interactions between other clients and the broadcaster, interaction information and materials must be sent by the client to a cloud server, which notifies all online clients to download the materials from a specified location and superimpose them on the live picture. Clients therefore have to download the specified materials, which is inefficient and wastes bandwidth; each client must also store the interaction materials locally, which consumes client storage space and makes interaction content difficult to extend in a timely manner.
In addition, because the broadcaster is captured by the camera device at high resolution with lifelike imaging, facial flaws are presented directly on camera, so beautification of the broadcaster is generally required before going live. Existing beautification processing performs computation on every pixel of the image; the computation volume is large, the latency is high, and a great deal of CPU resources are consumed, so it cannot be applied to live streaming with strict real-time requirements. Moreover, some existing live-data processing schemes apply beautification directly to the video stream, which further increases the live-streaming delay.
The inventors therefore consider it necessary to develop a technique that reduces the live-streaming delay caused by beautification processing in network virtual-scene live streaming.
Summary of the invention
To this end, it is necessary to provide a technique for fast beautification processing of virtual-scene live-streaming data, so as to reduce the delay of beautification processing in network virtual-scene live streaming.
To achieve the above object, the inventors provide a method for performing beautification processing on a virtual-scene live stream, comprising the following steps:
acquiring data from a camera device in real time to obtain first image data;
converting the first image data into a texture image, and updating the texture image into a virtual scene;
identifying a first object in the texture image by means of a GPU, making the remainder of the texture image other than the first object transparent, performing beautification processing on the first object, and then rendering the first object and the virtual scene after beautification to obtain a rendered image.
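The steps above can be sketched end to end in a few lines. This is a minimal illustration using plain Python lists as "images"; all function names, the backdrop color, and the stand-in beautification step are assumptions for illustration, not details from the patent.

```python
# Minimal sketch of the claimed pipeline: camera frame -> texture ->
# key out the first object -> beautify it -> composite over the scene.
GREEN = (0, 255, 0)  # assumed solid backdrop color behind the broadcaster

def to_texture(frame):
    """Step 2: treat the decoded camera frame as a texture (here: a copy)."""
    return list(frame)

def extract_first_object(texture, backdrop=GREEN):
    """Step 3a: per-pixel keying -- backdrop pixels get alpha 0, others 1."""
    return [(r, g, b, 0 if (r, g, b) == backdrop else 1) for (r, g, b) in texture]

def beautify(rgba):
    """Step 3b: stand-in 'beautification' -- lighten retained pixels slightly."""
    return [(min(r + 10, 255), min(g + 10, 255), min(b + 10, 255), a)
            for (r, g, b, a) in rgba]

def render(rgba, scene):
    """Step 3c: composite the keyed object over the virtual-scene pixels."""
    return [(r, g, b) if a else bg for (r, g, b, a), bg in zip(rgba, scene)]

frame = [(200, 180, 170), GREEN, (190, 170, 160)]   # broadcaster pixels + backdrop
scene = [(5, 5, 40)] * 3                            # virtual-scene background
image = render(beautify(extract_first_object(to_texture(frame))), scene)
print(image)
```

In the patent's scheme these per-pixel steps run on the GPU in the rendering stage; the sketch only shows the data flow.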
Further, after the rendered image is obtained, the method also comprises the steps of:
performing a composite-image encoding operation on the rendered image to obtain first video data;
while the camera data are being acquired, acquiring audio data from an audio device in real time, and packaging the audio data and the first video data into audio-video data.
Further, after the audio-video data are obtained, the method also comprises: broadcasting the audio-video data to online clients in a local area network via a real-time streaming protocol; or sending the audio-video data to a third-party web server via a real-time streaming protocol, whereupon the third-party web server generates an internet live link for the audio-video data.
Further, the method also comprises receiving an interaction instruction sent by a client, and updating or switching the virtual scene according to the interaction instruction.
Further, performing the composite-image encoding operation on the rendered image comprises the step of: capturing the rendered image with a hardware encoding module, and compression-encoding the captured rendered image.
Further, performing beautification processing on the first object comprises the step of: using a shader to apply skin smoothing, blemish removal, and highlight processing to the first object.
Further, the virtual scene is a 3D virtual-reality scene or a 3D video scene.
To achieve the above object, the inventors also provide another technical scheme: a system for performing beautification processing on a virtual-scene live stream, comprising:
a resolution unit for acquiring data from a camera device in real time to obtain first image data;
an updating unit for converting the first image data into a texture image and updating the texture image into a virtual scene; and
a rendering unit for identifying a first object in the texture image by means of a GPU, making the remainder of the texture image other than the first object transparent, performing beautification processing on the first object, and then rendering the first object and the virtual scene after beautification to obtain a rendered image.
Further, the system for performing beautification processing on a virtual-scene live stream also comprises:
a composite-image encoding unit for performing a composite-image encoding operation on the rendered image to obtain first video data; and
a packaging unit for acquiring audio data from an audio device in real time while the camera data are being acquired, and packaging the audio data and the first video data into audio-video data.
Further, the system also comprises a transmission unit for broadcasting the audio-video data to online clients in a local area network via a real-time streaming protocol, or for sending the audio-video data to a third-party web server via a real-time streaming protocol, whereupon the third-party web server generates an internet live link for the audio-video data.
Further, the system also comprises an interaction unit for receiving an interaction instruction sent by a client, and updating or switching the virtual scene according to the interaction instruction.
Further, the composite-image encoding unit performs the composite-image encoding operation on the rendered image by capturing the rendered image with a hardware encoding module and compression-encoding the captured rendered image.
Different from the prior art — in which beautification performs computation on every pixel of the live video, or is applied directly to the live video stream, with a large computation volume and long delay — the above technical scheme first converts the camera data into a texture image and updates the texture image into the virtual scene; then, in the rendering stage of the virtual scene, the GPU identifies the first object and performs beautification processing on it to obtain the rendered image. The GPU carries out image rendering in parallel with the identification and beautification of the first object: by placing identification and beautification in the image-rendering stage and executing them in parallel on the GPU, the technical scheme greatly reduces the delay of beautification processing in virtual-scene live streaming.
Brief description of the drawings
Fig. 1 is a flow chart of the method for performing beautification processing on a virtual-scene live stream according to a specific embodiment;
Fig. 2 is a flow chart of the network live-streaming operations carried out after the rendered image is obtained, according to a specific embodiment;
Fig. 3 is a block diagram of the system for performing beautification processing on a virtual-scene live stream according to a specific embodiment;
Fig. 4 is a block diagram of the system for performing beautification processing on a virtual-scene live stream according to a specific embodiment;
Fig. 5 is a schematic diagram of the network virtual-scene live-streaming system according to a specific embodiment.
Description of reference numerals:
10. Resolution unit
20. Updating unit
30. Rendering unit
40. Composite-image encoding unit
50. Packaging unit
501. Camera device
502. Microphone
503. Video decoder
504. Unity end
505. Video record processor
506. RTSP server
507. Client
508. Third-party server
Specific embodiment
To describe the technical content, structural features, objects, and effects of the technical scheme in detail, explanations are given below in conjunction with specific embodiments and the accompanying drawings.
Referring to Fig. 1, the method of this embodiment for performing beautification processing on a virtual-scene live stream can be applied to live video programs in which a real person is combined with a virtual scene over a network, and comprises the following steps:
S101: acquiring data from the camera device in real time to obtain first image data. Specifically, the camera data are decoded, and each time a fully decoded frame of image data is obtained, the image data are placed in a data buffer. In some embodiments, the real-time stream from the camera device is read, decoded, and updated into the buffer in real time using a combination of FFmpeg and MediaCodec; MediaCodec is the class that provides hardware encoding and decoding on the Android platform, and this combination markedly improves the speed of stream encoding and decoding.
S102: converting the first image data into a texture image, and updating the texture image into the virtual scene. Here, a texture image is an image with texture. Texture is a computer-graphics term for a visual feature reflecting homogeneous phenomena in an image; it embodies the slowly varying or periodically varying arrangement of an object's surface structure. A texture, also called a texture map, is one or more two-dimensional images attached to an object's surface to represent surface detail. Texture has three main characteristics: some local pattern is continuously repeated; the arrangement is non-random; and the textured region is a substantially uniform whole. Texture differs from image features such as grayscale and color in that it is expressed by the gray-level distribution of a pixel and its surrounding spatial neighborhood, i.e., local texture information; the varying degrees of repetition of local texture information constitute global texture information.
In an embodiment, updating the texture image into the virtual scene may mean presenting the texture image, in the form of a texture, on a plane of the virtual scene. The virtual scene includes a computer-simulated virtual-reality scene, a video scene of real footage, and the like. Further, embodiments can be combined with newly developed 3D imaging technology to provide the virtual scene, such as a 3D virtual-reality scene or a 3D video scene.
3D virtual-reality technology is a computer simulation system that can create, and let users experience, a virtual world. It uses a computer to generate a 3D simulation of a real scene, and is a system simulation of multi-source information fusion, interactive three-dimensional dynamic views, and entity behavior. A virtual scene may include any scene present in real life — anything perceivable through vision, hearing, or other senses — simulated by computer technology. One application of 3D virtual-reality scenes is the 3D virtual stage, which uses computer technology to simulate a real stage and achieve stage effects with strong stereoscopic impression and realism. With a 3D virtual stage, a broadcaster who is not actually on a stage can be shown performing on all kinds of stage settings.
A 3D video is filmed by simulating the parallax of the left and right eyes with two cameras, shooting two films that are then projected onto the screen simultaneously, such that during projection the viewer's left eye sees only the left-eye image and the right eye sees only the right-eye image. After the brain superimposes the two images, a picture with stereoscopic depth is perceived — this is a 3D video.
S103: identifying a first object in the texture image by means of the GPU, making the remainder of the texture image other than the first object transparent, performing beautification processing on the first object, and then rendering the first object and the virtual scene after beautification to obtain a rendered image. Here, the first object is an object in the image captured by the camera device. In different embodiments, as needed, the first object can be a different specific object — for example a real-person broadcaster, or a pet animal — and the number of first objects can be one, or two or more. According to these different practical requirements, different algorithms and settings can be used to effectively extract the first object from the virtual scene. Since the first object lies within the first image data, extracting the first object from the virtual scene is in fact extracting it from the first image data. A specific algorithm embodiment for extracting the first object is described below.
In a certain embodiment, the first object in the first image data is a human broadcaster, and the background behind the broadcaster is a solid color. The specific steps for identifying the first object in the virtual scene are: the GPU compares the color value of each pixel in the texture image with a preset threshold; if the color value of a pixel falls within the preset threshold, the pixel does not belong to the first object, and its alpha channel is set to zero so that the background is displayed as transparent; if the color value of a pixel falls outside the preset threshold, the pixel belongs to the first object, and its alpha channel is set to 1, i.e., the pixel is retained, thereby identifying and extracting the object.
Because the background is a solid color, this embodiment performs matting using the chroma-key method. The preset threshold is the color value of the background color; for example, if the background color is green, the preset threshold for the pixel RGB color values may be (0±10, 255−10, 0±10). The background color can be green or blue, and backdrops of both colors can be set up at the shooting location for the broadcaster to choose from: when the broadcaster wears clothes that contrast strongly with green, the green backdrop can be selected. During object (portrait) extraction, because the broadcaster's clothes differ greatly in hue from the background, after the color value of each pixel is compared with the preset threshold, the color values of background pixels fall within the threshold and their alpha channels are set to zero, so the background is displayed as transparent; the pixels of the portrait fall outside the threshold and are retained, thereby making the background portion of the texture image transparent.
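A hard chroma key of this kind reduces to a per-pixel band test. The sketch below uses the (0±10, 255−10, 0±10) green example from the text as the tolerance band (R ≤ 10, G ≥ 245, B ≤ 10); the exact cut-offs and function names are illustrative assumptions.

```python
# Hard chroma keying against a green backdrop: a pixel inside the tolerance
# band is background (alpha 0); any other pixel is foreground (alpha 1).
def is_backdrop(r, g, b):
    return r <= 10 and g >= 245 and b <= 10

def key_alpha(pixel):
    """Return 0 (transparent) for backdrop pixels, 1 (opaque) otherwise."""
    return 0 if is_backdrop(*pixel) else 1

pixels = [(3, 250, 8), (180, 140, 120), (0, 255, 0)]
print([key_alpha(p) for p in pixels])  # -> [0, 1, 0]
```

On the GPU this same test would run per fragment in a shader; the Python version only illustrates the decision rule.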
In object (portrait) extraction, a threshold interval can also be set: when the color value of a pixel is greater than the upper limit, the pixel's alpha channel is set to zero and the background is displayed as transparent; when the color value is less than the lower limit, the alpha channel is set to 1, i.e., the portrait is retained; when the color value lies between the upper and lower limits, the pixel is considered an intermediate color (half-tones at the image edges), and its alpha channel is set to a value between 0 and 1 using a linear function. This treatment of intermediate colors helps the extracted portrait blend more naturally with the virtual scene.
Beautification processing means processing a selected object to improve its appearance. In different embodiments, as needed, beautification of the first object may include skin smoothing, facial blemish removal, and highlight processing. In the invention, performing beautification via a shader allows the GPU both to identify the first object and to carry out the beautification. Taking a person as the beautification object, the shader program first identifies the skin region of the person; skin identification can exploit the fact that human skin presents a Gaussian distribution in the YCbCr color space to segment skin from non-skin regions. Here, a shader is a program executed by the GPU that operates on 3D objects.
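Skin detection in YCbCr can be sketched with the standard BT.601 conversion plus chromaticity bands. The Cb/Cr ranges below (77–127 and 133–173) are commonly used skin bands in the literature, not values taken from the patent, and are assumptions here; a Gaussian skin model, as the text mentions, would replace the box test with a probability threshold.

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """Classify a pixel as skin if its Cb and Cr fall inside the given bands."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]

print(is_skin(220, 170, 140))  # a skin-like tone
print(is_skin(0, 255, 0))      # the green backdrop
```

Working in Cb/Cr (rather than RGB) makes the test largely independent of brightness, which is why the luma channel Y is discarded.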
Once the skin is identified, skin smoothing can be carried out. The smoothing uses a bilateral filtering algorithm, which smooths the skin while avoiding blunted boundaries of the smoothed skin regions. After smoothing, highlight processing can be performed: the highlighting applies gain correction to all pixels using a LOG curve, avoiding overly abrupt image changes.
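Both steps have compact one-dimensional illustrations. The bilateral filter weights each neighbour by spatial distance and intensity difference, so flat skin regions are smoothed while strong edges survive; the log curve lifts mid and low intensities gently. The sigma values and the exact gain formula are illustrative assumptions, not parameters from the patent.

```python
import math

def bilateral_1d(signal, radius=2, sigma_s=1.5, sigma_r=12.0):
    """Edge-preserving smoothing of a 1-D intensity signal (bilateral filter)."""
    out = []
    for i, center in enumerate(signal):
        wsum = vsum = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            # spatial weight * range (intensity-difference) weight
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)) *
                 math.exp(-((signal[j] - center) ** 2) / (2 * sigma_r ** 2)))
            wsum += w
            vsum += w * signal[j]
        out.append(vsum / wsum)
    return out

def log_gain(v, strength=1.0):
    """Log-curve gain for v in 0..255: lifts mid/low values, keeps 0 and 255 fixed."""
    return 255 * math.log1p(strength * v / 255) / math.log1p(strength)

smoothed = bilateral_1d([10, 10, 10, 200, 200, 200])
print(smoothed[0], smoothed[3])  # the 10/200 edge is preserved, not blurred
```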
Image rendering is the last step of visual design carried out with computer technology (apart from post-production); it is also the stage in which the designed image is presented with 3D visual effect.
In the embodiment, the camera data are converted into a texture image, the texture image is updated into the virtual scene, and then, in the rendering stage of the virtual scene, the identification and beautification of the first object are executed in parallel by the GPU during image rendering. This consumes no CPU time and improves system speed. Moreover, because the GPU is dedicated hardware for image processing, its computation time is the same for pixels of different sizes — for example 8-bit, 16-bit, or 32-bit pixels — which greatly reduces per-pixel computation time, whereas an ordinary CPU's processing time grows with pixel size. The present embodiment therefore greatly improves portrait-extraction speed. These distinctions also allow the present embodiment to be realized on embedded devices equipped with a GPU: even if the CPU of an embedded device is weak, the scheme of this embodiment still achieves smooth display. By contrast, if the CPU were used to extract the first object from the first image data, it would have to read the video acquired by the camera device and perform matting and related processing; the CPU load would then be too heavy for smooth display. In this embodiment, when applied to embedded devices, the matting is placed on the GPU, which relieves the CPU without affecting the operation of the GPU.
Referring to Fig. 2, in some embodiments the following steps are also included after the rendered image is obtained:
S201: performing a composite-image encoding operation on the rendered image to obtain first video data;
S202: packaging the first video data and audio data into audio-video data, wherein the audio data are acquired in real time by an audio device while the camera data are being acquired.
Here, performing the composite-image encoding operation on the rendered image comprises the step of: capturing the rendered image with the hardware encoding module provided by platforms such as Android, iOS, or Windows, and compression-encoding the captured rendered image.
When the audio data are acquired, the audio device obtains the audio data and encodes them into an audio format suitable for network transmission. In some live-streaming applications, the audio data are mainly a mix of the broadcaster's singing and the song accompaniment.
In the above embodiments, the composite-image encoding operation, the audio-data acquisition, and the packaging operation are each completed by a different device, so as to ensure the speed and real-time performance of data processing. In some embodiments, these operations can also be completed on the same server.
The audio-video data obtained by packaging can be used for network live streaming, display, or storage. For network live streaming, the packaged audio-video data are transmitted to a real-time streaming server (i.e., an RTSP server), which broadcasts them within a local area network or over the internet.
For broadcasting within a local area network, the real-time streaming server detects whether a client is connected to the server and whether there is a play request; when a connected client is detected and a play request is received, the audio-video data are sent to the online clients in the local area network via a real-time streaming protocol. The client can be any player that supports RTSP, such as a PC, tablet computer, or smartphone. After receiving the audio-video data transmitted by the real-time streaming server, the client decodes and plays them; the content played is the rendered picture in which the live object is combined with the virtual scene, and after the audio data in the audio-video data are decoded, the singer's voice and the accompaniment are played through the loudspeaker.
For internet broadcasting, the real-time streaming server sends the audio-video data to a third-party web server via a real-time streaming protocol, and the third-party web server generates a live link for the audio-video data. By clicking the live link, a client can obtain the real-time stream of the audio-video data and play it after decoding.
In some embodiments, a client can also send interaction instructions over the computer network to interact with the broadcaster through the virtual scene. When the broadcaster's end receives an interaction instruction sent by a client, the virtual scene can be updated or switched according to the instruction, and the new virtual scene is rendered together with the first object in the rendering stage. The computer network can be the internet or a local area network, connected via a wired network, a WiFi network, a 3G/4G mobile communication network, a Bluetooth network, a ZigBee network, or the like.
In different interactive virtual-scene embodiments, the interaction instruction can include different contents. In some embodiments, the interaction instruction includes a command to update a first material into the virtual scene. Specifically, while the first object is being updated into the virtual scene in real time, the first material is also updated into the virtual scene according to the interaction instruction, and the first object after beautification and the virtual scene are rendered to obtain the rendered image.
The first material can be picture material, sound material, or a combination of the two. Taking network live streaming as an example, the first material includes virtual gifts, likes, background sounds, cheers, and the like. A viewer of the live stream can use a mobile phone to send the broadcaster an interaction instruction for a virtual gift such as flowers, and the gift is presented in the virtual scene in the form of a flower picture. A viewer can also send an applause interaction instruction to the broadcaster by mobile phone, and the instruction is played back in the form of applause.
These first materials can be preset by the system for users to choose from. In some embodiments, besides the command to update the first material into the virtual scene, the interaction instruction also carries the content data of the first material. For example, a viewer uploads, from a mobile terminal, an interaction instruction for giving a virtual gift that additionally contains a picture of the gift; upon receiving the instruction, the system updates that picture into the virtual scene. A viewer can therefore not only choose an interaction mode when sending an interaction instruction, but also customize the content data of the first material to personal taste: a favorite picture material, a sound material, or a material combining picture with sound.
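As a concrete illustration of an instruction that carries both the update command and the material's own content data, the message could be modeled as below. This is a sketch, not the patent's wire format; all field names (`command`, `material_type`, `preset_id`, `content_data`) are hypothetical:

```python
# Hypothetical model of an interaction instruction that carries both the
# "update first material into the virtual scene" command and, optionally,
# the viewer-customized content data (e.g. an uploaded gift picture).
from dataclasses import dataclass
from typing import Optional

@dataclass
class InteractionInstruction:
    command: str                          # e.g. "update_material"
    material_type: str                    # "picture", "sound", or "picture+sound"
    preset_id: Optional[str] = None       # a system-preset material, if chosen
    content_data: Optional[bytes] = None  # viewer-supplied material payload

def apply_to_scene(scene: dict, instr: InteractionInstruction) -> dict:
    """Merge the instruction's material into the virtual-scene state,
    preferring viewer-supplied content data over a preset id."""
    if instr.command == "update_material":
        material = instr.content_data or instr.preset_id
        scene.setdefault("materials", []).append((instr.material_type, material))
    return scene

scene = {"objects": ["streamer"]}
gift = InteractionInstruction("update_material", "picture", content_data=b"<png bytes>")
apply_to_scene(scene, gift)
print(scene["materials"])  # [('picture', b'<png bytes>')]
```

Because the merge happens on the streamer side, every viewer sees the same composited scene without downloading the material themselves, matching the design described above.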
In some embodiments, the interaction instruction also includes commands to transform the virtual-scene camera: switching the camera's viewing angle, changing the camera's focal length, and applying local blur to the virtual scene. Switching the viewing angle simulates watching the virtual scene from different viewpoints; changing the focal length zooms the scene picture in or out; and local blur makes the unblurred part of the scene picture stand out. These camera-transform commands greatly improve viewer interactivity and engagement.
In the interactive embodiments of the virtual scene above, the virtual scene can be updated or switched according to interaction instructions sent from clients, so that the obtained rendered image contains rich scene-change effects while preserving the real-time motion of the first object. In the technical scheme above, a viewer sends an interaction instruction through the client, and the streamer side updates the interactive content and the first object into the virtual scene, fusing the interactive content, the streamer object, and the virtual scene together. Every first terminal therefore sees the interaction effect; and because the interactive content is fused into the virtual scene on the streamer side, no viewer client has to download interactive material from the server, which makes interactive content easy to extend.
Referring to Fig. 3, another embodiment is a system for beautification processing in virtual-scene live streaming. The system can be used to broadcast, over a network, live video programs that combine a real person with a virtual scene, and includes:
Parsing unit 10, which acquires camera data in real time to obtain first image data. It decodes the camera data and, each time a complete frame of image data has been decoded, places the frame into a data buffer. In some embodiments the camera's real-time stream is read, decoded, and continuously updated into the buffer using FFmpeg together with MediaCodec, the classes that provide hardware encoding and decoding on the Android platform, which markedly improves stream codec speed.
Updating unit 20, which converts the first image data into a texture image and updates the texture image into the virtual scene. In an embodiment, updating the texture image into the virtual scene means applying it, as a texture, to a plane in the virtual scene. Further, embodiments may combine recently developed 3D rendering techniques to provide the virtual scene, such as a 3D virtual-reality scene or a 3D video scene.
Rendering unit 30, which identifies the first object in the texture image by means of the GPU, makes the part of the texture image other than the first object transparent, performs beautification processing on the first object, and then renders the beautified first object together with the virtual scene to obtain the rendered image. The first object is an object in the image captured by the camera; in different embodiments it can be, as needed, a specific object such as a live human streamer or a pet, and there may be a single first object or two or more. The beautification processing of the first object can include skin smoothing, facial blemish removal, and highlight correction.
In this embodiment, the camera data is converted into a texture image and updated into the virtual scene, and the extraction and beautification of the first object are both executed in parallel by the GPU during the scene's rendering stage. Because the GPU is hardware specialized for image processing, its per-pixel computation time is the same regardless of image size; it not only leaves the CPU free but also greatly reduces per-pixel processing time, effectively cutting the latency of beautification processing in virtual-scene live streaming.
Referring to Fig. 4, in some embodiments the system for beautification processing in virtual-scene live streaming further includes:
Composite-image coding unit 40, which performs a composite-image encoding operation on the rendered image to obtain first video data; and
Encapsulation unit 50, which, while the camera data is being acquired, obtains the microphone data in real time and packages the audio-device data together with the first video data into audio-video data.
The composite-image coding unit uses the hardware encoding module of a composite-image encoding server to capture the rendered image and compression-encode the captured rendered image.
When audio data is acquired, the audio device obtains it and encodes it into an audio format suited to network transmission. In some live-streaming applications, the audio data is mainly a mix of the performer's singing and the song's accompaniment.
In the embodiments above, the composite-image encoding, audio acquisition, and encapsulation operations are performed by separate devices, which guarantees data-processing speed and real-time behavior. In some embodiments, these operations can also all be completed on the same server.
The system for beautification processing in virtual-scene live streaming also includes a transmission unit for broadcasting, via a real-time streaming protocol, either within a LAN or over the Internet.
For LAN broadcasting, the real-time streaming server detects whether clients are connected to the server and whether there are play requests. When a connected client is detected and a play request is received, the server sends the audio-video data, over the real-time streaming protocol, to the online clients in the LAN. After receiving the audio-video data transmitted by the real-time streaming server, a client decodes and plays it: the played picture is the live object rendered in combination with the virtual scene, and after the audio data in the audio-video data is decoded, the singer's voice and the accompaniment are played through the loudspeaker.
For Internet broadcasting, the real-time streaming server sends the audio-video data, over the real-time streaming protocol, to a third-party web server, which generates an Internet live link for the audio-video data. By clicking the link, a client obtains the real-time stream of the audio-video data, decodes it, and plays it.
In some embodiments, the system for beautification processing in virtual-scene live streaming also includes an interactive unit, which receives interaction instructions sent by clients and updates or switches the virtual scene according to them. A client can therefore send interaction instructions over a computer network to interact with the streamer through the virtual scene. When the streamer side receives a client's interaction instruction, it can update or switch the virtual scene per the instruction and render the new virtual scene together with the first object in the rendering stage. The computer network can be the Internet or a LAN, connected via a wired network, a WiFi network, a 3G/4G mobile network, a Bluetooth network, or a ZigBee network.
Referring to Fig. 5, a schematic diagram of a network virtual-scene live-broadcast system in one embodiment: the system can apply beautification processing to the live object and broadcast in real time, and includes camera 501, audio device 502, video decoder 503, Unity side 504, recording processor 505, RTSP server 506, third-party server 508, and client 507.
Camera 501 acquires the video data of the live object.
Audio device 502 acquires the audio data of the live object.
Video decoder 503 decodes the video data acquired by camera 501 and, each time a complete frame of image data (in YCrCb format) is obtained, places it into the decoder's buffer; the decoder then converts the decoded image data into a texture map and passes it to Unity side 504. Video decoder 503 uses FFmpeg together with MediaCodec (the classes that provide hardware encoding and decoding on the Android platform) to read, decode, and continuously update the network camera's video data stream into the buffer, markedly improving codec speed. It generates the texture map from the camera data using GLES, i.e. OpenGL ES (OpenGL for Embedded Systems), a subset of the OpenGL 3D graphics API designed for embedded devices such as mobile phones, PDAs, and game consoles; GLES 2.0 is one version of GLES.
Unity side 504 integrates a GPU and runs Unity3D. Unity3D, developed by Unity Technologies, is a professional, fully integrated game engine: a multi-platform development tool with which creators can easily build interactive content such as 3D video games, architectural visualizations, and real-time 3D animation. Like Director, the Blender game engine, Virtools, or Torque Game Builder, Unity is software whose primary mode of use is an interactive, graphical development environment.
Unity side 504 applies the texture map, as a texture, to a plane in the virtual scene. The GPU of Unity side 504 then renders the virtual scene and, in the rendering stage, performs matting and beautification on the live object in the image data, yielding a rendered image that contains both the live object and the virtual scene.
Matting separates the live object from the background so that the background is not shown. For matting, a preset reference pixel serves as the matting key color, and upper and lower matting thresholds are set. Each image pixel is compared against the reference pixel to compute a difference measure. When the difference is below the lower threshold, the pixel is taken to be background color and its alpha channel is set to 0, making it transparent; when the difference is above the upper threshold, the pixel is taken to be foreground and its alpha channel is set to 1, keeping it; when the difference lies between the two thresholds, the pixel is taken to be an intermediate color at the image edge, and a linear function maps its alpha channel into the range 0 to 1, making it semi-transparent. Handling intermediate colors this way helps the matted-out person blend more naturally into the virtual scene.
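The per-pixel alpha rule described above, hard 0 below the lower threshold, hard 1 above the upper, and a linear ramp between, can be sketched in plain Python. The real version runs in a GPU shader, and the difference measure here is left abstract (a single scalar color distance):

```python
# Per-pixel alpha from the pixel-vs-key-color difference, following the
# rule described above: background below `lower`, foreground above
# `upper`, and a linear ramp (semi-transparent edge pixels) in between.
def matte_alpha(diff: float, lower: float, upper: float) -> float:
    if diff <= lower:
        return 0.0  # background color: fully transparent
    if diff >= upper:
        return 1.0  # foreground: fully opaque
    return (diff - lower) / (upper - lower)  # edge pixel: semi-transparent

print(matte_alpha(0.05, 0.1, 0.3))  # 0.0 -> background
print(matte_alpha(0.20, 0.1, 0.3))  # ~0.5 -> intermediate (edge) color
print(matte_alpha(0.90, 0.1, 0.3))  # 1.0 -> foreground
```

The soft ramp is the detail that avoids hard, aliased cut-out edges when the person is composited over the virtual scene.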
Beautification first performs skin-color detection on the object (mainly a human object): human skin tones approximate a Gaussian distribution in the YCbCr color space, which is used to distinguish skin regions from non-skin regions. After skin-color detection, the skin is smoothed using a bilateral filtering algorithm, which evens the skin while avoiding blurring the region boundaries. Blemish removal and highlight correction follow; highlight correction applies a gain correction to every pixel along a LOG curve, avoiding abrupt changes in the image. Both the matting and the beautification operations above are realized in the GPU's rendering (shader) stage.
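The highlight-correction gain along a LOG curve can be sketched as a normalized logarithmic tone map. This is one common form of such a curve; the patent does not give the exact formula, so the constant `k` and the normalization are assumptions:

```python
# One common form of a LOG-curve gain for highlight correction:
# out = log(1 + k*x) / log(1 + k) for a normalized pixel value x in [0, 1].
# It lifts midtones smoothly while keeping 0 -> 0 and 1 -> 1, so the
# correction introduces no abrupt changes at either end of the range.
import math

def log_gain(x: float, k: float = 10.0) -> float:
    return math.log1p(k * x) / math.log1p(k)

print(log_gain(0.0))             # 0.0 -> black stays black
print(log_gain(1.0))             # 1.0 -> white stays white
print(round(log_gain(0.25), 3))  # midtone lifted above 0.25
```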
Recording processor 505 uses a hardware encoding module to capture the screen picture of Unity side 504 and compression-encode it, obtaining video data; it also encodes the audio-device audio data into an audio format suited to network transmission. Finally, recording processor 505 packages the audio data and the video data together; the packaged data is playable audio-video data, which recording processor 505 transfers to RTSP server 506.
RTSP server 506 is a real-time streaming server for broadcasting the audio-video data; broadcasting is divided into two modes, intranet and extranet. In this embodiment, RTSP server 506 pushes the audio-video data to third-party server 508, which generates an extranet link for on-demand access by networked clients 507 everywhere. After receiving the audio-video data transmitted by the third-party server, a client decodes and displays the picture, whose content is the rendered virtual scene; after the audio data is decoded, the singer's voice and the accompaniment are played through the loudspeaker.
It should be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "include" and "comprise", and any variant thereof, are intended to be non-exclusive, so that a process, method, article, or terminal device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or terminal device. Absent further limitation, an element qualified by "includes a ..." does not exclude the existence of additional identical elements in the process, method, article, or terminal device that includes it. In addition, herein, "greater than", "less than", "exceeding", and the like are understood to exclude the stated number, while "above", "below", "within", and the like are understood to include it.
Those skilled in the art will understand that the embodiments above may be provided as a method, a device, or a computer program product, and may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. All or part of the steps of the methods in the embodiments above may be completed by a program instructing related hardware; the program may be stored in a storage medium readable by a computer device, for performing all or part of the steps of the methods of the embodiments above. The computer device includes, but is not limited to: personal computers, servers, general-purpose computers, special-purpose computers, network devices, embedded devices, programmable devices, smart mobile terminals, smart home devices, wearable smart devices, smart vehicle devices, and the like. The storage medium includes, but is not limited to: RAM, ROM, magnetic disks, magnetic tape, optical discs, flash memory, USB drives, removable hard disks, memory cards, memory sticks, web-server storage, network cloud storage, and the like.
The embodiments above are described with reference to flowcharts and/or block diagrams of the methods, devices (systems), and computer program products according to the embodiments. It should be understood that every flow and/or block in the flowcharts and/or block diagrams, and every combination of flows and/or blocks therein, can be realized by computer program instructions. These computer program instructions may be provided to the processor of a computer device to produce a machine, so that the instructions executed by the processor of the computer device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-device-readable memory capable of directing a computer device to work in a particular way, so that the instructions stored in that memory produce an article of manufacture including an instruction device, the instruction device realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer device, so that a series of operational steps performed on the device produce computer-implemented processing, whereby the instructions executed on the device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although the embodiments above have been described, those skilled in the art, once aware of the basic inventive concept, can make further changes and modifications to these embodiments. The foregoing is therefore only a set of embodiments of the invention and does not thereby limit the invention's scope of patent protection; every equivalent structure or equivalent process transformation made using the description and drawings of the invention, and every direct or indirect use in other related technical fields, is likewise included within the scope of patent protection of the invention.
Claims (12)
1. A method for beautification processing in virtual-scene live streaming, characterized by comprising the following steps:
acquiring camera data in real time to obtain first image data;
converting the first image data into a texture image, and updating the texture image into a virtual scene;
identifying a first object in the texture image by a GPU, making the part of the texture image other than the first object transparent, performing beautification processing on the first object, and then rendering the beautified first object together with the virtual scene to obtain a rendered image.
2. The method for beautification processing in virtual-scene live streaming according to claim 1, characterized in that, after the rendered image is obtained, the method further comprises the steps of:
performing a composite-image encoding operation on the rendered image to obtain first video data;
while the camera data is being acquired, obtaining audio-device audio data in real time, and packaging the audio-device audio data together with the first video data into audio-video data.
3. The method for beautification processing in virtual-scene live streaming according to claim 2, characterized in that, after the audio-video data is obtained, the method further comprises the step of: broadcasting the audio-video data, via a real-time streaming protocol, to online clients in a LAN;
or sending the audio-video data, via a real-time streaming protocol, to a third-party web server;
the third-party web server generating an Internet live link for the audio-video data.
4. The method for beautification processing in virtual-scene live streaming according to claim 3, characterized by further comprising receiving an interaction instruction sent by a client, and updating or switching the virtual scene according to the interaction instruction.
5. The method for beautification processing in virtual-scene live streaming according to claim 2, characterized in that performing the composite-image encoding operation on the rendered image comprises the step of:
capturing the rendered image with a hardware encoding module, and compression-encoding the captured rendered image.
6. The method for beautification processing in virtual-scene live streaming according to claim 1, characterized in that performing beautification processing on the first object comprises the step of: performing skin smoothing, blemish removal, and highlight correction on the first object with a shader.
7. The method for beautification processing in virtual-scene live streaming according to claim 1, characterized in that the virtual scene is a 3D virtual-reality scene or a 3D video scene.
8. A system for beautification processing in virtual-scene live streaming, characterized by comprising:
a parsing unit for acquiring camera data in real time to obtain first image data;
an updating unit for converting the first image data into a texture image and updating the texture image into a virtual scene; and
a rendering unit for identifying a first object in the texture image by a GPU, making the part of the texture image other than the first object transparent, performing beautification processing on the first object, and then rendering the beautified first object together with the virtual scene to obtain a rendered image.
9. The system for beautification processing in virtual-scene live streaming according to claim 8, characterized by further comprising:
a composite-image coding unit for performing a composite-image encoding operation on the rendered image to obtain first video data; and
an encapsulation unit for, while the camera data is being acquired, obtaining audio-device audio data in real time, and packaging the audio-device audio data together with the first video data into audio-video data.
10. The system for beautification processing in virtual-scene live streaming according to claim 9, characterized by further comprising a transmission unit for broadcasting the audio-video data, via a real-time streaming protocol, to online clients in a LAN;
or the transmission unit is further for sending the audio-video data, via a real-time streaming protocol, to a third-party web server;
the third-party web server generating an Internet live link for the audio-video data.
11. The system for beautification processing in virtual-scene live streaming according to claim 10, characterized by further comprising an interactive unit for receiving an interaction instruction sent by a client, and updating or switching the virtual scene according to the interaction instruction.
12. The system for beautification processing in virtual-scene live streaming according to claim 9, characterized in that the composite-image coding unit performing the composite-image encoding operation on the rendered image comprises: capturing the rendered image with a hardware encoding module, and compression-encoding the captured rendered image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611131804.0A CN106713988A (en) | 2016-12-09 | 2016-12-09 | Beautifying method and system for virtual scene live |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106713988A true CN106713988A (en) | 2017-05-24 |
Family
ID=58936553
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611131804.0A Pending CN106713988A (en) | 2016-12-09 | 2016-12-09 | Beautifying method and system for virtual scene live |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106713988A (en) |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107197341A (en) * | 2017-06-02 | 2017-09-22 | 福建星网视易信息系统有限公司 | It is a kind of that screen display methods, device and a kind of storage device are dazzled based on GPU |
CN107340856A (en) * | 2017-06-12 | 2017-11-10 | 美的集团股份有限公司 | Control method, controller, Intelligent mirror and computer-readable recording medium |
CN107770595A (en) * | 2017-09-19 | 2018-03-06 | 浙江科澜信息技术有限公司 | A kind of method of real scene embedded in virtual scene |
CN107770605A (en) * | 2017-09-25 | 2018-03-06 | 广东九联科技股份有限公司 | A kind of portrait image special efficacy realization method and system |
CN107864343A (en) * | 2017-10-09 | 2018-03-30 | 上海幻电信息科技有限公司 | The live image rendering method of computer and system based on video card |
CN109151539A (en) * | 2017-06-16 | 2019-01-04 | 武汉斗鱼网络科技有限公司 | A kind of net cast method and system based on unity3d |
CN109191414A (en) * | 2018-08-21 | 2019-01-11 | 北京旷视科技有限公司 | A kind of image processing method, device, electronic equipment and storage medium |
CN109379629A (en) * | 2018-11-27 | 2019-02-22 | Oppo广东移动通信有限公司 | Method for processing video frequency, device, electronic equipment and storage medium |
CN109646950A (en) * | 2018-11-20 | 2019-04-19 | 苏州紫焰网络科技有限公司 | One kind being applied to image processing method, device and terminal in scene of game |
- 2016
- 2016-12-09 CN CN201611131804.0A patent/CN106713988A/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103136793A (en) * | 2011-12-02 | 2013-06-05 | 中国科学院沈阳自动化研究所 | Live-action fusion method based on augmented reality and device using the same |
CN102821323A (en) * | 2012-08-01 | 2012-12-12 | 成都理想境界科技有限公司 | Video playing method, video playing system and mobile terminal based on augmented reality technique |
CN105654471A (en) * | 2015-12-24 | 2016-06-08 | 武汉鸿瑞达信息技术有限公司 | Augmented reality AR system applied to internet video live broadcast and method thereof |
CN106131591A (en) * | 2016-06-30 | 2016-11-16 | 广州华多网络科技有限公司 | Live broadcasting method, device and terminal |
CN106028138A (en) * | 2016-07-22 | 2016-10-12 | 乐视控股(北京)有限公司 | Live broadcast video processing method and device |
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107197341A (en) * | 2017-06-02 | 2017-09-22 | 福建星网视易信息系统有限公司 | GPU-based dazzle-screen display method, apparatus and storage device |
CN107340856A (en) * | 2017-06-12 | 2017-11-10 | 美的集团股份有限公司 | Control method, controller, intelligent mirror and computer-readable recording medium |
CN109151539A (en) * | 2017-06-16 | 2019-01-04 | 武汉斗鱼网络科技有限公司 | Video live broadcast method and system based on unity3d |
CN109151539B (en) * | 2017-06-16 | 2021-05-28 | 武汉斗鱼网络科技有限公司 | Video live broadcasting method, system and equipment based on unity3d |
CN107770595B (en) * | 2017-09-19 | 2019-11-22 | 浙江科澜信息技术有限公司 | Method for embedding a real scene in a virtual scene |
CN107770595A (en) * | 2017-09-19 | 2018-03-06 | 浙江科澜信息技术有限公司 | Method for embedding a real scene in a virtual scene |
CN107770605A (en) * | 2017-09-25 | 2018-03-06 | 广东九联科技股份有限公司 | Portrait image special-effect implementation method and system |
CN107864343A (en) * | 2017-10-09 | 2018-03-30 | 上海幻电信息科技有限公司 | Graphics-card-based computer live-streaming image rendering method and system |
CN109191414A (en) * | 2018-08-21 | 2019-01-11 | 北京旷视科技有限公司 | Image processing method, apparatus, electronic device and storage medium |
CN109646950A (en) * | 2018-11-20 | 2019-04-19 | 苏州紫焰网络科技有限公司 | Image processing method, apparatus and terminal applied in a game scene |
CN109379629A (en) * | 2018-11-27 | 2019-02-22 | Oppo广东移动通信有限公司 | Video processing method, apparatus, electronic device and storage medium |
CN110045817A (en) * | 2019-01-14 | 2019-07-23 | 启云科技股份有限公司 | Interactive camera system using virtual reality technology |
CN110047122A (en) * | 2019-04-04 | 2019-07-23 | 北京字节跳动网络技术有限公司 | Image rendering method and apparatus, electronic device and computer-readable storage medium |
CN110009624A (en) * | 2019-04-11 | 2019-07-12 | 成都四方伟业软件股份有限公司 | Video processing method, video processing apparatus and electronic device |
CN110215704A (en) * | 2019-04-26 | 2019-09-10 | 平安科技(深圳)有限公司 | Game starting method, apparatus, electronic device and storage medium |
CN110215704B (en) * | 2019-04-26 | 2023-03-21 | 平安科技(深圳)有限公司 | Game starting method and device, electronic device and storage medium |
WO2020258907A1 (en) * | 2019-06-28 | 2020-12-30 | 香港乐蜜有限公司 | Virtual article generation method, apparatus and device |
CN110442389A (en) * | 2019-08-07 | 2019-11-12 | 北京技德系统技术有限公司 | Method for sharing a GPU in a multi-desktop environment |
CN110442389B (en) * | 2019-08-07 | 2024-01-09 | 北京技德系统技术有限公司 | Method for sharing a GPU (graphics processing unit) in a multi-desktop environment |
CN110384924A (en) * | 2019-08-21 | 2019-10-29 | 网易(杭州)网络有限公司 | Display control method, apparatus, medium and device for virtual objects in a game scene |
CN111935494A (en) * | 2020-08-13 | 2020-11-13 | 上海识装信息科技有限公司 | 3D commodity live display method and system |
CN111970527B (en) * | 2020-08-18 | 2022-03-29 | 广州虎牙科技有限公司 | Live broadcast data processing method and device |
CN111970527A (en) * | 2020-08-18 | 2020-11-20 | 广州虎牙科技有限公司 | Live broadcast data processing method and device |
CN112153409B (en) * | 2020-09-29 | 2022-08-19 | 广州虎牙科技有限公司 | Live broadcast method and device, live broadcast receiving end and storage medium |
CN112153409A (en) * | 2020-09-29 | 2020-12-29 | 广州虎牙科技有限公司 | Live broadcast method and device, live broadcast receiving end and storage medium |
CN112929740A (en) * | 2021-01-20 | 2021-06-08 | 广州虎牙科技有限公司 | Video stream rendering method, apparatus, storage medium and device |
CN113177900A (en) * | 2021-05-26 | 2021-07-27 | 广州市百果园网络科技有限公司 | Image processing method, device, equipment and storage medium |
CN113177900B (en) * | 2021-05-26 | 2024-04-26 | 广州市百果园网络科技有限公司 | Image processing method, device, equipment and storage medium |
CN114302153A (en) * | 2021-11-25 | 2022-04-08 | 阿里巴巴达摩院(杭州)科技有限公司 | Video playing method and device |
CN114302153B (en) * | 2021-11-25 | 2023-12-08 | 阿里巴巴达摩院(杭州)科技有限公司 | Video playing method and device |
CN115147312A (en) * | 2022-08-10 | 2022-10-04 | 田海艳 | Simplified identification platform for facial skin-smoothing special effects |
WO2024051535A1 (en) * | 2022-09-06 | 2024-03-14 | 北京字跳网络技术有限公司 | Method and apparatus for processing live-streaming image frame, and device, readable storage medium and product |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106713988A (en) | Beautifying method and system for virtual scene live | |
CN106789991A (en) | Multi-person interaction method and system based on a virtual scene | |
CN106792246A (en) | Interaction method and system for a fusion-type virtual scene | |
CN110290425A (en) | Video processing method, apparatus and storage medium | |
US11475666B2 (en) | Method of obtaining mask frame data, computing device, and readable storage medium | |
WO2018045927A1 (en) | Three-dimensional virtual technology based internet real-time interactive live broadcasting method and device | |
US11871086B2 (en) | Method of displaying comment information, computing device, and readable storage medium | |
CN106792214A (en) | Live broadcast interaction method and system based on a digital audio-video venue | |
CN106210855A (en) | Object displaying method and device | |
CN110035321B (en) | Decoration method and system for online real-time video | |
CN103997687B (en) | Method and device for adding interactive features to video | |
KR101536501B1 (en) | Moving image distribution server, moving image reproduction apparatus, control method, recording medium, and moving image distribution system | |
CN104461006A (en) | Internet intelligent mirror based on natural user interface | |
CN105959814B (en) | Scene-recognition-based video bullet-screen display method and display device | |
CN106792228A (en) | Live broadcast interaction method and system | |
CN106730815A (en) | Easily implemented motion-sensing interaction method and system | |
CA2803956A1 (en) | Moving image distribution server, moving image reproduction apparatus, control method, program, and recording medium | |
CN109361949A (en) | Video processing method, apparatus, electronic device and storage medium | |
CN105556574A (en) | Rendering apparatus, rendering method thereof, program and recording medium | |
CN112102422B (en) | Image processing method and device | |
CN107358659A (en) | Multi-picture fusion display method and storage device based on 3D technology | |
CN103685976A (en) | Method and device for improving the display quality of an LED display screen when recording a program | |
WO2020258907A1 (en) | Virtual article generation method, apparatus and device | |
CN108399653A (en) | Augmented reality method, terminal device and computer-readable storage medium | |
WO2019239396A1 (en) | Method and system for automatic real-time frame segmentation of high resolution video streams into constituent features and modifications of features in each frame to simultaneously create multiple different linear views from same video source |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20170524 ||