CN110213640A - Method, device and equipment for generating virtual objects - Google Patents
Method, device and equipment for generating virtual objects
- Publication number
- CN110213640A (application CN201910578523.7A)
- Authority
- CN
- China
- Prior art keywords
- image data
- virtual objects
- adjusted
- mask
- transparent
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/233—Processing of audio elementary streams
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/235—Processing of additional data, e.g. scrambling of additional data or processing content descriptors
- H04N21/2355—Processing of additional data involving reformatting operations of additional data, e.g. HTML pages
- H04N21/4307—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
- H04N21/4312—Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- H04N21/8547—Content authoring involving timestamps for synchronizing content
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Security & Cryptography (AREA)
- Processing Or Creating Images (AREA)
Abstract
Embodiments of the present invention provide a method, device and equipment for generating virtual objects. The method is applied to a server: when a generation instruction for a virtual object is detected, first image data, audio data and a play timestamp of a first video corresponding to the generation instruction are obtained, where the first image data provides the picture of the first video, the audio data provides the sound of the first video, and the play timestamp is an identifier used to guarantee that the content of the first image data and the content of the audio data are synchronized. The first image data is adjusted according to appearance information of a preset virtual object, and the adjusted first image data is transcoded to obtain the virtual object. The virtual object is then displayed and the audio data is played according to the play timestamp. This solution can guarantee a diversified display effect of the generated virtual object while achieving audio-picture synchronization.
Description
Technical field
The present invention relates to the technical field of virtual objects, and in particular to a method, device and equipment for generating virtual objects.
Background art
Internet-related clients usually provide various pre-configured virtual objects so that users can carry out virtual activities with them. For example, a user may use virtual objects for virtual transactions or to decorate a personal online community, such as buying a virtual gift in a live-streaming client and presenting it to an anchor, or using a virtual pendant in social software to decorate an avatar or a personal homepage. Such virtual objects are usually static images, for example an image of flowers or of a fireworks display. Because a static image has only one fixed picture, the display effect of the generated virtual object tends to be monotonous.
To address this, audio can be added to a virtual object to diversify its display effect. In the related art, however, the audio is simply played as music while the virtual object is displayed, which easily leads to the virtual object and the audio being out of step, that is, the sound and the picture are not synchronized. How to guarantee audio-picture synchronization while keeping the diversified display effect of the generated virtual object is therefore a problem to be solved.
Summary of the invention
The purpose of embodiments of the present invention is to provide a method, device and equipment for generating virtual objects, so as to improve the convenience of configuring and inspecting virtual objects. The specific technical solutions are as follows:
In a first aspect, an embodiment of the present invention provides a method for generating virtual objects, applied to a server. The method includes:
when a generation instruction for a virtual object is detected, obtaining first image data, audio data and a play timestamp of a first video corresponding to the generation instruction, where the first image data provides the picture of the first video, the audio data provides the sound of the first video, and the play timestamp is an identifier used to guarantee that the content of the first image data and the content of the audio data are synchronized;
adjusting the first image data according to appearance information of a preset virtual object, and transcoding the adjusted first image data to obtain the virtual object;
displaying the virtual object and playing the audio data according to the play timestamp.
Optionally, before the step of adjusting the first image data according to the appearance information of the preset virtual object and transcoding the adjusted first image data to obtain the virtual object, the method further includes:
obtaining second image data of a second video whose picture color is a transparent color, where the second image data provides the picture of the second video.
The step of adjusting the first image data according to the appearance information of the preset virtual object and transcoding the adjusted first image data to generate the virtual object includes:
adjusting the first image data and the second image data according to the appearance information of the preset virtual object, to obtain adjusted first image data and adjusted second image data;
obtaining, in a display area used for displaying the virtual object corresponding to the generation instruction, the transparent positions that belong to a transparent region;
masking the transparent positions in the adjusted first image data into the transparent color by using the adjusted second image data, to obtain masked image data;
obtaining the virtual object based on the masked image data.
Optionally, the step of masking the transparent positions in the adjusted first image data into the transparent color by using the adjusted second image data to obtain the masked image data includes:
transcoding the adjusted second image data to obtain a mask video, where the display position of the mask video is the transparent positions;
transcoding the adjusted first image data to obtain an object video, where the display position of the object video is the positions in the display area other than the transparent positions;
setting the mask video and the object video to be displayed jointly, to obtain the masked image data.
Optionally, the step of masking the transparent positions in the adjusted first image data into the transparent color by using the adjusted second image data to obtain the masked image data includes:
using the transparent positions of the adjusted first image data as a transparent channel;
filling the pixels at the transparent positions in the adjusted second image data into the transparent channel, to obtain transparent channel data;
using the pixels in the adjusted first image data other than the pixels at the transparent positions as non-transparent channel data;
rendering the transparent channel data and the non-transparent channel data in the adjusted first image data, to obtain the masked image data.
Optionally, before the step of transcoding the masked image data to obtain the virtual object, the method further includes:
obtaining third image data and a positional relationship between the third image data and the masked image data, where the third image data is used as a specific element of the virtual object;
adding the third image data into the masked image data according to the positional relationship, to obtain special-effect image data.
The step of transcoding the masked image data to obtain the virtual object includes:
transcoding the special-effect image data to obtain the virtual object.
Optionally, the positional relationship includes: taking each pixel of the masked image data as an element of a matrix, and taking the position of each pixel of the masked image data within the masked image data as the position of that element in the matrix;
the positional relationship is a correspondence between elements of a second pixel matrix and elements of a first pixel matrix, where the second pixel matrix is the pixel matrix corresponding to the third image data, and the first pixel matrix is the pixel matrix corresponding to the masked image data.
The step of adding the third image data into the masked image data according to the positional relationship to obtain the special-effect image data includes:
converting the masked image data and the third image data into a first matrix and a second matrix respectively;
adding the elements of the second matrix into the first matrix according to the positional relationship, to obtain a third matrix;
converting the third matrix into image data, to obtain the special-effect image data.
In a second aspect, an embodiment of the present invention provides an apparatus for generating virtual objects, applied to a server. The apparatus includes:
a data obtaining module, configured to, when a generation instruction for a virtual object is detected, obtain first image data, audio data and a play timestamp of a first video corresponding to the generation instruction, where the first image data provides the picture of the first video, the audio data provides the sound of the first video, and the play timestamp is an identifier used to guarantee that the content of the first image data and the content of the audio data are synchronized;
a virtual object generation module, configured to adjust the first image data according to appearance information of a preset virtual object, and transcode the adjusted first image data to obtain the virtual object;
a virtual object display module, configured to display the virtual object and play the audio data according to the play timestamp.
Optionally, the data obtaining module is specifically configured to: before the first image data is adjusted according to the appearance information of the preset virtual object and the adjusted first image data is transcoded to obtain the virtual object, obtain second image data of a second video whose picture color is a transparent color, where the second image data provides the picture of the second video.
The virtual object generation module includes a mask submodule and a virtual object obtaining submodule.
The mask submodule is configured to adjust the first image data and the second image data according to the appearance information of the preset virtual object, to obtain adjusted first image data and adjusted second image data; obtain, in a display area used for displaying the virtual object corresponding to the generation instruction, the transparent positions that belong to a transparent region; and mask the transparent positions in the adjusted first image data into the transparent color by using the adjusted second image data, to obtain masked image data.
The virtual object obtaining submodule is configured to obtain the virtual object based on the masked image data.
Optionally, the mask submodule is specifically configured to:
transcode the adjusted second image data to obtain a mask video, where the display position of the mask video is the transparent positions;
transcode the adjusted first image data to obtain an object video, where the display position of the object video is the positions in the display area other than the transparent positions;
display the mask video and the object video jointly, to obtain the masked image data.
Optionally, the mask submodule is specifically configured to:
use the transparent positions of the adjusted first image data as a transparent channel;
fill the pixels at the transparent positions in the adjusted second image data into the transparent channel, to obtain transparent channel data;
use the pixels in the adjusted first image data other than the pixels at the transparent positions as non-transparent channel data;
render the transparent channel data and the non-transparent channel data in the adjusted first image data, to obtain the masked image data.
Optionally, the data obtaining module is specifically configured to:
obtain third image data and a positional relationship between the third image data and the masked image data, where the third image data is used as a specific element of the virtual object;
add the third image data into the masked image data according to the positional relationship, to obtain special-effect image data.
The virtual object obtaining submodule is specifically configured to:
transcode the special-effect image data to obtain the virtual object.
Optionally, the positional relationship includes: taking each pixel of the masked image data as an element of a matrix, and taking the position of each pixel of the masked image data within the masked image data as the position of that element in the matrix;
the positional relationship is a correspondence between elements of a second pixel matrix and elements of a first pixel matrix, where the second pixel matrix is the pixel matrix corresponding to the third image data, and the first pixel matrix is the pixel matrix corresponding to the masked image data.
The data obtaining module is specifically configured to:
convert the masked image data and the third image data into a first matrix and a second matrix respectively;
add the elements of the second matrix into the first matrix according to the positional relationship, to obtain a third matrix;
convert the third matrix into image data, to obtain the special-effect image data.
In a third aspect, an embodiment of the present invention provides an electronic device. The electronic device includes a processor, a communication interface, a memory and a communication bus, where the processor, the communication interface and the memory communicate with each other through the bus; the memory is configured to store a computer program; and the processor is configured to execute the program stored in the memory, so as to implement the steps of the method for generating virtual objects provided in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium. The storage medium stores a computer program, and when the computer program is executed by a processor, the steps of the method for generating virtual objects provided in the first aspect are implemented.
In the solution provided by the embodiment of the present invention, when a generation instruction for a virtual object is detected, the server obtains the first image data, the audio data and the play timestamp of the first video corresponding to the generation instruction, adjusts the first image data according to the appearance information of the preset virtual object, and transcodes the adjusted first image data to obtain the virtual object; the virtual object is then displayed and the audio data is played according to the play timestamp. Since the server can use the first image data of the first video to generate a virtual object that meets the appearance information of the preset virtual object, and uses the audio data of the first video as the audio of the virtual object, the picture effect of the virtual object is the same as the content of the first image data and the audio is the same as the content of the audio data. On this basis, the play timestamp of the first video can guarantee that the content of the first image data and the content of the audio data are synchronized. Therefore, displaying the virtual object and playing the audio data according to the play timestamp can guarantee that the picture effect of the virtual object is synchronized with the audio. It can be seen that this solution can guarantee a diversified display effect of the generated virtual object while achieving audio-picture synchronization.
Brief description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below.
Fig. 1 is a schematic flowchart of a method for generating virtual objects provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a method for generating virtual objects provided by another embodiment of the present invention;
Fig. 3 is a schematic structural diagram of an apparatus for generating virtual objects provided by an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an apparatus for generating virtual objects provided by another embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present invention.
Detailed description of the embodiments
In order to make those skilled in the art better understand the technical solutions of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings of the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, rather than all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The method for generating virtual objects of an embodiment of the present invention is introduced first below.
The method for generating virtual objects provided by the embodiment of the present invention can be applied to a server corresponding to an Internet-related client. The server may include a desktop computer, a portable computer, an Internet television, an intelligent mobile terminal, a wearable intelligent terminal, and the like, without being limited thereto; any server that can implement the embodiment of the present invention falls within the protection scope of the embodiment of the present invention.
In a specific application, there can be various Internet-related clients. For example, the Internet-related client may be a live-streaming client or a social client. Correspondingly, there can be various virtual objects; for example, a virtual object may be a gift, a pendant or another virtual item.
As shown in Fig. 1, the flow of the method for generating virtual objects of an embodiment of the present invention may include the following steps.
S101: when a generation instruction for a virtual object is detected, obtain first image data, audio data and a play timestamp of a first video corresponding to the generation instruction, where the first image data provides the picture of the first video, the audio data provides the sound of the first video, and the play timestamp is an identifier used to guarantee that the content of the first image data and the content of the audio data are synchronized.
In order to generate diversified virtual objects, generation instructions for different virtual objects may be issued. Correspondingly, the first image data, audio data and play timestamp of the first video corresponding to the generation instruction need to be obtained, so that the virtual object corresponding to the generation instruction can be generated through subsequent steps S102 and S103. The first videos used to generate different virtual objects may also be different.
The first video may include the first image data and the audio data, where the first image data provides the picture of the first video and the audio data provides the sound of the first video. The first image data may therefore be, for example, the image frame queue of the first video, and the audio data may be the audio packet queue of the first video. Illustratively, the first image data and the audio data may be obtained by reading the first video with a preset model to obtain the image frame queue and the audio packet queue as the first image data and the audio data respectively. For example, the first video may be read with FFMPEG (Fast Forward MPEG, a tool providing functions such as video capture, video format conversion and video recording) to obtain the first image data and the audio data.
In addition, to avoid the sound and the picture of the first video being out of step, the first video carries a play timestamp, which can serve as a playing-order label for the first image data and the audio data and is used to guarantee that, when the first video is played, the displayed first image data and the played audio data are synchronized in content. The play timestamp of the first video can therefore be obtained, so that a virtual object with audio-picture synchronization can be obtained subsequently.
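As a rough illustration of this demultiplexing step (not the implementation of this embodiment), the following minimal sketch assumes PyAV, the FFmpeg bindings for Python, and an illustrative file name; it splits a source video into an image frame queue, an audio packet queue and their play timestamps.

```python
# A minimal sketch, assuming PyAV (FFmpeg bindings); the file name is illustrative.
import av

def read_first_video(path="first_video.mp4"):
    """Split a source video into an image frame queue, an audio packet queue
    and their presentation timestamps (in seconds)."""
    container = av.open(path)
    video_stream = container.streams.video[0]
    audio_stream = container.streams.audio[0]

    image_frames = []   # first image data: (timestamp, decoded RGB frame)
    audio_packets = []  # audio data: (timestamp, encoded packet bytes)

    for packet in container.demux(video_stream, audio_stream):
        if packet.pts is None:
            continue
        ts = float(packet.pts * packet.time_base)  # play timestamp
        if packet.stream is video_stream:
            for frame in packet.decode():
                image_frames.append((ts, frame.to_ndarray(format="rgb24")))
        else:
            audio_packets.append((ts, bytes(packet)))

    container.close()
    return image_frames, audio_packets
```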
S102: adjust the first image data according to the appearance information of the preset virtual object, and transcode the adjusted first image data to obtain the virtual object.
The appearance information of the preset virtual object can be various. Illustratively, it may include at least one of the size, color, brightness and shape of the preset virtual object. Correspondingly, adjusting the first image data according to the appearance information of the preset virtual object may specifically be: adjusting the shape of the first image data to be the same as the shape corresponding to the appearance information of the preset virtual object.
Moreover, in order to guarantee in subsequent step S103 that the picture effect of the virtual object is synchronized with the audio, the adjusted first image data can be transcoded to obtain a virtual object with the picture effect of a video. In a specific application, the adjusted first image data can be transcoded in various ways. Illustratively, inter-frame compression may be used to transcode the adjusted first image data, or intra-frame compression may be used. Any way of transcoding the adjusted first image data can be used in the present invention, and this embodiment does not limit it.
There may be highly correlated redundant data between two consecutive image frames of a video. Inter-frame compression therefore compresses by comparing the data of different image frames along the time axis, which improves the compression ratio and reduces the occupation of data processing resources. Intra-frame compression only considers the data of the current image frame when compressing it, without considering the redundancy between the current frame and adjacent frames; it is similar to the compression of still images, its compression ratio is comparatively lower, and it occupies comparatively more data processing resources. On this basis, for scenarios where the occupation of data processing resources needs to be reduced, such as generation during live streaming, using inter-frame compression to obtain the virtual object helps reduce the consumption of data processing resources.
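The following minimal sketch illustrates the adjust-and-transcode step under stated assumptions: OpenCV is used for the appearance adjustment (reduced here to a resize), PyAV for encoding, H.264 stands in for "an inter-frame compressed format", and the target size and output name are illustrative.

```python
# A minimal sketch, assuming OpenCV and PyAV; size/output name are illustrative.
import av
import cv2

def transcode_adjusted_frames(image_frames, size=(400, 400),
                              out_path="virtual_object.mp4", fps=25):
    """Resize each frame to the preset appearance (here: a target size) and
    encode the result with an inter-frame codec."""
    container = av.open(out_path, mode="w")
    stream = container.add_stream("h264", rate=fps)
    stream.width, stream.height = size

    for _ts, rgb in image_frames:
        adjusted = cv2.resize(rgb, size)                  # appearance adjustment
        frame = av.VideoFrame.from_ndarray(adjusted, format="rgb24")
        for packet in stream.encode(frame):               # inter-frame compression
            container.mux(packet)

    for packet in stream.encode():                        # flush the encoder
        container.mux(packet)
    container.close()
```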
S103: display the virtual object and play the audio data according to the play timestamp.
Since the play timestamp can guarantee that the content of the first image data and the content of the audio data are synchronized, displaying the virtual object and playing the audio data according to the play timestamp can guarantee that the picture effect of the virtual object is synchronized with the audio.
Illustratively, displaying the virtual object and playing the audio data according to the play timestamp may specifically include: sending the data representing the virtual object, that is, the data obtained by transcoding the adjusted first image data, to an image display device, so that the image display device selects the data corresponding to the play timestamp from the data representing the virtual object and displays it; and synchronously sending the audio data to an audio output device, so that the audio output device plays the data corresponding to the play timestamp in the audio data. For example, in the data representing the virtual object, the data corresponding to play timestamp T1 is the video frames VF0 of the 0th to the 10th second, and in the audio data the data corresponding to play timestamp T1 is the audio packet AP0 matching the video frames VF0. While the image display device starts to display the video frames VF0, the audio playing device can start to play the audio packet AP0.
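The sketch below is only meant to make this timestamp-driven selection concrete; the queue layout is the one assumed in the earlier sketches and is not prescribed by this embodiment.

```python
import bisect

def select_for_timestamp(play_ts, image_frames, audio_packets):
    """Pick the video frame and audio packet whose timestamps match play_ts,
    so the display device and the audio device start from synchronized content.
    Both queues are assumed to be sorted by timestamp, as produced when demuxing."""
    def latest_at(queue, ts):
        keys = [t for t, _ in queue]
        i = bisect.bisect_right(keys, ts) - 1
        return queue[max(i, 0)][1]

    video_frame = latest_at(image_frames, play_ts)    # e.g. VF0 for timestamp T1
    audio_packet = latest_at(audio_packets, play_ts)  # e.g. AP0 matching VF0
    return video_frame, audio_packet
```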
In the solution provided by the embodiment of the present invention, when a generation instruction for a virtual object is detected, the server obtains the first image data, the audio data and the play timestamp of the first video corresponding to the generation instruction, adjusts the first image data according to the appearance information of the preset virtual object, and transcodes the adjusted first image data to obtain the virtual object; the virtual object is then displayed and the audio data is played according to the play timestamp. Since the server can use the first image data of the first video to generate a virtual object that meets the appearance information of the preset virtual object, and uses the audio data of the first video as the audio of the virtual object, the picture effect of the virtual object is the same as the content of the first image data and the audio is the same as the content of the audio data. On this basis, the play timestamp of the first video can guarantee that the content of the first image data and the content of the audio data are synchronized. Therefore, displaying the virtual object and playing the audio data according to the play timestamp can guarantee that the picture effect of the virtual object is synchronized with the audio. It can be seen that this solution can guarantee a diversified display effect of the generated virtual object while achieving audio-picture synchronization.
As shown in Fig. 2, the flow of the method for generating virtual objects of another embodiment of the present invention may include the following steps.
S201: when a generation instruction for a virtual object is detected, obtain first image data, audio data and a play timestamp of a first video corresponding to the generation instruction.
S201 is the same step as S101 of the embodiment of Fig. 1 of the present invention; details are not repeated here, see the description of the embodiment of Fig. 1 of the present invention.
S202: obtain second image data of a second video whose picture color is a transparent color, where the second image data provides the picture of the second video.
In a specific application, a transparent part can be set in the picture of the virtual object, so that the display effect of the virtual object is more realistic and three-dimensional. For example, for a virtual gift of a car, the picture color of everything other than the car itself can be set to the transparent color, so that the picture content of the displayed virtual object is the car itself, without large areas of black, white or other content unrelated to the car. For this purpose, data related to the transparent color can be used to mask the region of the virtual object that needs a transparent effect. The data related to the transparent color may be the second image data of a second video whose picture color is the transparent color, where the second image data provides the picture of the second video; alternatively, it may be an image whose picture color is the transparent color.
Both the second image data and the first image data provide the picture of a video; the difference is that the picture color of the second image data is the transparent color and that it provides the picture of the second video. Therefore, when the second image data of the second video is used as the data related to the transparent color, processing logic similar to that used for obtaining the first image data can be reused, and only the processing object needs to be changed to the second video and the second image data, without adding separate processing logic. For example, the preset model used to obtain the first image data can be used to read the second video and obtain the second image data.
S203: adjust the first image data and the second image data according to the appearance information of the preset virtual object, to obtain adjusted first image data and adjusted second image data.
This step is similar to the process of adjusting the first image data in step S102 of the embodiment of Fig. 1 of the present invention; the difference is that the objects adjusted in S203 are the first image data and the second image data. The adjustment of the first image data and the second image data may be performed in parallel, adjusting the two kinds of image data at the same time to improve efficiency, or serially, adjusting the two kinds of image data one after the other. The same content is not repeated here; see the description of the embodiment of Fig. 1 of the present invention.
S204: obtain, in the display area used for displaying the virtual object corresponding to the generation instruction, the transparent positions that belong to the transparent region.
In a specific application, in order to display the virtual object completely in the display area used for displaying the virtual object corresponding to the generation instruction, the picture size of the virtual object usually matches the display area. Therefore, for a given virtual object, the positions of its transparent parts that need to be set to the transparent color are equivalent to the transparent positions belonging to the transparent region of the display area. The transparent positions of the display area used for displaying the virtual object corresponding to the generation instruction can thus be obtained, so that masking can be performed in subsequent step S205 to obtain masked image data with a transparent-color region. Illustratively, the transparent positions may be the two-dimensional coordinates corresponding to the transparent region in the two-dimensional coordinate system of the display area.
The transparent positions can be obtained in various ways. Illustratively, the shape and size of the display area are fixed, and the display location of the virtual object in the display area may also be preset; in this case, the display location of the virtual object in the display area can be regarded as the positions that show the picture content of the virtual object itself rather than its transparent parts. Therefore, according to the display location of the virtual object and the shape and size of the display area, the positions in the display area other than the display location of the virtual object can be stored in advance as the transparent positions, and the prestored transparent positions can then be read directly. Alternatively, the shape and size of the display area and the display location of the virtual object can be read in real time, and the positions in the display area other than the display location of the virtual object can be determined from the read data as the transparent positions.
Any way of obtaining the transparent positions can be used in the present invention, and this embodiment does not limit it.
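A minimal sketch of this computation is given below, assuming the display area size and the virtual object's display rectangle are already known (whether prestored or read in real time); the coordinate convention is illustrative.

```python
import numpy as np

def transparent_positions(area_size, object_rect):
    """Return the 2-D coordinates of the display area that lie outside the
    virtual object's display location, i.e. the transparent region.
    area_size is (height, width); object_rect is (top, left, height, width)."""
    h, w = area_size
    top, left, oh, ow = object_rect
    inside = np.zeros((h, w), dtype=bool)
    inside[top:top + oh, left:left + ow] = True   # object's own display location
    ys, xs = np.nonzero(~inside)                  # everything else is transparent
    return np.stack([ys, xs], axis=1)             # (row, col) coordinates
```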
S205: mask the transparent positions in the adjusted first image data into the transparent color by using the adjusted second image data, to obtain the masked image data.
The adjusted second image data and the adjusted first image data correspond pixel by pixel and have the same size and shape, and the adjusted second image data is the transparent color. Therefore, the adjusted second image data can be used to mask the transparent positions in the adjusted first image data into the transparent color, to obtain the masked image data. In a specific application, there are various ways of doing so; they are described below in the form of optional embodiments.
In an optional embodiment, the above step S205, masking the transparent positions in the adjusted first image data into the transparent color by using the adjusted second image data to obtain the masked image data, may specifically include the following steps:
transcoding the adjusted second image data to obtain a mask video, where the display position of the mask video is the transparent positions;
transcoding the adjusted first image data to obtain an object video, where the display position of the object video is the positions in the display area other than the transparent positions;
displaying the mask video and the object video jointly, to obtain the masked image data.
The masked image data includes the mask video and the object video. The mask video is obtained by transcoding the adjusted second image data, and its display effect is the transparent color; the object video is obtained by transcoding the adjusted first image data, and its display effect is the virtual object itself. Therefore, the mask video can be displayed at the transparent positions of the display area used for displaying the virtual object, and the object video can be displayed at the positions in the display area other than the transparent positions. The mask video and the object video are displayed jointly, so that the positions other than the transparent positions show the picture content of the virtual object itself through the object video while the transparent positions show the transparent color through the mask video, thereby obtaining the masked image data.
In a specific application, the mask video and the object video can be copied into the cache of the display module, so that the display module displays the two videos jointly in the display area used for displaying the virtual object. For example, in the Android operating system, the mask video and the object video can be copied into the cache of the display module, so that the display module displays the two videos jointly in a NativeWindow.
In this optional embodiment, the masked image data is obtained through the transcoding and joint display of the adjusted second image and the adjusted first image, without going through complicated image channel filling and rendering. It therefore has the advantage of a comparatively simple implementation and can improve the generation efficiency of the virtual object.
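As a rough per-frame illustration of what the joint display amounts to on screen (the actual implementation, for example two layers in an Android NativeWindow, is platform specific and not shown here), the sketch below composes one output frame from a mask-video frame and an object-video frame, assuming NumPy arrays and a boolean map of the transparent positions.

```python
import numpy as np

def joint_display_frame(object_frame, mask_frame, transparent_mask):
    """Compose what the jointly displayed mask video and object video show:
    mask-video pixels at the transparent positions, object-video pixels elsewhere.
    transparent_mask is a boolean (H, W) array marking the transparent region."""
    return np.where(transparent_mask[..., None], mask_frame, object_frame)
```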
In another optional embodiment, the above step S205, masking the transparent positions in the adjusted first image data into the transparent color by using the adjusted second image data to obtain the masked image data, may specifically include the following steps:
using the transparent positions of the adjusted first image data as a transparent channel;
filling the pixels at the transparent positions in the adjusted second image data into the transparent channel, to obtain transparent channel data;
using the pixels in the adjusted first image data other than the pixels at the transparent positions as non-transparent channel data;
rendering the transparent channel data and the non-transparent channel data in the adjusted first image data, to obtain the masked image data.
The position of each pixel within the adjusted first image data corresponds to the display location, in the display area, of the content that the pixel represents, and the transparent positions of the display area can be regarded as the transparent positions of the adjusted first image data. Therefore, the transparent positions of the adjusted first image data can be used as a transparent channel, so that pixels with the transparent color can be filled into the transparent channel in the following steps to achieve the transparent effect. Moreover, the pixels of the adjusted second image data and of the adjusted first image data correspond pixel by pixel, so the pixels at the transparent positions in the adjusted second image data can be filled into the transparent channel to obtain the transparent channel data, thereby obtaining adjusted first image data that contains both the transparent channel and the non-transparent channel. On this basis, in order to obtain masked image data that has the transparent color at the transparent positions and can still display the picture content of the virtual object, the transparent channel data and the non-transparent channel data can be rendered according to the distribution positions of the pixels in the adjusted first image data.
Illustratively, rendering the transparent channel data and the non-transparent channel data in the adjusted first image data to obtain the masked image data may specifically include: generating Texture (texture) data from the transparent channel data and the non-transparent channel data in the adjusted first image data, and rendering the Texture data with OpenGL (Open Graphics Library) to obtain the masked image data. In addition, the execution body of the rendering may be a GPU (Graphics Processing Unit), to improve the efficiency of obtaining the masked image.
In this optional embodiment, the masked image data is obtained by filling the pixels of the adjusted second image into the transparent channel of the adjusted first image and then rendering the transparent channel data and the non-transparent channel data of the adjusted first image. Since the transparent channel data and the non-transparent channel data, which have different display effects, can be rendered in a targeted manner, this is equivalent to performing a secondary rendering compared with obtaining the masked image data without rendering, and can improve the display quality of the virtual object subsequently obtained based on the masked image data.
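The following minimal sketch, under the assumption that the frames are NumPy arrays and that the adjusted second image carries an alpha plane, shows how the transparent channel and the non-transparent channel described above could be assembled into a single RGBA buffer; uploading that buffer as texture data and rendering it with OpenGL on the GPU is left out.

```python
import numpy as np

def build_masked_rgba(object_frame, mask_frame_alpha, transparent_mask):
    """Use the transparent positions of the adjusted first image as a transparent
    channel, fill it with the corresponding pixels of the adjusted second image
    (assumed here to provide an alpha plane), and keep the remaining pixels as
    non-transparent channel data. The resulting RGBA buffer is what would be
    handed to the renderer, e.g. as OpenGL texture data."""
    h, w, _ = object_frame.shape
    rgba = np.empty((h, w, 4), dtype=np.uint8)
    rgba[..., :3] = object_frame                                  # non-transparent channel data
    rgba[..., 3] = 255                                            # fully opaque by default
    rgba[..., 3][transparent_mask] = mask_frame_alpha[transparent_mask]  # transparent channel data
    return rgba
```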
S206: obtain the virtual object based on the masked image data.
In a specific application, there can be various ways of obtaining the virtual object based on the masked image data. Illustratively, when the masked image data is obtained by filling the transparent channel with the adjusted second image, the masked image data can be transcoded to obtain the virtual object. In this case, this step is similar to the process of transcoding the adjusted first image data to obtain the virtual object in step S102 of the embodiment of Fig. 1 of the present invention; the difference is that the object transcoded in S206 is the masked image data. The same content is not repeated here; see the description of the embodiment of Fig. 1 of the present invention. Alternatively, when the masked image data is obtained through the joint display of the mask video and the object video as in the corresponding optional embodiment of step S205 above, the masked image data can be used directly as the virtual object.
Any way of obtaining the virtual object based on the masked image data can be used in the present invention, and this embodiment does not limit it.
S207: display the virtual object and play the audio data according to the play timestamp.
S207 is the same step as S103 of the embodiment of Fig. 1 of the present invention; details are not repeated here, see the description of the embodiment of Fig. 1 of the present invention.
In the above embodiment of Fig. 2, by obtaining the second image data of the second video, the transparent positions of the adjusted first image data are masked without adding separate processing logic for the second image, masked image data is obtained, and the virtual object is then obtained based on the masked image, thereby realizing the transparent effect at the transparent positions of the virtual object. When the virtual object is displayed, its three-dimensional feel and sense of reality can be improved.
Optionally, before the above step S206, obtaining the virtual object based on the masked image data, the method for generating virtual objects provided by the embodiment of the present invention may further include the following steps:
obtaining third image data and the positional relationship between the third image data and the masked image data, where the third image data is used as a specific element of the virtual object;
adding the third image data into the masked image data according to the positional relationship, to obtain special-effect image data.
Correspondingly, the above step S206, obtaining the virtual object based on the masked image data, may specifically include: transcoding the special-effect image data to obtain the virtual object.
The third image data may specifically be a vector image. The specific element of the virtual object can be various; illustratively, it may be a user avatar, text entered by the user, a special effect, and so on. The positional relationship between the third image data and the masked image data can be obtained in various ways. Illustratively, the positional relationship may be fixed, in which case the prestored positional relationship can be read directly. For example, the positional relationship may be the upper-left corner or the upper-right corner of the masked image data, and this positional relationship is stored in advance so that it can be read when obtaining the special-effect image data. Alternatively, the positional relationship may be obtained by looking up, in a preset correspondence between types of third image data and adding positions, the adding position corresponding to the type of the obtained third image, and using it as the positional relationship between the third image data and the masked image data. For example, when the type of the third image data is user information, such as a user avatar or text entered by the user, the corresponding adding position may be the transparent positions or the boundary positions of the virtual gift, which do not block the virtual gift itself; when the type of the third image data is a special effect, such as a snowing effect, the corresponding adding position may be the transparent positions or a designated position of the virtual gift.
Any way of obtaining the positional relationship between the third image data and the masked image data can be used in the present invention, and this embodiment does not limit it.
On this basis, since the positional relationship can indicate the adding position of the third image data in the masked image data, the third image data can be added into the masked image data according to the positional relationship to obtain the special-effect image data. Specifically, the third image data can be added at the adding position of the masked image data indicated by the positional relationship, to obtain the special-effect image data. For example, if the third image data is a user avatar and the positional relationship is the upper-left corner of the masked image data, the user avatar can be added at the upper-left corner of the masked image data, to obtain special-effect image data that has the transparent effect and contains the user avatar.
Correspondingly, the special-effect image data needs to be transcoded to obtain the virtual object. This step is similar to the process of transcoding the adjusted first image data to obtain the virtual object in S102 of the embodiment of Fig. 1 of the present invention; the difference is that the object transcoded in this step is the special-effect image data. The same content is not repeated here; see the description of the embodiment of Fig. 1 of the present invention.
In this optional embodiment, by adding a specific element to the virtual object, the richness of the content expressed by the virtual object can be increased and the diversity of the display effect can be improved. Moreover, the specific element takes the form of third image data, so adding it into the masked image data, which is likewise image data, is comparatively convenient.
Optionally, the above positional relationship is a correspondence between the elements of the second pixel matrix and the elements of the first pixel matrix, where the second pixel matrix is the pixel matrix corresponding to the third image data, and the first pixel matrix is the pixel matrix corresponding to the masked image data.
Correspondingly, the above step of adding the third image data into the masked image data according to the positional relationship to obtain the special-effect image data may specifically include:
converting the masked image data and the third image data into a first matrix and a second matrix respectively;
adding the elements of the second matrix into the first matrix according to the positional relationship, to obtain a third matrix;
converting the third matrix into image data, to obtain the special-effect image data.
The second pixel matrix corresponding to the third image data is the matrix obtained by taking each pixel of the third image data as an element of the matrix and taking the position of each pixel of the third image data within the third image data as the position of that element in the matrix. Similarly, the first pixel matrix is the matrix obtained by taking each pixel of the masked image data as an element of the matrix and taking the position of each pixel of the masked image data within the masked image data as the position of that element in the matrix. Therefore, the correspondence between the elements of the second pixel matrix and the elements of the first pixel matrix can be used as the positional relationship between the third image data and the masked image data.
On this basis, in order to add the third image data into the masked image data according to the positional relationship and obtain the special-effect image data, the masked image data needs to be converted into the first matrix and the third image data needs to be converted into the second matrix, so that the elements of the second matrix can be added into the first matrix according to the positional relationship to obtain the third matrix, and the third matrix can then be converted into image data to obtain the special-effect image data. Illustratively, if the positional relationship is that elements S11 to S36 of the second matrix correspond one by one to the upper-left elements F11 to F36 of the first matrix, then elements S11 to S36 of the second matrix can be added at the positions of elements F11 to F36 of the first matrix, to obtain the third matrix.
In this optional embodiment, the third image data serving as the specific element is added by pixel position. Compared with the traditional way of adding by drawing, this comparatively reduces the complexity and the amount of data to be processed during addition, and can reduce the consumption of data processing resources.
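A minimal sketch of the matrix-based addition is given below, assuming NumPy arrays and expressing the positional relationship as a top-left anchor of the overlaid block (the S11..S36 onto F11..F36 example above corresponds to anchor (0, 0)).

```python
import numpy as np

def add_specific_element(first_matrix, second_matrix, top_left=(0, 0)):
    """Add the elements of the second matrix (the specific element, e.g. a user
    avatar) into the first matrix (the masked image data) at the positions given
    by the positional relationship, producing the third matrix."""
    third_matrix = first_matrix.copy()
    r, c = top_left
    h, w = second_matrix.shape[:2]
    third_matrix[r:r + h, c:c + w] = second_matrix   # element-wise placement
    return third_matrix

# Converting the third matrix back into image data yields the special-effect image data.
```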
Corresponding to the above method embodiments, an embodiment of the present invention further provides an apparatus for generating virtual objects.
As shown in Fig. 3, the apparatus for generating virtual objects provided by an embodiment of the present invention is applied to a server and may include:
a data obtaining module 301, configured to, when a generation instruction for a virtual object is detected, obtain the first image data, the audio data and the play timestamp of the first video corresponding to the generation instruction, where the first image data provides the picture of the first video, the audio data provides the sound of the first video, and the play timestamp is an identifier used to guarantee that the content of the first image data and the content of the audio data are synchronized;
a virtual object generation module 302, configured to adjust the first image data according to the appearance information of the preset virtual object, and transcode the adjusted first image data to obtain the virtual object;
a virtual object display module 303, configured to display the virtual object and play the audio data according to the play timestamp.
In the solution provided by the embodiment of the present invention, when a generation instruction for a virtual object is detected, the server obtains the first image data, the audio data and the play timestamp of the first video corresponding to the generation instruction, adjusts the first image data according to the appearance information of the preset virtual object, and transcodes the adjusted first image data to obtain the virtual object; the virtual object is then displayed and the audio data is played according to the play timestamp. Since the server can use the first image data of the first video to generate a virtual object that meets the appearance information of the preset virtual object, and uses the audio data of the first video as the audio of the virtual object, the picture effect of the virtual object is the same as the content of the first image data and the audio is the same as the content of the audio data. On this basis, the play timestamp of the first video can guarantee that the content of the first image data and the content of the audio data are synchronized. Therefore, displaying the virtual object and playing the audio data according to the play timestamp can guarantee that the picture effect of the virtual object is synchronized with the audio. It can be seen that this solution can guarantee a diversified display effect of the generated virtual object while achieving audio-picture synchronization.
As shown in Fig. 4, another embodiment of the present invention provides an apparatus for generating a virtual object, applied to a server; the apparatus may include:
a data acquisition module 401, configured to, upon detecting a generation instruction for a virtual object, acquire first image data, audio data and a play time stamp of a first video corresponding to the generation instruction, wherein the first image data provides the picture of the first video, the audio data provides the sound of the first video, and the play time stamp is a mark used to keep the content of the first image data and the audio data synchronized; and to acquire second image data of a second video whose picture color is a transparent color, the second image data providing the picture of the second video;
a virtual object generation module 402, comprising a mask submodule 4021 and a virtual object acquisition submodule 4022;
the mask submodule 4021 is configured to adjust the first image data and the second image data according to the appearance information of the preset virtual object to obtain adjusted first image data and adjusted second image data; to acquire, within the display area used for displaying the virtual object corresponding to the generation instruction, the transparent positions that belong to the transparent region; and to mask, using the adjusted second image data, the transparent positions in the adjusted first image data as the transparent color to obtain masked image data;
the virtual object acquisition submodule 4022 is configured to obtain the virtual object based on the masked image data;
a virtual object display module 403, configured to display the virtual object and play the audio data according to the play time stamp.
Optionally, the mask submodule 4021 is specifically configured to:
transcode the adjusted second image data to obtain a mask video, the display position of the mask video being the transparent positions;
transcode the adjusted first image data to obtain an article video, the display position of the article video being the positions in the display area other than the transparent positions; and
display the mask video and the article video jointly to obtain the masked image data.
Optionally, the mask submodule 4021 is specifically configured to:
take the transparent positions of the adjusted first image data as a transparent channel;
fill the pixels of the adjusted second image data located at the transparent positions into the transparent channel to obtain transparent channel data;
take the pixels of the adjusted first image data other than the pixels at the transparent positions as non-transparent channel data; and
render the transparent channel data and the non-transparent channel data of the adjusted first image data to obtain the masked image data.
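Only to illustrate this transparent-channel style of masking (the RGBA layout, the boolean transparent_places array and the function name are assumptions introduced here, not the patent's own implementation), a single frame could be composed roughly as follows:

```python
import numpy as np

def mask_frame(adjusted_first: np.ndarray,
               adjusted_second: np.ndarray,
               transparent_places: np.ndarray) -> np.ndarray:
    """Compose a masked frame from the adjusted first/second image data.

    adjusted_first, adjusted_second: H x W x 4 RGBA frames of equal size.
    transparent_places: H x W boolean array, True where the display area
                        belongs to the transparent region.
    """
    masked = adjusted_first.copy()
    # Transparent channel: take the pixels of the second (transparent-color)
    # video at the transparent positions, so those positions render as transparent.
    masked[transparent_places] = adjusted_second[transparent_places]
    # Non-transparent channel: pixels of the first video elsewhere are kept as-is.
    return masked
```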
Optionally, the data acquisition module 401 is specifically configured to:
acquire third image data and the positional relationship between the third image data and the masked image data, the third image data being used as a specific element of the virtual object; and
add the third image data into the masked image data according to the positional relationship to obtain special-effect image data;
correspondingly, the virtual object acquisition submodule 4022 is specifically configured to:
transcode the special-effect image data to obtain the virtual object.
Optionally, the positional relationship is the correspondence between the elements of a second pixel matrix and the elements of a first pixel matrix; the second pixel matrix is the pixel matrix corresponding to the third image data, and the first pixel matrix is the pixel matrix corresponding to the masked image data;
the data acquisition module 401 is specifically configured to:
convert the masked image data and the third image data into a first matrix and a second matrix, respectively;
add the elements of the second matrix into the first matrix according to the positional relationship to obtain a third matrix; and
convert the third matrix into image data to obtain the special-effect image data.
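A minimal sketch of the image-data/matrix conversion performed here, assuming raw RGBA byte buffers and NumPy (both are assumptions of this illustration, not the patent's stated representation):

```python
import numpy as np

def image_to_matrix(pixel_bytes: bytes, width: int, height: int) -> np.ndarray:
    """Build a pixel matrix: each pixel becomes an element whose matrix
    position equals the pixel's position in the image."""
    return np.frombuffer(pixel_bytes, dtype=np.uint8).reshape(height, width, 4).copy()

def matrix_to_image(matrix: np.ndarray) -> bytes:
    """Convert the third matrix back into raw image data."""
    return matrix.astype(np.uint8).tobytes()
```

With helpers of this kind, the special-effect image data would be obtained by converting both inputs to matrices, applying an element addition like the one sketched earlier, and converting the result back to image data.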
Corresponding to the above embodiments, an embodiment of the present invention further provides an electronic device. As shown in Fig. 5, the device may include:
a processor 501, a communication interface 502, a memory 503 and a communication bus 504, wherein the processor 501, the communication interface 502 and the memory 503 communicate with one another through the communication bus 504;
the memory 503 is configured to store a computer program;
the processor 501 is configured to, when executing the computer program stored in the memory 503, implement the steps of any of the virtual object generation methods in the above embodiments.
It can be understood that the electronic device in Fig. 5 of this embodiment of the present invention may specifically be a server corresponding to an Internet client.
In the solution provided by this embodiment of the present invention, upon detecting a generation instruction for a virtual object, the server acquires the first image data, the audio data and the play time stamp of the first video corresponding to the generation instruction; it can then adjust the first image data according to the appearance information of the preset virtual object and transcode the adjusted first image data to obtain the virtual object, so that the virtual object is displayed and the audio data is played according to the play time stamp. Because the server uses the first image data of the first video to generate a virtual object that conforms to the appearance information of the preset virtual object, and uses the audio data of the first video as the audio of the virtual object, the picture effect of the virtual object is identical to the content of the first image data and its audio is identical to the content of the audio data. On this basis, the play time stamp of the first video keeps the content of the first image data and the audio data synchronized, so displaying the virtual object and playing the audio data according to the play time stamp ensures that the picture effect of the virtual object stays synchronized with the audio. It can be seen that this solution guarantees both diverse display effects and picture-sound synchronization for the generated virtual object.
The above memory may include RAM (Random Access Memory) and may also include NVM (Non-Volatile Memory), for example at least one disk memory. Optionally, the memory may also be at least one storage device located away from the above processor.
The above processor may be a general-purpose processor, including a CPU (Central Processing Unit), an NP (Network Processor), etc.; it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
An embodiment of the present invention further provides a computer-readable storage medium contained in an electronic device. The computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of any of the virtual object generation methods in the above embodiments.
In the solution provided by this embodiment of the present invention, upon detecting a generation instruction for a virtual object, the server acquires the first image data, the audio data and the play time stamp of the first video corresponding to the generation instruction; it can then adjust the first image data according to the appearance information of the preset virtual object and transcode the adjusted first image data to obtain the virtual object, so that the virtual object is displayed and the audio data is played according to the play time stamp. Because the server uses the first image data of the first video to generate a virtual object that conforms to the appearance information of the preset virtual object, and uses the audio data of the first video as the audio of the virtual object, the picture effect of the virtual object is identical to the content of the first image data and its audio is identical to the content of the audio data. On this basis, the play time stamp of the first video keeps the content of the first image data and the audio data synchronized, so displaying the virtual object and playing the audio data according to the play time stamp ensures that the picture effect of the virtual object stays synchronized with the audio. It can be seen that this solution guarantees both diverse display effects and picture-sound synchronization for the generated virtual object.
In another embodiment provided by the present invention, a computer program product containing instructions is further provided which, when run on a computer, causes the computer to execute any of the virtual object generation methods in the above embodiments.
The above embodiments may be implemented in whole or in part by software, hardware, firmware or any combination thereof. When implemented in software, they may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wire (such as coaxial cable, optical fiber or DSL (Digital Subscriber Line)) or wirelessly (such as infrared, radio or microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk or a magnetic tape), an optical medium (for example, a DVD (Digital Versatile Disc)) or a semiconductor medium (for example, an SSD (Solid State Disk)), etc.
Herein, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" and any other variants thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or device that includes a list of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such process, method, article or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article or device that includes that element.
The embodiments in this specification are described in a related manner; the same or similar parts of the embodiments may be referred to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the apparatus and electronic device embodiments are substantially similar to the method embodiments, they are described relatively briefly; for relevant details, reference may be made to the description of the method embodiments.
The foregoing describes merely preferred embodiments of the present invention and is not intended to limit its scope. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (10)
1. A method for generating a virtual object, applied to a server, the method comprising:
upon detecting a generation instruction for a virtual object, acquiring first image data, audio data and a play time stamp of a first video corresponding to the generation instruction; wherein the first image data provides the picture of the first video, the audio data provides the sound of the first video, and the play time stamp is a mark used to keep the content of the first image data and the audio data synchronized;
adjusting the first image data according to appearance information of a preset virtual object, and transcoding the adjusted first image data to obtain the virtual object;
displaying the virtual object and playing the audio data according to the play time stamp.
2. The method according to claim 1, wherein before the step of adjusting the first image data according to the appearance information of the preset virtual object and transcoding the adjusted first image data to obtain the virtual object, the method further comprises:
acquiring second image data of a second video whose picture color is a transparent color, the second image data providing the picture of the second video;
the step of adjusting the first image data according to the appearance information of the preset virtual object and transcoding the adjusted first image data to generate the virtual object comprises:
adjusting the first image data and the second image data according to the appearance information of the preset virtual object to obtain adjusted first image data and adjusted second image data;
acquiring, within a display area used for displaying the virtual object corresponding to the generation instruction, the transparent positions that belong to a transparent region;
masking, using the adjusted second image data, the transparent positions in the adjusted first image data as the transparent color to obtain masked image data;
obtaining the virtual object based on the masked image data.
3. The method according to claim 2, wherein the step of masking, using the adjusted second image data, the transparent positions in the adjusted first image data as the transparent color to obtain the masked image data comprises:
transcoding the adjusted second image data to obtain a mask video, the display position of the mask video being the transparent positions;
transcoding the adjusted first image data to obtain an article video, the display position of the article video being the positions in the display area other than the transparent positions;
setting the mask video and the article video to be displayed jointly to obtain the masked image data.
4. The method according to claim 2, wherein the step of masking, using the adjusted second image data, the transparent positions in the adjusted first image data as the transparent color to obtain the masked image data comprises:
taking the transparent positions of the adjusted first image data as a transparent channel;
filling the pixels of the adjusted second image data located at the transparent positions into the transparent channel to obtain transparent channel data;
taking the pixels of the adjusted first image data other than the pixels at the transparent positions as non-transparent channel data;
rendering the transparent channel data and the non-transparent channel data of the adjusted first image data to obtain the masked image data.
5. An apparatus for generating a virtual object, applied to a server, the apparatus comprising:
a data acquisition module, configured to, upon detecting a generation instruction for a virtual object, acquire first image data, audio data and a play time stamp of a first video corresponding to the generation instruction; wherein the first image data provides the picture of the first video, the audio data provides the sound of the first video, and the play time stamp is a mark used to keep the content of the first image data and the audio data synchronized;
a virtual object generation module, configured to adjust the first image data according to appearance information of a preset virtual object and transcode the adjusted first image data to obtain the virtual object;
a virtual object display module, configured to display the virtual object and play the audio data according to the play time stamp.
6. The apparatus according to claim 5, wherein the data acquisition module is specifically configured to:
before the first image data is adjusted according to the appearance information of the preset virtual object and the adjusted first image data is transcoded to obtain the virtual object, acquire second image data of a second video whose picture color is a transparent color, the second image data providing the picture of the second video;
the virtual object generation module comprises a mask submodule and a virtual object acquisition submodule;
the mask submodule is configured to adjust the first image data and the second image data according to the appearance information of the preset virtual object to obtain adjusted first image data and adjusted second image data; acquire, within a display area used for displaying the virtual object corresponding to the generation instruction, the transparent positions that belong to a transparent region; and mask, using the adjusted second image data, the transparent positions in the adjusted first image data as the transparent color to obtain masked image data;
the virtual object acquisition submodule is configured to obtain the virtual object based on the masked image data.
7. The apparatus according to claim 6, wherein the mask submodule is specifically configured to:
transcode the adjusted second image data to obtain a mask video, the display position of the mask video being the transparent positions;
transcode the adjusted first image data to obtain an article video, the display position of the article video being the positions in the display area other than the transparent positions;
display the mask video and the article video jointly to obtain the masked image data.
8. The apparatus according to claim 6, wherein the mask submodule is specifically configured to:
take the transparent positions of the adjusted first image data as a transparent channel;
fill the pixels of the adjusted second image data located at the transparent positions into the transparent channel to obtain transparent channel data;
take the pixels of the adjusted first image data other than the pixels at the transparent positions as non-transparent channel data;
render the transparent channel data and the non-transparent channel data of the adjusted first image data to obtain the masked image data.
9. An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the bus; the memory is configured to store a computer program; and the processor is configured to, when executing the program stored in the memory, implement the method steps of any one of claims 1-4.
10. A computer-readable storage medium, wherein a computer program is stored in the storage medium, and when the computer program is executed by a processor, the method steps of any one of claims 1-4 are implemented.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910578523.7A CN110213640B (en) | 2019-06-28 | 2019-06-28 | Virtual article generation method, device and equipment |
PCT/CN2020/077034 WO2020258907A1 (en) | 2019-06-28 | 2020-02-27 | Virtual article generation method, apparatus and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910578523.7A CN110213640B (en) | 2019-06-28 | 2019-06-28 | Virtual article generation method, device and equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110213640A true CN110213640A (en) | 2019-09-06 |
CN110213640B CN110213640B (en) | 2021-05-14 |
Family
ID=67795510
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910578523.7A Active CN110213640B (en) | 2019-06-28 | 2019-06-28 | Virtual article generation method, device and equipment |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110213640B (en) |
WO (1) | WO2020258907A1 (en) |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101005609B (en) * | 2006-01-21 | 2010-11-03 | 腾讯科技(深圳)有限公司 | Method and system for forming interaction video frequency image |
CN106713988A (en) * | 2016-12-09 | 2017-05-24 | 福建星网视易信息系统有限公司 | Beautifying method and system for virtual scene live |
CN108769826A (en) * | 2018-06-22 | 2018-11-06 | 广州酷狗计算机科技有限公司 | Live media stream acquisition methods, device, terminal and storage medium |
CN110213640B (en) * | 2019-06-28 | 2021-05-14 | 香港乐蜜有限公司 | Virtual article generation method, device and equipment |
Patent Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102289339A (en) * | 2010-06-21 | 2011-12-21 | 腾讯科技(深圳)有限公司 | Method and device for displaying expression information |
CN101950405A (en) * | 2010-08-10 | 2011-01-19 | 浙江大学 | Video content-based watermarks adding method |
CN102663785A (en) * | 2012-03-29 | 2012-09-12 | 上海华勤通讯技术有限公司 | Mobile terminal and image processing method thereof |
CN104619258A (en) * | 2012-09-13 | 2015-05-13 | 富士胶片株式会社 | Device and method for displaying three-dimensional image, and program |
CN104995662A (en) * | 2013-03-20 | 2015-10-21 | 英特尔公司 | Avatar-based transfer protocols, icon generation and doll animation |
CN105338410A (en) * | 2014-07-07 | 2016-02-17 | 乐视网信息技术(北京)股份有限公司 | Method and device for displaying barrage of video |
KR20160064328A (en) * | 2014-11-27 | 2016-06-08 | 정승화 | Apparatus and method for supporting special effects with motion cartoon systems |
CN106303653A (en) * | 2016-08-12 | 2017-01-04 | 乐视控股(北京)有限公司 | A kind of image display method and device |
WO2018116468A1 (en) * | 2016-12-22 | 2018-06-28 | マクセル株式会社 | Projection video display device and method of video display therefor |
CN107027046A (en) * | 2017-04-13 | 2017-08-08 | 广州华多网络科技有限公司 | Auxiliary live audio/video processing method and device |
CN107169872A (en) * | 2017-05-09 | 2017-09-15 | 北京龙杯信息技术有限公司 | Method, storage device and terminal for generating virtual present |
CN108174227A (en) * | 2017-12-27 | 2018-06-15 | 广州酷狗计算机科技有限公司 | Display methods, device and the storage medium of virtual objects |
CN108093307A (en) * | 2017-12-29 | 2018-05-29 | 广州酷狗计算机科技有限公司 | Obtain the method and system of played file |
CN109413338A (en) * | 2018-09-28 | 2019-03-01 | 北京戏精科技有限公司 | A kind of method and system of scan picture |
CN109300180A (en) * | 2018-10-18 | 2019-02-01 | 看见故事(苏州)影视文化发展有限公司 | A kind of 3D animation method and calculate producing device |
CN109191549A (en) * | 2018-11-14 | 2019-01-11 | 广州酷狗计算机科技有限公司 | Show the method and device of animation |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020258907A1 (en) * | 2019-06-28 | 2020-12-30 | 香港乐蜜有限公司 | Virtual article generation method, apparatus and device |
CN112348969A (en) * | 2020-11-06 | 2021-02-09 | 北京市商汤科技开发有限公司 | Display method and device in augmented reality scene, electronic equipment and storage medium |
CN112348969B (en) * | 2020-11-06 | 2023-04-25 | 北京市商汤科技开发有限公司 | Display method and device in augmented reality scene, electronic equipment and storage medium |
WO2022271086A1 (en) * | 2021-06-21 | 2022-12-29 | Lemon Inc. | Rendering virtual articles of clothing based on audio characteristics |
Also Published As
Publication number | Publication date |
---|---|
CN110213640B (en) | 2021-05-14 |
WO2020258907A1 (en) | 2020-12-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102560187B1 (en) | Method and system for rendering virtual reality content based on two-dimensional ("2D") captured images of a three-dimensional ("3D") scene | |
CN110351592B (en) | Animation presentation method and device, computer equipment and storage medium | |
CN110536151A (en) | The synthetic method and device of virtual present special efficacy, live broadcast system | |
WO2022048097A1 (en) | Single-frame picture real-time rendering method based on multiple graphics cards | |
JP4481166B2 (en) | Method and system enabling real-time mixing of composite and video images by a user | |
CN106713988A (en) | Beautifying method and system for virtual scene live | |
CN110493630A (en) | The treating method and apparatus of virtual present special efficacy, live broadcast system | |
CN108108140B (en) | Multi-screen cooperative display method, storage device and equipment supporting 3D display | |
CN110213640A (en) | Generation method, device and the equipment of virtual objects | |
CN111669646A (en) | Method, device, equipment and medium for playing transparent video | |
US20080012988A1 (en) | System and method for virtual content placement | |
US20120170833A1 (en) | Multi-view image generating method and apparatus | |
CN108235055A (en) | Transparent video implementation method and equipment in AR scenes | |
US11468629B2 (en) | Methods and apparatus for handling occlusions in split rendering | |
US11989814B2 (en) | Image processing method and apparatus, electronic device, and storage medium | |
CN105554416A (en) | FPGA (Field Programmable Gate Array)-based high-definition video fade-in and fade-out processing system and method | |
CN105578172B (en) | Bore hole 3D image display methods based on Unity3D engines | |
JP2019527899A (en) | System and method for generating a 3D interactive environment using virtual depth | |
CN113781660A (en) | Method and device for rendering and processing virtual scene on line in live broadcast room | |
CN103685976A (en) | A method and a device for raising the display quality of an LED display screen in recording a program | |
KR102073230B1 (en) | Apparaturs for playing vr video to improve quality of specific area | |
EP4178199A1 (en) | Information processing device, information processing method, video distribution method, and information processing system | |
US20200221165A1 (en) | Systems and methods for efficient video content transition effects generation | |
Kim et al. | Design and implementation for interactive augmented broadcasting system | |
CN113301425A (en) | Video playing method, video playing device and electronic equipment |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
20210527 | TR01 | Transfer of patent right | Effective date of registration: 20210527. Address after: 25, 5th floor, shuangjingfang office building, 3 frisha street, Singapore; Patentee after: Zhuomi Private Ltd. Address before: Room 1101, Santai Commercial Building, 139 Connaught Road, Hong Kong, China; Patentee before: HONG KONG LIVE.ME Corp.,Ltd.