WO2020258907A1 - Virtual article generation method, apparatus and device - Google Patents
Virtual article generation method, apparatus and device
- Publication number
- WO2020258907A1 (PCT/CN2020/077034)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image data
- data
- adjusted
- transparent
- virtual item
- Prior art date
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/233—Processing of audio elementary streams
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/235—Processing of additional data, e.g. scrambling of additional data or processing content descriptors
- H04N21/2355—Processing of additional data involving reformatting operations of additional data, e.g. HTML pages
- H04N21/4307—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
- H04N21/4312—Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- H04N21/8547—Content authoring involving timestamps for synchronizing content
Definitions
- This application relates to the technical field of virtual items, and in particular to a method, apparatus, and device for generating virtual items.
- Internet-related clients of various types usually provide preconfigured virtual items so that users can carry out virtual activities with them.
- For example, users can use virtual items to carry out virtual transactions, decorate personal online communities, and so on.
- At present, virtual items are usually static images, such as flower images or fireworks images. Since a static image has only one fixed picture, the display effect of the generated virtual item tends to be relatively monotonous.
- To address this, sound effects can be added to virtual items to diversify their display effects.
- However, adding sound effects often amounts to simply playing music while the virtual item is displayed, which easily causes the picture and the sound of the virtual item to fall out of step. How to ensure audio-video synchronization in the diversified display effects of generated virtual items is therefore an urgent problem to be solved.
- The purpose of the embodiments of the present application is to provide a method, apparatus, and device for generating virtual items, so as to ensure audio-video synchronization in the diversified display effects of the generated virtual items.
- the specific technical solutions are as follows:
- In a first aspect, an embodiment of the present application provides a method for generating a virtual item, applied to a server, the method including:
- when a generation instruction for a virtual item is detected, acquiring first image data, audio data, and a playback timestamp of a first video corresponding to the generation instruction; wherein the first image data provides the picture of the first video, the audio data provides the sound of the first video, and the playback timestamp is an identifier used to ensure that the content of the first image data and the audio data is synchronized;
- adjusting the first image data according to preset shape information about the virtual item to obtain adjusted first image data, and transcoding the adjusted first image data to obtain screen data of the virtual item;
- according to the playback timestamp, synchronously playing the screen data and the audio data of the virtual item to obtain the virtual item corresponding to the generation instruction.
- In one implementation, before the step of transcoding the adjusted first image data to obtain the screen data of the virtual item, the method further includes: acquiring second image data of a second video whose picture color is a transparent color; the second image data provides the picture of the second video.
- In this case, the step of transcoding the adjusted first image data to obtain the screen data of the virtual item includes: adjusting the second image data according to the preset shape information about the virtual item to obtain adjusted second image data; acquiring the transparent positions belonging to a transparent area in the adjusted first image data; using the adjusted second image data to mask the transparent positions in the adjusted first image data with a transparent color to obtain masked image data; and transcoding the masked image data to obtain the screen data of the virtual item.
- The step of using the adjusted second image data to mask the transparent positions in the adjusted first image data with a transparent color to obtain the masked image data may include: displaying the adjusted second image data at the transparent positions in the adjusted first image data to obtain the masked image data.
- In one implementation, before the step of transcoding the masked image data to obtain the screen data of the virtual item, the method further includes: acquiring third image data and the positional relationship between the third image data and the masked image data; the third image data serves as a special effect element of the virtual item.
- In this case, the step of transcoding the masked image data to obtain the screen data of the virtual item includes: obtaining special effect image data according to the positional relationship, and transcoding the special effect image data to obtain the screen data of the virtual item.
- The positional relationship includes: taking each pixel of the masked image data as an element of a matrix, and taking the position of each pixel within the masked image data as the position of the corresponding element in the matrix; the positional relationship is the correspondence between the elements of a second pixel matrix and the elements of a first pixel matrix, where the second pixel matrix is the pixel matrix of the pixels in the third image data, and the first pixel matrix is the pixel matrix of the pixels in the masked image data.
- The third matrix obtained in this way is converted into image data to obtain the special effect image data.
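As a rough illustration of the pixel-matrix description above, the sketch below treats each pixel of the masked image data as an element of a first pixel matrix and overlays special-effect pixels from a second pixel matrix according to a positional relationship. The helper names and the mapping format are invented for illustration and are not taken from the patent.

```python
# Hypothetical sketch: each pixel of the masked image data is an element
# of a matrix, and its position within the image gives the element's
# position in the matrix.  All names here are illustrative.

def image_to_matrix(pixels, width, height):
    """Arrange a flat pixel list into a height x width matrix."""
    return [[pixels[row * width + col] for col in range(width)]
            for row in range(height)]

def apply_positional_relationship(first_matrix, second_matrix, mapping):
    """Overlay special-effect pixels onto the masked-image matrix.

    `mapping` is the positional relationship: it maps a (row, col)
    element of the second pixel matrix (third image data) to a
    (row, col) element of the first pixel matrix (masked image data).
    """
    third = [row[:] for row in first_matrix]  # copy the first matrix
    for (src_r, src_c), (dst_r, dst_c) in mapping.items():
        third[dst_r][dst_c] = second_matrix[src_r][src_c]
    return third

# Example: 2x2 masked image, one special-effect pixel placed at (0, 1).
first = image_to_matrix([0, 0, 0, 0], width=2, height=2)
second = image_to_matrix([9, 9, 9, 9], width=2, height=2)
result = apply_positional_relationship(first, second, {(0, 0): (0, 1)})
```

The resulting third matrix would then be converted back into image data to obtain the special effect image data.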
- In a second aspect, an embodiment of the present application provides a virtual item generation apparatus applied to a server, the apparatus including:
- a data acquisition module, configured to acquire first image data, audio data, and a playback timestamp of a first video corresponding to a generation instruction when the generation instruction of a virtual item is detected; wherein the first image data provides the picture of the first video, the audio data provides the sound of the first video, and the playback timestamp is an identifier used to ensure that the content of the first image data and the audio data is synchronized;
- a screen data generation module, configured to adjust the first image data according to preset shape information about the virtual item to obtain adjusted first image data, and to transcode the adjusted first image data to obtain the screen data of the virtual item;
- a virtual item display module, configured to synchronously play the screen data and the audio data of the virtual item according to the playback timestamp to obtain the virtual item corresponding to the generation instruction.
- In one implementation, the data acquisition module is further configured to: before the adjusted first image data is transcoded to obtain the screen data of the virtual item, acquire second image data of a second video whose picture color is a transparent color; the second image data provides the picture of the second video.
- In this case, the screen data generation module includes a mask sub-module and a transcoding sub-module;
- the mask sub-module is configured to adjust the second image data according to the preset shape information about the virtual item to obtain adjusted second image data; acquire the transparent positions belonging to a transparent area in the adjusted first image data; and use the adjusted second image data to mask the transparent positions in the adjusted first image data with a transparent color to obtain masked image data;
- the transcoding sub-module is configured to transcode the masked image data to obtain the screen data of the virtual item.
- In one implementation, the mask sub-module is specifically configured to: display the adjusted second image data at the transparent positions in the adjusted first image data to obtain the masked image data.
- In one implementation, the data acquisition module is further configured to: before the transcoding sub-module transcodes the masked image data to obtain the screen data of the virtual item, acquire third image data and the positional relationship between the third image data and the masked image data; the third image data serves as a special effect element of the virtual item.
- In this case, the transcoding sub-module is specifically configured to obtain special effect image data according to the positional relationship, and to transcode the special effect image data to obtain the screen data of the virtual item.
- The positional relationship includes: taking each pixel of the masked image data as an element of a matrix, and taking the position of each pixel within the masked image data as the position of the corresponding element in the matrix; the positional relationship is the correspondence between the elements of a second pixel matrix and the elements of a first pixel matrix, where the second pixel matrix is the pixel matrix of the pixels in the third image data, and the first pixel matrix is the pixel matrix of the pixels in the masked image data.
- The third matrix obtained in this way is converted into image data to obtain the special effect image data.
- In a third aspect, an embodiment of the present application provides an electronic device, which includes:
- a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with each other through the communication bus;
- the memory is configured to store a computer program;
- the processor is configured to execute the program stored in the memory to implement the steps of the method for generating virtual items provided in the first aspect.
- In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium in which a computer program is stored; when the computer program is executed by a processor, the steps of the method for generating a virtual item provided in the first aspect are implemented.
- In the above solutions, when the server detects a generation instruction for a virtual item, it acquires the first image data, audio data, and playback timestamp of the first video corresponding to the generation instruction; it can then adjust the first image data according to the preset shape information about the virtual item, transcode the adjusted first image data to obtain the screen data of the virtual item, and synchronously play the screen data and the audio data of the virtual item according to the playback timestamp to obtain the virtual item corresponding to the generation instruction.
- Because the screen data and the audio data needed to generate the virtual item are extracted from the same video, whose audio and video are already synchronized, and because they are played according to the playback timestamp when the virtual item is generated, the sound effect and the picture effect of the virtual item stay synchronized. This solution therefore ensures audio-video synchronization in the diversified display effects of the generated virtual items.
- FIG. 1 is a schematic flowchart of a method for generating virtual items provided by an embodiment of the application
- FIG. 2 is a schematic flowchart of a method for generating virtual items provided by another embodiment of the application.
- FIG. 3 is a schematic structural diagram of a virtual item generating apparatus provided by an embodiment of the application.
- FIG. 4 is a schematic structural diagram of a virtual item generating apparatus provided by another embodiment of the application.
- FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the application.
- the method for generating virtual items provided by the embodiments of this application can be applied to a server corresponding to an Internet-related client.
- The server may be a desktop computer, a portable computer, an Internet TV, a smart mobile terminal, a wearable smart terminal, and so on; this is not limited here, and any server that can implement the embodiments of the present application falls within the protection scope of the embodiments of the present application.
- the client related to the Internet may be a live broadcast client, or a social client, etc.
- the virtual items may be gifts, pendants and other virtual items.
- the flow of the method for generating virtual items according to an embodiment of the present application may include the following steps:
- Related personnel can issue generation instructions for different virtual items.
- Of course, the issuer of a virtual item generation instruction is not limited to related personnel; it is also reasonable for a device to issue a generation instruction for a virtual item automatically.
- One generation instruction is used to generate one virtual item, one first video is used to provide the picture and sound of one virtual item, and thus one first video corresponds to one generation instruction.
- Therefore, when the generation instruction of a virtual item is detected, the first image data, audio data, and playback timestamp of the first video corresponding to the generation instruction can be acquired, so that the virtual item corresponding to the generation instruction can be generated through the subsequent steps S102 to S103.
- The different first videos may be independent videos that are unrelated to each other, or they may be different video segments obtained by segmenting the same original video.
- The first video may specifically include the first image data and the audio data: the first image data provides the picture of the first video, and the audio data provides the sound of the first video. For example, the first image data may be an image frame queue of the first video, and the audio data may be an audio packet queue of the first video.
- The method for acquiring the first image data and the audio data may specifically include: using a preset model to read the first video to obtain an image frame queue and an audio packet queue, which serve as the first image data and the audio data, respectively.
- the preset model is a model capable of video capture and decoding.
- For example, the preset model may be FFMPEG (Fast Forward MPEG), a tool that provides functions such as video capture, decoding, and video format conversion. In this case, FFMPEG can be used to read the first video to obtain the first image data and the audio data.
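As an illustrative sketch only (the patent merely states that a preset model such as FFMPEG reads the first video), the following builds FFmpeg command lines that split a video into an image-frame sequence and an audio stream; the file names are placeholders.

```python
# Hedged sketch of using the FFmpeg command-line tool to separate a
# first video into video frames and audio, roughly matching the image
# frame queue / audio packet queue described above.

def frame_extract_cmd(video_path, frame_pattern="frame_%04d.png"):
    """Command that decodes every video frame to an image file."""
    return ["ffmpeg", "-i", video_path, frame_pattern]

def audio_extract_cmd(video_path, audio_path="audio.aac"):
    """Command that copies the audio stream without re-encoding."""
    return ["ffmpeg", "-i", video_path, "-vn", "-acodec", "copy", audio_path]

# The commands could then be run with, e.g., subprocess.run(cmd, check=True).
print(frame_extract_cmd("first_video.mp4"))
print(audio_extract_cmd("first_video.mp4"))
```

In a real implementation the decoded frames and audio packets would also carry presentation timestamps, which is what the playback timestamp below corresponds to.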
- the playback time stamp is an identifier used to ensure that the content of the first image data and the audio data are synchronized.
- For example, the playback timestamps may include 1 second and 2 seconds: in the first image data, the playback timestamp of video frame VF1 is 1 second and that of video frame VF2 is 2 seconds; in the audio data, the playback timestamp of audio packet AP1 is 1 second and that of audio packet AP2 is 2 seconds.
- The first image data and the audio data can then be played synchronously according to the playback timestamps to ensure that their content is synchronized: from 0 to 1 second, video frame VF1 and audio packet AP1, whose playback timestamps are both 1 second, are played together; afterwards, video frame VF2 and audio packet AP2, whose playback timestamps are both 2 seconds, are played together.
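The VF1/AP1 example above can be sketched as follows; the data structures are illustrative assumptions, not the patent's internal representation.

```python
# Minimal sketch of the synchronisation idea: video frames and audio
# packets carrying the same playback timestamp are played together.

video_frames = [("VF1", 1), ("VF2", 2)]   # (frame, playback timestamp in s)
audio_packets = [("AP1", 1), ("AP2", 2)]  # (packet, playback timestamp in s)

def pair_by_timestamp(frames, packets):
    """Group frames and packets that share a playback timestamp."""
    packet_index = {ts: pkt for pkt, ts in packets}
    return [(frame, packet_index[ts])
            for frame, ts in frames if ts in packet_index]

pairs = pair_by_timestamp(video_frames, audio_packets)
# pairs -> [("VF1", "AP1"), ("VF2", "AP2")]
```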
- the playback timestamp of the first video can be acquired, so that in the subsequent step S103, the screen data and audio data of the virtual item are synchronously played according to the acquired playback timestamp to obtain the virtual item corresponding to the generation instruction.
- the screen data and audio data of the virtual item can be played synchronously to ensure that the screen effect and sound effect of the generated virtual item are synchronized.
- S102 Adjust the first image data according to the preset shape information about the virtual item to obtain the adjusted first image data, and transcode the adjusted first image data to obtain the screen data of the virtual item.
- the preset shape information about the virtual item is information about the shape of the virtual item, which may be of various types.
- the preset shape information about the virtual item may include at least one of the preset size, color, brightness and shape of the virtual item and other information about the shape of the virtual item.
- Adjusting the first image data according to the preset shape information about the virtual item may specifically be: adjusting the appearance parameters of the video frames included in the first image data to match the preset shape information about the virtual item.
- For example, the size, color, brightness, and shape of the video frames included in the first image data can be adjusted to the preset size S1, color C1, brightness L1, and shape F1, respectively.
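A minimal sketch of this adjustment step, assuming a simple dictionary representation of a video frame's appearance parameters (the representation is invented for illustration):

```python
# Hedged sketch: set each frame's appearance parameters to the preset
# shape information (size S1, color C1, brightness L1, shape F1 in the
# example above).  Frame layout is hypothetical.

PRESET_SHAPE_INFO = {"size": "S1", "color": "C1",
                     "brightness": "L1", "shape": "F1"}

def adjust_first_image_data(frames, shape_info=PRESET_SHAPE_INFO):
    """Return frames whose appearance parameters equal the preset info."""
    return [{**frame, **shape_info} for frame in frames]

frames = [{"pixels": b"raw", "size": "S0", "color": "C0",
           "brightness": "L0", "shape": "F0"}]
adjusted = adjust_first_image_data(frames)
```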
- the adjusted first image data may be transcoded to obtain the picture data of the virtual item.
- the screen data of the virtual item is equivalent to a video without sound effects.
- Transcoding the adjusted first image data is equivalent to encoding it into a video without sound effects, and this can be done in multiple ways.
- For example, an inter-frame compression encoding method may be used to transcode the adjusted first image data, or an intra-frame compression encoding method may be adopted. Any method that can transcode the adjusted first image data can be used in this application; this embodiment does not limit this.
- Inter-frame compression compares data between different image frames along the time axis, which increases the compression ratio and reduces the amount of data processing resources occupied.
- Intra-frame compression, when compressing an image frame, considers only the data of that frame and ignores the redundant information between the frame and adjacent frames; it is similar to the compression of static images, so its compression ratio is relatively low and its use of data processing resources is relatively high. Therefore, for application scenarios that need to reduce the occupation of data processing resources, such as the generation of virtual gifts in a live broadcast, inter-frame compression can be used to obtain virtual items.
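The advantage of inter-frame compression can be illustrated with a toy delta-encoding sketch. This is not a real codec (real encoders, such as those usable via FFMPEG, are far more sophisticated); it only shows why storing differences between consecutive frames can shrink the data.

```python
# Toy illustration of inter-frame compression: the first frame is stored
# fully ("key" frame), and each later frame stores only changed pixels.

def interframe_encode(frames):
    """Store the first frame fully, then only the changed pixels."""
    encoded = [("key", frames[0])]
    for prev, cur in zip(frames, frames[1:]):
        diff = {i: v for i, (p, v) in enumerate(zip(prev, cur)) if p != v}
        encoded.append(("delta", diff))
    return encoded

def interframe_decode(encoded):
    """Rebuild every frame by applying the stored deltas in order."""
    frames = [list(encoded[0][1])]
    for _kind, diff in encoded[1:]:
        frame = list(frames[-1])
        for i, v in diff.items():
            frame[i] = v
        frames.append(frame)
    return frames

frames = [[1, 1, 1, 1], [1, 1, 2, 1], [1, 1, 2, 3]]
encoded = interframe_encode(frames)
assert interframe_decode(encoded) == frames  # round-trip is lossless
```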
- S103 Synchronously play the screen data and audio data of the virtual item according to the play timestamp to obtain the virtual item corresponding to the generation instruction.
- Since the playback timestamp can ensure that the content of the first image data and the audio data is synchronized, playing the screen data and the audio data of the virtual item synchronously according to the playback timestamp to obtain the virtual item corresponding to the generation instruction ensures that the picture effects of the generated virtual item are synchronized with its sound effects.
- Playing the screen data and the audio data of the virtual item synchronously to obtain the virtual item corresponding to the generation instruction may specifically include: sending the screen data of the virtual item to an image display device, so that the image display device selects, from the screen data of the virtual item, the data corresponding to the playback timestamp for display; and synchronously sending the audio data to an audio output device, so that the audio output device plays the data in the audio data corresponding to the playback timestamp.
- For example, suppose the playback timestamp from the 0th to the 10th second is T1; the duration 10s, the number 1, or the number 0 can each serve as the identifier T1 marking the period from the 0th to the 10th second.
- Suppose further that the data in the screen data corresponding to the playback timestamp T1 is the video frame VF0 from the 0th to the 10th second, and the data in the audio data corresponding to T1 is the audio packet AP0 of video frame VF0.
- Then, while the image display device displays video frame VF0, the audio output device may start to play audio packet AP0.
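The dispatch just described can be sketched as follows; the device interfaces and the timestamp/data values are illustrative assumptions.

```python
# Sketch of timestamp-driven dispatch: the image display device and the
# audio output device each select the entry matching the playback
# timestamp, so VF0 and AP0 are presented together.

screen_data = {"T1": "VF0"}  # playback timestamp -> video frame(s)
audio_data = {"T1": "AP0"}   # playback timestamp -> audio packet(s)

def play_synchronously(timestamp):
    """Select, by timestamp, what each device should present together."""
    frame = screen_data[timestamp]   # image display device picks VF0
    packet = audio_data[timestamp]   # audio output device picks AP0
    return frame, packet

assert play_synchronously("T1") == ("VF0", "AP0")
```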
- To sum up, when the server detects a generation instruction for a virtual item, it acquires the first image data, audio data, and playback timestamp of the first video corresponding to the generation instruction; it adjusts the first image data according to the preset shape information about the virtual item, transcodes the adjusted first image data to obtain the screen data of the virtual item, and then plays the screen data and the audio data of the virtual item according to the playback timestamp to obtain the virtual item corresponding to the generation instruction.
- Because the screen data and the audio data needed to generate the virtual item are extracted from the same video, whose audio and video are already synchronized, and because they are played according to the playback timestamp, the sound effect and the picture effect of the virtual item stay synchronized. This solution therefore ensures audio-video synchronization in the diversified display effects of the generated virtual items.
- the flow of the method for generating virtual items according to another embodiment of the present application may include:
- S201 is the same step as S101 in the embodiment of FIG. 1 of this application, and will not be repeated here. For details, refer to the description of the embodiment of FIG. 1 of this application.
- S202 Adjust the first image data according to preset shape information about the virtual item to obtain adjusted first image data.
- the step of adjusting the first image data in S202 is the same as the step of adjusting the first image data in S102 of the embodiment of FIG. 1 of this application, which will not be repeated here.
- S203 Acquire second image data of the second video whose screen color is a transparent color; the second image data provides a screen of the second video.
- A transparent part can be set in the picture of the virtual item to make its display effect more realistic and three-dimensional.
- For example, for a virtual item showing a car, the color of the picture other than the car itself can be set to a transparent color, so that the displayed content of the virtual item is the car itself, without large black or white areas unrelated to the car.
- the data related to the transparent color can be used to mask the area of the virtual item that needs to have a transparent effect.
- The data related to the transparent color may be the second image data of a second video whose picture color is a transparent color, the second image data providing the picture of the second video; it may also be an image whose picture color is a transparent color.
- The second image data and the first image data both provide the picture of a video; the difference is that the picture color of the second image data is a transparent color. Therefore, when the second image data of the second video is used as the data related to the transparent color, processing logic similar to that for the first image data can be reused, with the processing object replaced by the second video and the second image data, and no separate processing logic needs to be added. For example, the preset model used to obtain the first image data can also be used to read the second video to obtain the second image data. Moreover, step S203 can be performed before or after step S202.
- S204 Adjust the second image data according to preset shape information about the virtual item to obtain adjusted second image data.
- Step S204 is similar to the process of adjusting the first image data in S102 of the embodiment of FIG. 1 of the present application, except that the object adjusted in S204 is the second image data. When step S203 is executed before step S202, step S204 may be executed before step S202 or simultaneously with it; adjusting the two kinds of image data at the same time can improve efficiency. The same content will not be repeated here; for details, refer to the description of the embodiment in FIG. 1 of the present application.
- S205 Acquire a transparent position belonging to a transparent area in the adjusted first image data.
- The transparent positions are the positions of the parts of the virtual item that need to be set to a transparent color, that is, the positions belonging to the transparent area in the adjusted first image data. These transparent positions can therefore be acquired and used for masking in the subsequent step S206 to obtain masked image data with a transparent-color area.
- the transparent position may specifically be the two-dimensional coordinates of the transparent area in the two-dimensional coordinate system of the adjusted first image data.
- In one implementation, the shape and size of the adjusted first image data are given by the preset shape information about the virtual item and are therefore fixed, and the position of the non-transparent area of the virtual item is preset; the transparent area is the area of the adjusted first image data other than the non-transparent area. Therefore, according to the position of the non-transparent area and the shape and size of the adjusted first image data, the positions differing from the non-transparent area can be stored in advance as the transparent positions, and the pre-stored transparent positions can then be read directly.
- In another implementation, the position of the non-transparent area of the virtual item may be read, and the positions differing from the non-transparent area may be determined from the adjusted first image data as the transparent positions.
- Any method that can obtain the transparent position belonging to the transparent area in the adjusted first image data can be used in this application, and this embodiment does not limit this.
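As a sketch of the second approach (deriving the transparent positions from the preset position of the non-transparent area), under the assumption that positions are simple (x, y) pixel coordinates:

```python
# Illustrative sketch: every coordinate of the adjusted first image data
# that is not covered by the non-transparent area counts as a transparent
# position.  The coordinate convention is an assumption for illustration.

def transparent_positions(width, height, opaque_positions):
    """All (x, y) coordinates not covered by the non-transparent area."""
    opaque = set(opaque_positions)
    return {(x, y) for y in range(height) for x in range(width)
            if (x, y) not in opaque}

# 3x2 image whose non-transparent area is the left column.
opaque = {(0, 0), (0, 1)}
print(sorted(transparent_positions(3, 2, opaque)))
```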
- The adjusted second image data corresponds to the adjusted first image data bit by bit, has the same size and shape, and is entirely of a transparent color. Therefore, the adjusted second image data can be used to mask the transparent positions in the adjusted first image data with a transparent color to obtain the masked image data. In a specific application, there are multiple ways to do this, which are described below in the form of alternative embodiments.
- step S206, using the adjusted second image data to mask the transparent position in the adjusted first image data to a transparent color to obtain the masked image data, may specifically include the following steps:
- at the transparent position in the adjusted first image data, the adjusted second image data is displayed to obtain the masked image data.
- displaying the adjusted second image data at the transparent position in the adjusted first image data specifically refers to: displaying the adjusted second image data at the transparent position in each video frame included in the adjusted first image data.
- at the transparent position in each video frame included in the adjusted first image data, one corresponding video frame included in the adjusted second image data can be displayed.
- the masked image data includes the adjusted second image data and the adjusted first image data, where the display effect of the adjusted second image data is a transparent color, and the display effect of the adjusted first image data is the screen content of the virtual item itself plus the area that needs to be set to a transparent color. Therefore, the adjusted second image data can be displayed at the transparent position in the adjusted first image data, so that the area in the adjusted first image data that needs to be set to a transparent color displays the transparent color of the adjusted second image data, and the masked image data is obtained.
- displaying the adjusted second image data at the transparent position in the adjusted first image data to obtain the masked image data may specifically include: copying the adjusted first image data and the adjusted second image data to the buffer of the display module, so that while the display module displays the adjusted first image data, it simultaneously displays the adjusted second image data at the transparent position in the adjusted first image data to achieve dual display.
- specifically, the adjusted first image data and the adjusted second image data can be copied to the cache of the display module, so that the display module achieves dual display in the NativeWindow (local window).
- the acquisition of the masked image data can be achieved by dual display of the adjusted second image data and the adjusted first image data, without the need for complex image channel filling and rendering processes. Therefore, it has the advantage of a relatively simple implementation process, and can improve the efficiency of generating virtual items.
- step S206, using the adjusted second image data to mask the transparent position in the adjusted first image data to a transparent color to obtain the masked image data, may alternatively include the following steps:
- each pixel in the video frames included in the adjusted first image data belongs to either the transparent position or the non-transparent position of the adjusted first image data. On this basis, when the pixel at the transparent position has a transparent color, the picture effect at that position is a transparent effect. Therefore, the transparent position of the adjusted first image data can be used as the transparent channel, so that the transparent channel can be filled with pixels having a transparent color in subsequent steps to achieve a transparent effect.
- the adjusted second image data corresponds to the pixels of the adjusted first image data bit by bit.
- pixels at the transparent position in the adjusted second image data can be filled into the transparent channel to obtain the transparent channel data, thereby obtaining adjusted first image data including a transparent channel and a non-transparent channel.
- the transparent channel data and the non-transparent channel data can be rendered according to the pixel distribution positions in the adjusted first image data.
- rendering the transparent channel data and the non-transparent channel data in the adjusted first image data to obtain the masked image data may specifically include: generating Texture (texture) data from the transparent channel data and the non-transparent channel data in the adjusted first image data, and rendering the Texture data using OpenGL (Open Graphics Library) to obtain the masked image data.
- the execution subject of the rendering may specifically be a GPU (Graphics Processing Unit) to improve the efficiency of acquiring the masked image data.
- the masked image data is acquired by filling the pixels of the adjusted second image data into the transparent channel of the adjusted first image data, and rendering the transparent channel data and the non-transparent channel data of the adjusted first image data to obtain the masked image data. Since transparent channel data and non-transparent channel data with different display effects can be rendered in a targeted manner, compared with obtaining the masked image data without rendering, this is equivalent to performing secondary rendering, which can improve the display quality of the virtual item subsequently obtained based on the masked image data.
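The channel-filling alternative above can be sketched with arrays: the transparent position acts as an alpha (transparent) channel, the pixels of the adjusted second image data are filled in at those positions, and the remaining pixels keep the original picture content. The frame size and the transparent area below are illustrative assumptions; a real implementation would do this per video frame and hand the RGBA result to the renderer.

```python
import numpy as np

H, W = 4, 4
rng = np.random.default_rng(0)
first = rng.integers(0, 255, (H, W, 3), dtype=np.uint8)   # frame of adjusted first image data
second = np.zeros((H, W, 3), dtype=np.uint8)              # bit-by-bit transparent-color frame

# Hypothetical transparent area: the top row of the frame.
transparent = np.zeros((H, W), dtype=bool)
transparent[0, :] = True

# Build an RGBA frame: the transparent positions form the transparent (alpha) channel.
rgba = np.dstack([first, np.full((H, W), 255, dtype=np.uint8)])
rgba[transparent, :3] = second[transparent]   # fill transparent channel data
rgba[transparent, 3] = 0                      # alpha 0 -> transparent effect
print(rgba[0, 0].tolist(), int(rgba[1, 1, 3]))  # [0, 0, 0, 0] 255
```

The non-transparent positions keep the original pixels with full opacity, matching the "non-transparent channel data" of the embodiment.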
- step S207 is similar to the process of transcoding the adjusted first image data in S102 of the embodiment of FIG. 1 of the present application to obtain the image data of the virtual item, except that the object of the transcoding in S207 is the masked image data. In addition, at this time, the screen data of the virtual item is screen data in which the transparent area that needs to be set to a transparent color has been masked to a transparent color.
- the same content will not be repeated here.
- S208: Synchronously play the screen data and audio data of the virtual item according to the playback timestamp.
- S208 is a similar step to S103 in the embodiment of FIG. 1 of the present application.
- the screen data of the virtual item in S208 is the screen data in which the transparent area that needs to be set to a transparent color has been masked to a transparent color, which can improve the sense of three-dimensionality and realism of the generated virtual item.
- the same parts will not be repeated here. For details, please refer to the description of the embodiment in FIG. 1 of this application.
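The timestamp-driven synchronous playback of S208 can be illustrated as follows. This is a minimal sketch under assumed 40 ms video and 20 ms audio intervals, not the application's actual player: each frame and audio chunk carries the playback timestamp (pts) it was extracted with, and playback always emits whichever item has the smallest pending pts, which is what keeps sound and picture aligned.

```python
import heapq

# Illustrative streams: pts values in milliseconds (assumed intervals).
video = [(t * 40, f"frame-{t}") for t in range(3)]
audio = [(t * 20, f"chunk-{t}") for t in range(5)]

def interleave(video, audio):
    """Merge both streams in playback-timestamp order."""
    merged = [(pts, "audio", item) for pts, item in audio] + \
             [(pts, "video", item) for pts, item in video]
    heapq.heapify(merged)
    order = []
    while merged:
        order.append(heapq.heappop(merged))  # hand off to renderer / audio device here
    return order

schedule = interleave(video, audio)
print(schedule[:2])  # both streams start together at pts 0
```

Because both streams were extracted from the same video with a shared timestamp base, items with equal pts are presented together, which is the synchronization property the embodiment relies on.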
- before the step of transcoding the masked image data to obtain the virtual item screen data, the method for generating virtual items provided in the embodiment of the present application may further include the following steps:
- the third image data is used as a specific element of the virtual item
- accordingly, the step of transcoding the masked image data to obtain the screen data of the virtual item may specifically include: transcoding the special effect image data to obtain the screen data of the virtual item.
- the third image data may specifically be a vector image.
- the specific elements of a virtual item can be of various types.
- the specific element may be a user avatar, text input by the user, special effects, and so on.
- there may be multiple ways to acquire the positional relationship between the third image data and the masked image data.
- the position relationship may be fixed, and therefore, the pre-stored position relationship can be directly read.
- for example, the positional relationship may specify the upper left corner or the upper right corner of the masked image data, etc., and the positional relationship is stored in advance so as to be read when the special effect image data is acquired.
- alternatively, the positional relationship may be obtained by searching, from the preset correspondence between the type of the third image data and the adding position, for the adding position corresponding to the type of the acquired third image data, and taking it as the positional relationship between the third image data and the masked image data.
- the corresponding adding position may be a transparent position, a virtual gift boundary position, or another designated position that does not block the virtual gift.
- Any method for acquiring the positional relationship between the third image data and the masked image data can be used in this application, and this embodiment does not limit this.
- according to the positional relationship, the third image data is added to the masked image data to obtain special effect image data.
- the third image data may be added at the adding position of the masked image data to obtain the special effect image data.
- the third image data is a user avatar, and the position relationship is that the third image data is at the upper left corner of the masked image data.
- a user avatar can be added to the upper left corner of the masked image data to obtain special effect image data with transparent effects and containing the user avatar.
- the special effect image data can be transcoded to obtain the picture data of the virtual item.
- This step is similar to the process of transcoding the adjusted first image data in S102 of the embodiment of FIG. 1 of the present application to obtain the image data of the virtual item, except that the object of the transcoding in this step is special effect image data.
- the same content will not be repeated here.
- the richness of the content expressed by the virtual item can be increased, and the diversification of the display effect can be improved.
- the specific element takes the form of the third image data. Therefore, it is relatively convenient to add the specific element in the form of the third image data to the masked image data that is also image data.
- the above-mentioned positional relationship is the corresponding relationship between the elements of the second pixel matrix and the elements of the first pixel matrix;
- the second pixel matrix is the matrix of pixels in the third image data;
- the first pixel matrix is the pixel matrix of pixels in the masked image data;
- the above step of adding third image data to the masked image data according to the position relationship to obtain the special effect image data may specifically include:
- the second pixel matrix is the matrix obtained by taking each pixel of the third image data as an element of the matrix, and taking the position of each pixel of the third image data within the third image data as the position of that element in the matrix.
- similarly, the first pixel matrix is the matrix obtained by taking each pixel of the masked image data as an element of the matrix, and taking the position of each pixel of the masked image data within the masked image data as the position of that element in the matrix. Therefore, the correspondence between the elements of the second pixel matrix and the elements of the first pixel matrix can be taken as the positional relationship between the third image data and the masked image data.
- for example, suppose the positional relationship is that the elements S11 to S36 in the second matrix correspond bit by bit to the elements F11 to F36 in the upper left corner of the first matrix. Then, the elements S11 to S36 of the second matrix can be added at the positions of the elements F11 to F36 in the first matrix to obtain the third matrix.
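The matrix-based addition described above amounts to writing the elements of the second pixel matrix over the corresponding elements of the first pixel matrix. A small NumPy sketch (using a 2 x 3 second matrix for brevity rather than the S11–S36 block, with made-up pixel values):

```python
import numpy as np

first = np.arange(1, 21).reshape(4, 5)    # first pixel matrix (masked image data)
second = -np.arange(1, 7).reshape(2, 3)   # second pixel matrix (third image data)

# Positional relationship: second's elements correspond bit by bit to the
# top-left elements of first, so write them there to obtain the third matrix.
third = first.copy()
third[:2, :3] = second
print(int(third[0, 0]), int(third[0, 3]), int(third[3, 4]))  # -1 4 20
```

Pixels outside the adding position are untouched, so converting the third matrix back to image data yields the masked image with the specific element overlaid.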
- the third image data serving as a specific element is added according to pixel position. Compared with the traditional way of adding by drawing, the complexity of adding and the amount of data to be processed are relatively reduced, which can reduce the consumption of data processing resources.
- an embodiment of the present application also provides a virtual item generation device.
- as shown in FIG. 3, the virtual item generation device provided by an embodiment of the present application is applied to a server, and the device may include:
- the data acquisition module 301 is configured to acquire the first image data, audio data, and playback timestamp of the first video corresponding to the generation instruction when the generation instruction of the virtual item is detected; wherein the first image data provides the picture of the first video; the audio data provides the sound of the first video; the playback timestamp is an identifier used to ensure that the content of the first image data and the audio data is synchronized;
- the screen data generation module 302 is configured to adjust the first image data according to preset shape information about the virtual item to obtain adjusted first image data, and transcode the adjusted first image data to obtain the screen data of the virtual item;
- the virtual item display module 303 is configured to synchronously play the screen data and the audio data of the virtual item according to the playback timestamp to obtain the virtual item corresponding to the generation instruction.
- when the server detects the generation instruction of the virtual item, it acquires the first image data, audio data, and playback timestamp of the first video corresponding to the generation instruction; adjusts the first image data according to the preset shape information of the virtual item, and transcodes the adjusted first image data to obtain the picture data of the virtual item; and then plays the picture data and audio data of the virtual item according to the playback timestamp to obtain the virtual item corresponding to the generation instruction.
- the screen data and audio data required to generate the virtual item are content extracted from the same video whose audio and video are synchronized, and when the virtual item is generated, the screen data and audio data of the virtual item are played according to the playback timestamp. In this way, the synchronization of the sound effect and the screen effect of the virtual item can be ensured. It can be seen that, through this solution, audio-video synchronization can be ensured in the diversified display effects of the generated virtual items.
- the device for generating virtual items is applied to a server, and the device may include:
- the data acquisition module 401 is configured to acquire the first image data, audio data, and playback timestamp of the first video corresponding to the generation instruction when the generation instruction of the virtual item is detected, wherein the first image data provides the picture of the first video, the audio data provides the sound of the first video, and the playback timestamp is an identifier used to ensure that the content of the first image data and the audio data is synchronized; and to acquire second image data of a second video whose picture color is a transparent color, the second image data providing the picture of the second video;
- the screen data generation module 402 includes: a mask sub-module 4021 and a transcoding sub-module 4022;
- the mask sub-module 4021 is configured to adjust the second image data according to preset shape information about the virtual item to obtain adjusted second image data; obtain the transparent position belonging to the transparent area in the adjusted first image data; and use the adjusted second image data to mask the transparent position in the adjusted first image data into a transparent color to obtain the masked image data;
- the transcoding sub-module 4022 is used to transcode the masked image data to obtain screen data of virtual items;
- the virtual item display module 403 is configured to synchronously play the screen data and the audio data of the virtual item according to the playback time stamp to obtain the virtual item corresponding to the generation instruction.
- the mask submodule 4021 is specifically used for:
- displaying, at the transparent position in the adjusted first image data, the adjusted second image data to obtain the masked image data.
- the mask submodule 4021 is specifically used for:
- the data acquisition module 401 is specifically configured to:
- before the transcoding sub-module 4022 transcodes the masked image data to obtain the screen data of the virtual item, acquiring third image data, as well as the positional relationship between the third image data and the masked image data; the third image data is used as a specific element of the virtual item;
- transcoding submodule 4022 is specifically used for:
- the positional relationship is a correspondence relationship between elements of a second pixel matrix and elements of a first pixel matrix;
- the second pixel matrix is a pixel matrix of pixels in the third image data;
- the first pixel matrix is the pixel matrix of pixels in the masked image data;
- the data acquisition module 401 is specifically used for:
- the third matrix is converted into image data to obtain special effect image data.
- an embodiment of the present application also provides an electronic device.
- the device may include:
- the memory 503 is used to store computer programs
- the processor 501 is configured to implement the steps of any virtual item generation method in the foregoing embodiment when executing the computer program stored in the foregoing memory 503.
- the electronic device in the embodiment of FIG. 5 of the present application may specifically be a server corresponding to a client related to the Internet.
- when the server detects the generation instruction of the virtual item, it acquires the first image data, audio data, and playback timestamp of the first video corresponding to the generation instruction; adjusts the first image data according to the preset shape information of the virtual item, and transcodes the adjusted first image data to obtain the picture data of the virtual item; and then plays the picture data and audio data of the virtual item according to the playback timestamp to obtain the virtual item corresponding to the generation instruction.
- the screen data and audio data required to generate the virtual item are content extracted from the same video whose audio and video are synchronized, and when the virtual item is generated, the screen data and audio data of the virtual item are played according to the playback timestamp. In this way, the synchronization of the sound effect and the screen effect of the virtual item can be ensured. It can be seen that, through this solution, audio-video synchronization can be ensured in the diversified display effects of the generated virtual items.
- the foregoing memory may include RAM (Random Access Memory, random access memory), and may also include NVM (Non-Volatile Memory, non-volatile memory), such as at least one disk storage.
- the memory may also be at least one storage device located remotely from the foregoing processor.
- the above-mentioned processor may be a general-purpose processor, including a CPU (Central Processing Unit), an NP (Network Processor), etc.; it may also be a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, discrete gate or transistor logic device, or discrete hardware component.
- the computer-readable storage medium provided by an embodiment of the present application is included in an electronic device.
- the computer-readable storage medium stores a computer program.
- when the computer program is executed by a processor, it implements the steps of any of the virtual item generation methods in the above-mentioned embodiments.
- when the server detects the generation instruction of the virtual item, it acquires the first image data, audio data, and playback timestamp of the first video corresponding to the generation instruction; adjusts the first image data according to the preset shape information of the virtual item, and transcodes the adjusted first image data to obtain the picture data of the virtual item; and then plays the picture data and audio data of the virtual item according to the playback timestamp to obtain the virtual item corresponding to the generation instruction.
- the screen data and audio data required to generate the virtual item are content extracted from the same video whose audio and video are synchronized, and when the virtual item is generated, the screen data and audio data of the virtual item are played according to the playback timestamp. In this way, the synchronization of the sound effect and the screen effect of the virtual item can be ensured. It can be seen that, through this solution, audio-video synchronization can be ensured in the diversified display effects of the generated virtual items.
- a computer program product containing instructions is also provided, which, when run on a computer, causes the computer to execute the method for generating virtual items described in any of the above embodiments.
- the computer program product includes one or more computer instructions.
- the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
- the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another.
- the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or data center integrated with one or more available media.
- the available medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a DVD (Digital Versatile Disc)), or a semiconductor medium (for example, an SSD (Solid State Disk)), etc.
Abstract
Provided are a virtual article generation method, apparatus and device. A virtual article generation method is applied to a server, and comprises: when a generation instruction for a virtual article is detected, acquiring first image data, audio data and a playing timestamp of a first video corresponding to the generation instruction, wherein the first image data provides a picture of the first video, the audio data provides a sound of the first video, and the playing timestamp is an identifier used for ensuring content synchronization of the first image data and the audio data; adjusting the first image data according to preset appearance information of the virtual article, and transcoding the adjusted first image data to obtain picture data of the virtual article; and synchronously playing the picture data of the virtual article and the audio data according to the playing timestamp so as to obtain the virtual article corresponding to the generation instruction. The solution can achieve the effect of ensuring the synchronization of sounds and pictures in a diversified display effect of a generated virtual article.
Description
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on June 28, 2019 with application number 201910578523.7 and the invention title "Virtual Article Generation Method, Apparatus and Device", the entire contents of which are incorporated into this application by reference.
This application relates to the technical field of virtual items, and in particular to a method, apparatus, and device for generating virtual items.
Various Internet-related clients usually provide various pre-configured virtual items so that users can use the virtual items to carry out virtual activities. Exemplarily, users can use virtual items to conduct virtual transactions, dress up personal online communities, and so on. For example: buying virtual gifts in a live broadcast client to give to the anchor, or using virtual pendants in social software to decorate an avatar and a personal homepage. Virtual items are usually static images, such as flower images or fireworks images. Since a static image has only one fixed picture, the display effect of the generated virtual item tends to be relatively simple.
In this regard, sound effects can be added to virtual items to diversify their display effects. However, in the related art, sound effects are added simply by playing music when the virtual item is displayed, which easily causes the picture and the sound effect of the virtual item to be out of sync, so that the display effect of the virtual item suffers from audio-video desynchronization. Therefore, how to ensure audio-video synchronization in the diversified display effects of generated virtual items is an urgent problem to be solved.
Summary of the Invention
The purpose of the embodiments of the present application is to provide a method, apparatus, and device for generating virtual items, so as to achieve the effect of improving the convenience of checking the configuration of virtual items. The specific technical solutions are as follows:
In the first aspect, an embodiment of the present application provides a method for generating a virtual item, which is applied to a server, and the method includes:
when a generation instruction for a virtual item is detected, acquiring first image data, audio data, and a playback timestamp of a first video corresponding to the generation instruction; wherein the first image data provides the picture of the first video; the audio data provides the sound of the first video; and the playback timestamp is an identifier used to ensure that the content of the first image data and the audio data is synchronized;
adjusting the first image data according to preset shape information about the virtual item to obtain adjusted first image data, and transcoding the adjusted first image data to obtain screen data of the virtual item;
according to the playback timestamp, synchronously playing the screen data and the audio data of the virtual item to obtain the virtual item corresponding to the generation instruction.
Optionally, before the step of transcoding the adjusted first image data to obtain the screen data of the virtual item, the method further includes:
acquiring second image data of a second video whose picture color is a transparent color; the second image data provides the picture of the second video;
adjusting the second image data according to the preset shape information about the virtual item to obtain adjusted second image data;
the step of transcoding the adjusted first image data to obtain the screen data of the virtual item includes:
acquiring a transparent position belonging to a transparent area in the adjusted first image data;
using the adjusted second image data to mask the transparent position in the adjusted first image data to a transparent color to obtain masked image data;
transcoding the masked image data to obtain the screen data of the virtual item.
Optionally, the step of using the adjusted second image data to mask the transparent position in the adjusted first image data to a transparent color to obtain the masked image data includes:
displaying the adjusted second image data at the transparent position in the adjusted first image data to obtain the masked image data.
Optionally, the step of using the adjusted second image data to mask the transparent position in the adjusted first image data to a transparent color to obtain the masked image data includes:
using the transparent position of the adjusted first image data as a transparent channel;
filling the pixels at the transparent position in the adjusted second image data into the transparent channel to obtain transparent channel data;
taking pixels other than the pixels at the transparent position in the adjusted first image data as non-transparent channel data;
rendering the transparent channel data and the non-transparent channel data in the adjusted first image data to obtain the masked image data.
可选的,在所述对所述遮罩后的图像数据进行转码,得到虚拟物品的画面数据的步骤之前,所述方法还包括:Optionally, before the step of transcoding the masked image data to obtain the screen data of the virtual item, the method further includes:
获取第三图像数据,以及所述第三图像数据与所述遮罩后的图像数据之间的位置关系;所述第三图像数据用于作为虚拟物品的特定元素;Acquiring third image data and the positional relationship between the third image data and the masked image data; the third image data is used as a specific element of the virtual item;
按照所述位置关系,在所述遮罩后的图像数据中添加所述第三图像数据,得到特效图像数据;Adding the third image data to the masked image data according to the position relationship to obtain special effect image data;
所述对所述遮罩后的图像数据进行转码,得到虚拟物品的画面数据的步骤,包括:The step of transcoding the masked image data to obtain the image data of the virtual item includes:
对所述特效图像数据进行转码,得到虚拟物品的画面数据。Transcoding the special effect image data to obtain the screen data of the virtual item.
可选的,所述位置关系包括:将所述遮罩后的图像数据的每个像素作为矩阵的元素,并将所述遮罩后的图像数据的每个像素在所述遮罩后的图像数据中的位置作为元素在矩阵中的位置;Optionally, the position relationship includes: taking each pixel of the masked image data as an element of a matrix, and setting each pixel of the masked image data in the masked image The position in the data is used as the position of the element in the matrix;
所述位置关系为第二像素矩阵的元素与第一像素矩阵的元素之间的对应关系;所述第二像素矩阵为所述第三图像数据中像素的像素矩阵;所述第一像素矩阵为所述遮罩后的图像数据中像素的像素矩阵;The positional relationship is the corresponding relationship between the elements of the second pixel matrix and the elements of the first pixel matrix; the second pixel matrix is the pixel matrix of pixels in the third image data; the first pixel matrix is The pixel matrix of the pixels in the masked image data;
所述按照所述位置关系,在所述遮罩后的图像数据中添加所述第三图像数据,得到特效图像数据的步骤,包括:The step of adding the third image data to the masked image data according to the position relationship to obtain special effect image data includes:
分别将所述遮罩后的图像数据和所述第三图像数据,转化为第一矩阵和第二矩阵;Respectively transforming the masked image data and the third image data into a first matrix and a second matrix;
在所述第一矩阵中,按照所述位置关系,添加所述第二矩阵中的元素,获得第三矩阵;In the first matrix, add elements in the second matrix according to the position relationship to obtain a third matrix;
将所述第三矩阵转化为图像数据,得到特效图像数据。The third matrix is converted into image data to obtain special effect image data.
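The matrix operations in the steps above can be illustrated with a minimal Python sketch (an editorial illustration, not part of the patent text; all function and variable names, such as `add_special_effect` and `position_map`, are hypothetical). It treats the masked image data and the third image data as two matrices, places the second matrix's elements into the first according to a position relationship, and returns the resulting third matrix:

```python
def add_special_effect(masked_pixels, effect_pixels, position_map):
    # masked_pixels: 2-D list (first matrix, from the masked image data)
    # effect_pixels: 2-D list (second matrix, from the third image data)
    # position_map: maps (row, col) in the second matrix to (row, col)
    # in the first matrix, i.e. the position relationship
    result = [row[:] for row in masked_pixels]  # keep the original intact
    for (er, ec), (mr, mc) in position_map.items():
        result[mr][mc] = effect_pixels[er][ec]  # place the effect pixel
    return result  # the third matrix, i.e. the special-effect image data

# Example: a 2x2 effect placed into the top-left of a 4x4 masked image.
masked = [[0] * 4 for _ in range(4)]
effect = [[1, 2], [3, 4]]
position_map = {(r, c): (r, c) for r in range(2) for c in range(2)}
special = add_special_effect(masked, effect, position_map)
```

Here the position relationship is modeled as an explicit coordinate mapping; in practice it could equally be expressed as an offset or other transform between the two pixel matrices.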
第二方面,本申请实施例提供了一种虚拟物品的生成装置,应用于服务器,该装置包括:In the second aspect, an embodiment of the present application provides a virtual item generation device applied to a server, and the device includes:
数据获取模块,用于在检测到虚拟物品的生成指令时,获取所述生成指令对应的第一视频的第一图像数据、音频数据以及播放时间戳;其中,所述第一图像数据提供所述第一视频的画面;所述音频数据提供所述第一视频的声音;所述播放时间戳为用于保证所述第一图像数据和所述音频数据的内容同步的标识;The data acquisition module is configured to, when a generation instruction of a virtual item is detected, acquire the first image data, audio data, and playback timestamp of the first video corresponding to the generation instruction; wherein the first image data provides the picture of the first video, the audio data provides the sound of the first video, and the playback timestamp is an identifier used to ensure that the content of the first image data and the audio data is synchronized;
画面数据生成模块,用于按照预设的关于虚拟物品的外形信息,调整所述第一图像数据,得到调整后的第一图像数据,并对调整后的第一图像数据进行转码,得到虚拟物品的画面数据;The screen data generation module is used to adjust the first image data according to the preset shape information about the virtual item to obtain the adjusted first image data, and to transcode the adjusted first image data to obtain the virtual Picture data of the item;
虚拟物品展示模块,用于按照所述播放时间戳,对所述虚拟物品的画面数据以及所述音频数据进行同步播放,得到所述生成指令对应的虚拟物品。The virtual item display module is configured to synchronously play the screen data and the audio data of the virtual item according to the playback timestamp to obtain the virtual item corresponding to the generation instruction.
可选的,所述数据获取模块,具体用于:Optionally, the data acquisition module is specifically used for:
在所述对调整后的第一图像数据进行转码,得到虚拟物品的画面数据之前,获取画面颜色为透明颜色的第二视频的第二图像数据;所述第二图像数据提供所述第二视频的画面;Before the adjusted first image data is transcoded to obtain the screen data of the virtual item, acquire second image data of a second video whose picture color is a transparent color; the second image data provides the picture of the second video;
所述画面数据生成模块,包括:遮罩子模块和转码子模块;The picture data generation module includes: a mask sub-module and a transcoding sub-module;
所述遮罩子模块,用于按照预设的关于虚拟物品的外形信息,调整所述第一图像数据和所述第二图像数据,得到调整后的第二图像数据;获取所述调整后的第一图像数据中,属于透明区域的透明位置;利用所述调整后的第二图像数据,将所述调整后的第一图像数据中的所述透明位置遮罩为透明颜色,得到遮罩后的图像数据;The mask sub-module is configured to adjust the first image data and the second image data according to the preset shape information about the virtual item to obtain adjusted second image data; acquire the transparent positions belonging to the transparent area in the adjusted first image data; and use the adjusted second image data to mask the transparent positions in the adjusted first image data with a transparent color to obtain masked image data;
所述转码子模块,用于对所述遮罩后的图像数据进行转码,得到虚拟物品的画面数据。The transcoding sub-module is used to transcode the masked image data to obtain the screen data of the virtual item.
可选的,所述遮罩子模块,具体用于:Optionally, the mask submodule is specifically used for:
在所述调整后的第一图像数据中的所述透明位置,展示所述调整后的第二图像数据,得到遮罩后的图像数据。In the transparent position in the adjusted first image data, the adjusted second image data is displayed to obtain the masked image data.
可选的,所述遮罩子模块,具体用于:Optionally, the mask submodule is specifically used for:
将所述调整后的第一图像数据的所述透明位置作为透明通道;Using the transparent position of the adjusted first image data as a transparent channel;
将所述调整后的第二图像数据中,处于所述透明位置的像素,填充至所述透明通道中,得到透明通道数据;Filling the pixels in the transparent position in the adjusted second image data into the transparent channel to obtain transparent channel data;
将所述调整后的第一图像数据中,除处于所述透明位置的像素以外的像素,作为非透明通道数据;Use pixels other than the pixels in the transparent position in the adjusted first image data as non-transparent channel data;
对所述调整后的第一图像数据中的所述透明通道数据,以及所述非透明通道数据进行渲染,得到遮罩后的图像数据。Rendering the transparent channel data and the non-transparent channel data in the adjusted first image data to obtain masked image data.
可选的,所述数据获取模块,具体用于:Optionally, the data acquisition module is specifically used for:
在所述转码子模块对所述遮罩后的图像数据进行转码,得到虚拟物品的画面数据之前,获取第三图像数据,以及所述第三图像数据与所述遮罩后的图像数据之间的位置关系;所述第三图像数据用于作为虚拟物品的特定元素;Before the transcoding sub-module transcodes the masked image data to obtain the screen data of the virtual item, acquire third image data and the positional relationship between the third image data and the masked image data; the third image data is used as a specific element of the virtual item;
按照所述位置关系,在所述遮罩后的图像数据中添加所述第三图像数据,得到特效图像数据;Adding the third image data to the masked image data according to the position relationship to obtain special effect image data;
所述转码子模块,具体用于:The transcoding submodule is specifically used for:
对所述特效图像数据进行转码,获得虚拟物品。Transcoding the special effect image data to obtain virtual items.
可选的,所述位置关系包括:将所述遮罩后的图像数据的每个像素作为矩阵的元素,并将所述遮罩后的图像数据的每个像素在所述遮罩后的图像数据中的位置作为元素在矩阵中的位置;Optionally, the position relationship includes: taking each pixel of the masked image data as an element of a matrix, and taking the position of each pixel of the masked image data within the masked image data as the position of that element in the matrix;
所述位置关系为第二像素矩阵的元素与第一像素矩阵的元素之间的对应关系;所述第二像素矩阵为所述第三图像数据中像素的像素矩阵;所述第一像素矩阵为所述遮罩后的图像数据中像素的像素矩阵;The positional relationship is the correspondence between the elements of the second pixel matrix and the elements of the first pixel matrix; the second pixel matrix is the pixel matrix of the pixels in the third image data; the first pixel matrix is the pixel matrix of the pixels in the masked image data;
所述数据获取模块,具体用于:The data acquisition module is specifically used for:
分别将所述遮罩后的图像数据和所述第三图像数据,转化为第一矩阵和第二矩阵;Respectively transforming the masked image data and the third image data into a first matrix and a second matrix;
在所述第一矩阵中,按照所述位置关系,添加所述第二矩阵中的元素,获得第三矩阵;In the first matrix, add elements in the second matrix according to the position relationship to obtain a third matrix;
将所述第三矩阵转化为图像数据,得到特效图像数据。The third matrix is converted into image data to obtain special effect image data.
第三方面,本申请实施例提供了一种电子设备,该设备包括:In a third aspect, an embodiment of the present application provides an electronic device, which includes:
处理器、通信接口、存储器和通信总线,其中,处理器,通信接口,存储器通过总线完成相互间的通信;存储器,用于存放计算机程序;处理器,用于执行存储器上所存放的程序,实现上述第一方面提供的虚拟物品的生成方法的步骤。A processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other through the bus; the memory is used to store a computer program; and the processor is used to execute the program stored in the memory to implement the steps of the method for generating a virtual item provided in the first aspect above.
第四方面,本申请实施例提供了一种计算机可读存储介质,该存储介质内存储有计算机程序,该计算机程序被处理器执行时实现上述第一方面提供的虚拟物品的生成方法的步骤。In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium in which a computer program is stored, and when the computer program is executed by a processor, the steps of the method for generating a virtual item provided in the first aspect are implemented.
本申请实施例提供的方案中,服务器在检测到虚拟物品的生成指令时,通过获取生成指令对应的第一视频的第一图像数据、音频数据以及播放时间戳,可以实现按照预设的关于虚拟物品的外形信息,调整第一图像数据,并对调整后的第一图像数据进行转码,得到虚拟物品的画面数据;进而按照播放时间戳,对虚拟物品的画面数据以及音频数据进行同步播放,得到生成指令对应的虚拟物品。本方案中,生成虚拟物品所需的画面数据和音频数据为从音画同步的同一视频中提取的内容,且在虚拟物品生成时,按照播放时间对虚拟物品的画面数据和音频数据进行播放所生成,这样,可以保证虚拟物品的声音效果和画面效果的同步。可见,通过本方案,可以实现保证生成的虚拟物品的多样化展示效果中,音画同步的效果。In the solution provided by the embodiments of the present application, when the server detects a generation instruction of a virtual item, it acquires the first image data, audio data, and playback timestamp of the first video corresponding to the generation instruction; it can then adjust the first image data according to the preset shape information about the virtual item and transcode the adjusted first image data to obtain the picture data of the virtual item; the picture data and the audio data of the virtual item are then played synchronously according to the playback timestamp to obtain the virtual item corresponding to the generation instruction. In this solution, the picture data and audio data required to generate the virtual item are extracted from the same audio-picture-synchronized video, and when the virtual item is generated, its picture data and audio data are played according to the playback time, which ensures that the sound effect and the picture effect of the virtual item stay synchronized. It can be seen that this solution achieves audio-picture synchronization among the diversified display effects of the generated virtual item.
为了更清楚地说明本申请实施例和现有技术的技术方案,下面对实施例和现有技术中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。In order to explain the embodiments of the present application and the technical solutions of the prior art more clearly, the drawings needed in the embodiments and the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative work.
图1为本申请一实施例提供的虚拟物品的生成方法的流程示意图;FIG. 1 is a schematic flowchart of a method for generating virtual items provided by an embodiment of the application;
图2为本申请另一实施例提供的虚拟物品的生成方法的流程示意图;2 is a schematic flowchart of a method for generating virtual items provided by another embodiment of the application;
图3为本申请一实施例提供的虚拟物品的生成装置的结构示意图;FIG. 3 is a schematic structural diagram of a virtual item generating apparatus provided by an embodiment of the application;
图4为本申请另一实施例提供的虚拟物品的生成装置的结构示意图;4 is a schematic structural diagram of a virtual item generating apparatus provided by another embodiment of the application;
图5为本申请一实施例提供的电子设备的结构示意图。FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the application.
为使本申请的目的、技术方案、及优点更加清楚明白,以下参照附图并举实施例,对本申请进一步详细说明。显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。In order to make the purpose, technical solutions, and advantages of the present application clearer, the following further describes the present application in detail with reference to the drawings and embodiments. Obviously, the described embodiments are only a part of the embodiments of the present application, rather than all the embodiments. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of this application.
下面首先对本申请一实施例的虚拟物品的生成方法进行介绍。The following first introduces a method for generating a virtual item according to an embodiment of the present application.
本申请实施例提供的虚拟物品的生成方法,可以应用于与互联网相关的客户端对应的服务器,该服务器可以包括台式计算机、便携式计算机、互联网电视、智能移动终端以及可穿戴式智能终端等,在此不作限定,任何可以实现本申请实施例的服务器,均属于本申请实施例的保护范围。The method for generating virtual items provided by the embodiments of this application can be applied to a server corresponding to an Internet-related client. The server can include a desktop computer, a portable computer, an Internet TV, a smart mobile terminal, a wearable smart terminal, etc. This is not limited, and any server that can implement the embodiments of the present application falls within the protection scope of the embodiments of the present application.
在具体应用中,与互联网相关的客户端可以是多种的。示例性的,与互联网相关的客户端可以是直播客户端,或者社交客户端等等。相应的,虚拟物品具体可以是多种的。举例而言,虚拟物品可以是礼物,挂件等等虚拟物品。In a specific application, there may be multiple clients related to the Internet. Exemplarily, the client related to the Internet may be a live broadcast client, or a social client, etc. Correspondingly, there can be multiple types of virtual items. For example, the virtual items may be gifts, pendants and other virtual items.
如图1所示,本申请一实施例的虚拟物品的生成方法的流程,该方法可以包括如下步骤:As shown in Fig. 1, the flow of the method for generating virtual items according to an embodiment of the present application may include the following steps:
S101,在检测到虚拟物品的生成指令时,获取生成指令对应的第一视频的第一图像数据、音频数据以及播放时间戳。其中,第一图像数据提供第一视频的画面;音频数据提供第一视频的声音;播放时间戳为用于保证第一图像数据和音频数据的内容同步的标识。S101: When a generation instruction of a virtual item is detected, first image data, audio data, and a playback time stamp of a first video corresponding to the generation instruction are acquired. Wherein, the first image data provides the picture of the first video; the audio data provides the sound of the first video; and the playback timestamp is an identifier for ensuring synchronization of the content of the first image data and the audio data.
为了生成多样化的虚拟物品,相关人员可以下发针对不同的虚拟物品的生成指令,当然,虚拟物品的生成指令的发出者并不局限于相关人员,例如:在检测到预定触发时,设备自动生成虚拟物品的生成指令,这也是合理的。并且,由于一个生成指令用于生成一个虚拟物品,而一个第一视频用于提供一个虚拟物品的画面和声音,因此,一个第一视频与一个生成指令对应。相应的,在检测到虚拟物品的生成指令时,可以获取与生成指令对应的第一视频的第一图像数据、音频数据以及播放时间戳,以通过后续步骤S102至S103,生成与生成指令对应的虚拟物品。其中,不同的第一视频可以为相互之间没有关系、各自独立的视频,或者,不同的第一视频可以为从同一个原始视频中分割得到的不同的视频片段。In order to generate diversified virtual items, related personnel can issue generation instructions for different virtual items. Of course, the issuer of a virtual item generation instruction is not limited to related personnel; for example, it is also reasonable for a device to automatically generate a virtual item generation instruction when a predetermined trigger is detected. In addition, since one generation instruction is used to generate one virtual item, and one first video is used to provide the picture and sound of one virtual item, one first video corresponds to one generation instruction. Correspondingly, when a generation instruction of a virtual item is detected, the first image data, audio data, and playback timestamp of the first video corresponding to the generation instruction can be acquired, so as to generate the virtual item corresponding to the generation instruction through subsequent steps S102 to S103. The different first videos may be independent videos unrelated to each other, or may be different video segments obtained by splitting the same original video.
其中,第一视频具体可以包含第一图像数据和音频数据,第一图像数据提供第一视频的画面,音频数据提供第一视频的声音。因此,举例而言,第一图像数据具体可以是第一视频的图像帧队列,音频数据具体可以是第一视频的音频包队列。示例性的,第一图像数据和音频数据的获取方式具体可以包括:利用预设的模型,读取第一视频,得到图像帧队列以及音频包队列,分别作为第一图像数据和音频数据。其中,预设的模型为能够进行视频采集以及解码的模型。示例性的,预设的模型可以为FFMPEG(Fast Forward MPEG,具有视频采集功能、视频格式转换以及视频抓图等功能的工具),此时,可以利用FFMPEG对第一视频进行读取,从而得到第一图像数据和音频数据。The first video may specifically include the first image data and the audio data; the first image data provides the picture of the first video, and the audio data provides the sound of the first video. Therefore, for example, the first image data may specifically be the image frame queue of the first video, and the audio data may specifically be the audio packet queue of the first video. Exemplarily, the first image data and the audio data may be acquired by using a preset model to read the first video, obtaining an image frame queue and an audio packet queue as the first image data and the audio data, respectively. The preset model is a model capable of video capture and decoding. Exemplarily, the preset model may be FFMPEG (Fast Forward MPEG, a tool with functions such as video capture, video format conversion, and video frame grabbing); in this case, FFMPEG can be used to read the first video to obtain the first image data and the audio data.
并且,播放时间戳为用于保证第一图像数据和音频数据的内容同步的标识。举例而言,播放时间戳可以包括1秒和2秒,第一图像数据中视频帧VF1的播放时间戳为1秒,视频帧VF2的播放时间戳为2秒;音频数据中音频包AP1的播放时间戳为1秒,音频包AP2的播放时间戳为2秒。此时,可以按照播放时间戳对第一图像数据和音频数据进行同步播放,以保证第一图像数据和音频数据的内容同步:在0-1秒时,对播放时间戳均为1秒的视频帧VF1和音频包AP1进行同步播放,在1-2秒时,对播放时间戳均为2秒的视频帧VF2和音频包AP2进行同步播放。以此为基础,可以获取第一视频的播放时间戳,以便在后续步骤S103中按照所获取的播放时间戳,对虚拟物品的画面数据以及音频数据进行同步播放,得到生成指令对应的虚拟物品。由此,可以通过虚拟物品的画面数据以及音频数据的同步播放,保证所生成的虚拟物品的画面效果与声音效果同步。In addition, the playback timestamp is an identifier used to ensure that the content of the first image data and the audio data is synchronized. For example, the playback timestamps may include 1 second and 2 seconds: in the first image data, the playback timestamp of video frame VF1 is 1 second and that of video frame VF2 is 2 seconds; in the audio data, the playback timestamp of audio packet AP1 is 1 second and that of audio packet AP2 is 2 seconds. The first image data and the audio data can then be played synchronously according to the playback timestamps to keep their content synchronized: in seconds 0-1, video frame VF1 and audio packet AP1, whose playback timestamps are both 1 second, are played synchronously; in seconds 1-2, video frame VF2 and audio packet AP2, whose playback timestamps are both 2 seconds, are played synchronously. On this basis, the playback timestamp of the first video can be acquired, so that in the subsequent step S103 the picture data and the audio data of the virtual item are played synchronously according to the acquired playback timestamp to obtain the virtual item corresponding to the generation instruction. In this way, the synchronized playback of the picture data and the audio data of the virtual item ensures that the picture effect and the sound effect of the generated virtual item are synchronized.
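The timestamp-based pairing described above can be sketched in a few lines of Python (an illustrative sketch under simplified assumptions, not the actual server implementation; `pair_by_timestamp` and the tuple layout are hypothetical):

```python
def pair_by_timestamp(video_frames, audio_packets):
    # video_frames / audio_packets: lists of (timestamp, payload) tuples.
    # Returns (timestamp, frame, packet) triples for timestamps present
    # in both streams, so picture and sound are played together.
    audio_by_ts = {ts: pkt for ts, pkt in audio_packets}
    return [(ts, frame, audio_by_ts[ts])
            for ts, frame in video_frames if ts in audio_by_ts]

# Example from the text: VF1/AP1 share timestamp 1, VF2/AP2 share timestamp 2.
frames = [(1, "VF1"), (2, "VF2")]
packets = [(1, "AP1"), (2, "AP2")]
playlist = pair_by_timestamp(frames, packets)
```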
S102,按照预设的关于虚拟物品的外形信息,调整第一图像数据,得到调整后的第一图像数据,并对调整后的第一图像数据进行转码,得到虚拟物品的画面数据。S102: Adjust the first image data according to the preset shape information about the virtual item to obtain the adjusted first image data, and transcode the adjusted first image data to obtain the screen data of the virtual item.
其中,预设的关于虚拟物品的外形信息为关于虚拟物品外形的信息,具体可以是多种的。示例性的,预设的关于虚拟物品的外形信息可以包括预设的虚拟物品的尺寸、颜色、亮度以及形状等等关于虚拟物品外形的信息中的至少一种。相应的,按照预设的关于虚拟物品的外形信息,对第一图像数据进行的调整,具体可以是:将第一图像数据所包括的视频帧的外观参数,调整为与预设的关于虚拟物品的外形信息相同。例如,预设的关于虚拟物品的外形信息为尺寸S1、颜色C1、亮度L1以及形状F1,那么,可以将第一图像数据所包括的视频帧的尺寸、颜色、亮度以及形状等等参数的参数值,调整为尺寸S1、颜色C1、亮度L1以及形状F1。The preset shape information about the virtual item is information about the appearance of the virtual item, and may be of various kinds. Exemplarily, the preset shape information about the virtual item may include at least one of the preset size, color, brightness, shape, and other information about the appearance of the virtual item. Correspondingly, adjusting the first image data according to the preset shape information about the virtual item may specifically be: adjusting the appearance parameters of the video frames included in the first image data to match the preset shape information about the virtual item. For example, if the preset shape information about the virtual item is size S1, color C1, brightness L1, and shape F1, then the parameter values of the size, color, brightness, shape, and other parameters of the video frames included in the first image data can be adjusted to size S1, color C1, brightness L1, and shape F1.
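The adjustment step can be sketched as overwriting each frame's appearance parameters with the preset values (a simplified Python illustration only; the dict-based frame representation and `PRESET_SHAPE` are hypothetical stand-ins for real image processing such as resizing and color correction):

```python
# Hypothetical preset shape information: size S1 (320x240) and brightness L1.
PRESET_SHAPE = {"width": 320, "height": 240, "brightness": 1.2}

def adjust_frame(frame):
    # Overwrite the frame's appearance parameters with the preset values;
    # other fields (e.g. the pixel payload) are left untouched.
    adjusted = dict(frame)
    for key in PRESET_SHAPE:
        adjusted[key] = PRESET_SHAPE[key]
    return adjusted

def adjust_first_image_data(frames):
    # Apply the same preset shape information to every video frame.
    return [adjust_frame(f) for f in frames]

frames = [{"width": 640, "height": 480, "brightness": 1.0, "pixels": b""}]
adjusted = adjust_first_image_data(frames)
```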
并且,为了便于在后续步骤S103中利用第一视频的播放时间戳实现虚拟物品的画面效果与声音效果同步,可以对调整后的第一图像数据进行转码,以得到虚拟物品的画面数据,此时,虚拟物品的画面数据相当于没有声音效果的视频。在具体应用中,对调整后的第一图像数据进行转码相当于将调整后的第一图像数据编码为没有声音效果的视频,具体可以是多种的。示例性的,可以采用帧间压缩的编码方式,对调整后的第一图像数据进行转码,或者,可以采用帧内压缩的编码方式,对调整后的第一图像数据进行转码。任何可以对调整后的第一图像数据进行转码的方式,均可用于本申请,本实施例对此不作限制。In addition, in order to facilitate using the playback timestamp of the first video in the subsequent step S103 to synchronize the picture effect and the sound effect of the virtual item, the adjusted first image data can be transcoded to obtain the picture data of the virtual item; at this time, the picture data of the virtual item is equivalent to a video without sound. In a specific application, transcoding the adjusted first image data is equivalent to encoding the adjusted first image data into a video without sound, and this can be done in multiple ways. Exemplarily, an inter-frame compression encoding method may be used to transcode the adjusted first image data, or an intra-frame compression encoding method may be used. Any method that can transcode the adjusted first image data can be used in this application, which is not limited in this embodiment.
其中,视频的两个连续图像帧之间很可能存在具有很大相关性的冗余数据,因此,帧间压缩可以通过比较时间轴上不同图像帧之间的数据实施压缩,提高压缩比,降低对数据处理资源的占用量。帧内压缩在压缩某一图像帧时,仅考虑本图像帧的数据,不考虑本图像帧与相邻图像帧之间的冗余信息,与静态图像的压缩类似,压缩比相对而言较低,对数据处理资源的占用量相对而言较多。因此,对于需要降低占用数据处理资源的应用场景,例如直播中虚拟礼物的生成,可以采用帧间压缩获得虚拟物品,有利于降低对数据处理资源的占用量。There is likely to be highly correlated redundant data between two consecutive image frames of a video; therefore, inter-frame compression can compress by comparing the data of different image frames on the time axis, which increases the compression ratio and reduces the consumption of data processing resources. Intra-frame compression, when compressing an image frame, only considers the data of that frame and ignores the redundant information between the frame and its adjacent frames; similar to the compression of static images, its compression ratio is relatively low and it consumes relatively more data processing resources. Therefore, for application scenarios that need to reduce the consumption of data processing resources, such as the generation of virtual gifts in a live broadcast, inter-frame compression can be used to obtain the virtual item, which helps reduce the consumption of data processing resources.
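The intuition behind inter-frame compression, storing one full key frame plus per-frame differences, can be shown with a toy delta codec (an editorial sketch only; real inter-frame codecs such as H.264 use motion-compensated prediction and entropy coding, not plain per-pixel subtraction):

```python
def delta_encode(frames):
    # frames: equal-length lists of pixel values. Store the first frame in
    # full (the key frame) and only per-pixel differences for later frames,
    # exploiting the redundancy between consecutive frames.
    key = frames[0][:]
    deltas = [[c - p for p, c in zip(prev, cur)]
              for prev, cur in zip(frames, frames[1:])]
    return key, deltas

def delta_decode(key, deltas):
    # Rebuild each frame by applying the stored differences in order.
    frames = [key[:]]
    for d in deltas:
        frames.append([p + dv for p, dv in zip(frames[-1], d)])
    return frames

frames = [[10, 10, 10], [10, 11, 10], [10, 11, 12]]
key, deltas = delta_encode(frames)
restored = delta_decode(key, deltas)
```

When consecutive frames are similar, most delta entries are zero, which is exactly the redundancy a real inter-frame codec compresses away.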
S103,按照播放时间戳,对虚拟物品的画面数据以及音频数据进行同步播放,得到生成指令对应的虚拟物品。S103: Synchronously play the screen data and audio data of the virtual item according to the play timestamp to obtain the virtual item corresponding to the generation instruction.
由于播放时间戳可以保证第一图像数据和音频数据的内容同步,因此,按照播放时间戳对虚拟物品的画面数据以及音频数据进行同步播放,得到生成指令对应的虚拟物品,可以保证所生成的虚拟物品的画面效果与声音效果同步。Since the playback timestamp can ensure that the content of the first image data and the audio data is synchronized, playing the picture data and the audio data of the virtual item synchronously according to the playback timestamp to obtain the virtual item corresponding to the generation instruction can ensure that the picture effect of the generated virtual item is synchronized with its sound effect.
示例性的,按照播放时间戳,对虚拟物品的画面数据以及音频数据进行同步播放,得到生成指令对应的虚拟物品,具体可以包括:将虚拟物品的画面数据发送给图像显示装置,以使得图像显示装置从虚拟物品的画面数据中,选择与播放时间戳对应的数据进行展示;并且,同步将音频数据发送给音频输出装置,以使得音频输出装置播放音频数据中与播放时间戳对应的数据。举例而言,第0秒至第10秒的播放时间戳为T1,具体的,可以将时长10S或者数字1或者数字0等等标识作为时间戳T1,用于标记时间段第0秒至第10秒。虚拟物品的画面数据中,与播放时间戳T1对应的数据是第0秒至第10秒的视频帧VF0,在音频数据中与播放时间戳T1对应的数据是视频帧VF0的音频包AP0。在图像显示装置开始展示视频帧VF0的同时,音频播放装置可以开始播放音频包AP0。Exemplarily, playing the picture data and the audio data of the virtual item synchronously according to the playback timestamp to obtain the virtual item corresponding to the generation instruction may specifically include: sending the picture data of the virtual item to an image display device, so that the image display device selects the data corresponding to the playback timestamp from the picture data of the virtual item for display; and synchronously sending the audio data to an audio output device, so that the audio output device plays the data in the audio data corresponding to the playback timestamp. For example, the playback timestamp for seconds 0 to 10 is T1; specifically, an identifier such as the duration 10s, the number 1, or the number 0 can be used as the timestamp T1 to mark the time period from second 0 to second 10. In the picture data of the virtual item, the data corresponding to playback timestamp T1 is video frame VF0 for seconds 0 to 10; in the audio data, the data corresponding to playback timestamp T1 is audio packet AP0 of video frame VF0. When the image display device starts to display video frame VF0, the audio playback device can start to play audio packet AP0 at the same time.
本申请实施例提供的方案中,服务器在检测到虚拟物品的生成指令时,通过获取生成指令对应的第一视频的第一图像数据、音频数据以及播放时间戳,可以实现按照预设的关于虚拟物品的外形信息,调整第一图像数据,并对调整后的第一图像数据进行转码,得到虚拟物品的画面数据;进而按照播放时间戳,对虚拟物品的画面数据以及音频数据进行播放,得到生成指令对应的虚拟物品。本方案中,生成虚拟物品所需的画面数据和音频数据为从音画同步的同一视频中提取的内容,且在虚拟物品生成时,按照播放时间对虚拟物品的画面数据和音频数据进行播放所生成,这样,可以保证虚拟物品的声音效果和画面效果的同步。可见,通过本方案,可以实现保证生成的虚拟物品的多样化展示效果中,音画同步的效果。In the solution provided by the embodiments of the present application, when the server detects a generation instruction of a virtual item, it acquires the first image data, audio data, and playback timestamp of the first video corresponding to the generation instruction; it can then adjust the first image data according to the preset shape information about the virtual item and transcode the adjusted first image data to obtain the picture data of the virtual item; the picture data and the audio data of the virtual item are then played according to the playback timestamp to obtain the virtual item corresponding to the generation instruction. In this solution, the picture data and audio data required to generate the virtual item are extracted from the same audio-picture-synchronized video, and when the virtual item is generated, its picture data and audio data are played according to the playback time, which ensures that the sound effect and the picture effect of the virtual item stay synchronized. It can be seen that this solution achieves audio-picture synchronization among the diversified display effects of the generated virtual item.
如图2所示,本申请另一实施例的虚拟物品的生成方法的流程,该方法可以包括:As shown in FIG. 2, the flow of the method for generating virtual items according to another embodiment of the present application, the method may include:
S201,在检测到虚拟物品的生成指令时,获取生成指令对应的第一视频的第一图像数据、音频数据以及播放时间戳。S201: When a generation instruction of a virtual item is detected, obtain the first image data, audio data, and playback timestamp of the first video corresponding to the generation instruction.
S201与本申请图1实施例的S101为相同的步骤,在此不再赘述,详见本申请图1实施例的描述。S201 is the same step as S101 in the embodiment of FIG. 1 of this application, and will not be repeated here. For details, refer to the description of the embodiment of FIG. 1 of this application.
S202,按照预设的关于虚拟物品的外形信息,调整第一图像数据,得到调整后的第一图像数据。S202: Adjust the first image data according to preset shape information about the virtual item to obtain adjusted first image data.
上述S202与本申请图1实施例的S102中调整第一图像数据的步骤为相同的步骤,在此不再赘述,详见本申请图1实施例的描述。The step of adjusting the first image data in S202 is the same as the step of adjusting the first image data in S102 of the embodiment of FIG. 1 of this application, which will not be repeated here.
S203,获取画面颜色为透明颜色的第二视频的第二图像数据;第二图像数据提供第二视频的画面。S203: Acquire second image data of the second video whose screen color is a transparent color; the second image data provides a screen of the second video.
在具体应用中,可以在虚拟物品的画面中设置透明的部分,以保证虚拟物品的展示效果更加真实立体。举例而言,对于某一虚拟物品:虚拟礼物汽车而言,可以将汽车本身以外的画面颜色设置为透明颜色,从而展示的虚拟物品的画面内容为汽车本身,不会存在大片黑色或者白色等与汽车本身无关的内容。为此,可以利用与透明颜色相关的数据对虚拟物品中需要具有透明效果的区域进行遮罩。其中,与透明颜色相关的数据可以是画面颜色为透明颜色的第二视频的第二图像数据,第二图像数据提供第二视频的画面,或者,可以是画面颜色为透明颜色的图像。In a specific application, a transparent part can be set in the picture of the virtual item to make the display effect of the virtual item more realistic and three-dimensional. For example, for a certain virtual item, a virtual gift car, the picture color other than the car itself can be set to a transparent color, so that the displayed picture content of the virtual item is the car itself, without large black or white areas unrelated to the car. For this purpose, data related to the transparent color can be used to mask the areas of the virtual item that need a transparent effect. The data related to the transparent color may be the second image data of a second video whose picture color is a transparent color, where the second image data provides the picture of the second video, or it may be an image whose picture color is a transparent color.
第二图像数据与第一图像数据均为视频提供画面,区别在于第二图像数据的画面颜色为透明颜色,且提供第二视频的画面。因此,将第二视频的第二图像数据作为与透明颜色相关的数据时,可以利用与获取第一图像数据相似的处理逻辑,区别在于将处理对象更换为第二视频以及第二图像数据,无需增加单独的处理逻辑。例如,可以利用获得第一图像数据的预设的模型,读取第二视频,得到第二图像数据。并且,步骤S203可以在步骤S202之前或者之后执行。Both the second image data and the first image data provide the picture of a video; the difference is that the picture color of the second image data is a transparent color and it provides the picture of the second video. Therefore, when the second image data of the second video is used as the data related to the transparent color, processing logic similar to that for acquiring the first image data can be reused, the only difference being that the processing objects are replaced with the second video and the second image data, without adding separate processing logic. For example, the preset model used to obtain the first image data can be used to read the second video to obtain the second image data. In addition, step S203 can be performed before or after step S202.
S204,按照预设的关于虚拟物品的外形信息,调整第二图像数据,得到调整后的第二图像数据。S204: Adjust the second image data according to preset shape information about the virtual item to obtain adjusted second image data.
步骤S204与本申请图1实施例的S102中调整第一图像数据的过程相似,区别在于S204中调整的对象为第二图像数据。并且,当步骤S203在步骤S202之前执行时,步骤S204可以在步骤S202之前执行,或者与步骤S202同时执行。其中,同时调整两种图像数据可以提高效率。对于相同内容在此不再赘述,详见本申请图1实施例的描述。Step S204 is similar to the process of adjusting the first image data in S102 of the embodiment of FIG. 1 of the present application, except that the object of adjustment in S204 is the second image data. Moreover, when step S203 is executed before step S202, step S204 may be executed before step S202, or simultaneously with step S202. Among them, adjusting two kinds of image data at the same time can improve efficiency. The same content will not be repeated here. For details, please refer to the description of the embodiment in FIG. 1 of the present application.
S205,获取调整后的第一图像数据中属于透明区域的透明位置。S205: Acquire a transparent position belonging to a transparent area in the adjusted first image data.
在具体应用中,对于某一虚拟物品而言,该虚拟物品需要设置为透明颜色的透明部分的位置,即调整后的第一图像数据中属于透明区域的透明位置。因此,可以获取调整后的第一图像数据中属于透明区域的透明位置,用于在后续步骤S206中进行遮罩以得到具有透明颜色区域的遮罩后的图像数据。并且,示例性的,透明位置具体可以是在调整后的第一图像数据的二维坐标系中,透明区域的二维坐标。In a specific application, for a certain virtual item, the position of the transparent part that needs to be set to a transparent color is the transparent position belonging to the transparent area in the adjusted first image data. Therefore, the transparent positions belonging to the transparent area in the adjusted first image data can be acquired for masking in the subsequent step S206 to obtain masked image data with a transparent-color area. Exemplarily, the transparent position may specifically be the two-dimensional coordinates of the transparent area in the two-dimensional coordinate system of the adjusted first image data.
其中,调整后的第一图像数据中属于透明区域的透明位置的获取方式可以是多种的。Wherein, there may be multiple ways to obtain the transparent position belonging to the transparent area in the adjusted first image data.
示例性的,在一种实现方式中,调整后的第一图像数据的形状和尺寸是预设的关于虚拟物品的外形信息,为固定的,并且虚拟物品的非透明区域的位置是预先设置好的,透明区域为调整后的第一图像数据中除非透明区域以外的区域。因此,可以预先根据非透明区域的位置,以及调整后的第一图像数据的形状和尺寸,将调整后的第一图像数据中与非透明区域的位置不同的位置存储为透明位置。相应的,可以直接读取预存的透明位置。Exemplarily, in one implementation, the shape and size of the adjusted first image data are given by the preset shape information about the virtual item and are therefore fixed, the position of the non-transparent area of the virtual item is set in advance, and the transparent area is the area of the adjusted first image data other than the non-transparent area. Therefore, according to the position of the non-transparent area and the shape and size of the adjusted first image data, the positions in the adjusted first image data that differ from the position of the non-transparent area can be stored in advance as the transparent positions. Correspondingly, the pre-stored transparent positions can be read directly.
示例性的,在另一种实现方式中,可以读取虚拟物品的非透明区域的位置,并从调整后的第一图像数据中确定与非透明区域的位置不同的位置,作为透明位置。Exemplarily, in another implementation manner, the position of the non-transparent area of the virtual item may be read, and a position different from the position of the non-transparent area may be determined from the adjusted first image data as the transparent position.
任何可以获取调整后的第一图像数据中属于透明区域的透明位置的方式,均可用于本申请,本实施例对此不作限制。Any method that can obtain the transparent position belonging to the transparent area in the adjusted first image data can be used in this application, and this embodiment does not limit this.
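Either way of obtaining the transparent positions amounts to taking the complement of the non-transparent region within the frame, which can be sketched as follows (illustrative Python; `transparent_positions` and the coordinate-set representation are hypothetical simplifications):

```python
def transparent_positions(width, height, opaque_region):
    # opaque_region: set of (x, y) coordinates of the non-transparent area
    # of the virtual item. Every other coordinate in the frame is a
    # transparent position.
    all_positions = {(x, y) for x in range(width) for y in range(height)}
    return all_positions - opaque_region

# Example: a 3x2 frame whose left column belongs to the virtual item.
opaque = {(0, 0), (0, 1)}
transparent = transparent_positions(3, 2, opaque)
```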
S206,利用调整后的第二图像数据,将调整后的第一图像数据中的透明位置遮罩为透明颜色,得到遮罩后的图像数据。S206: Using the adjusted second image data, mask the transparent position in the adjusted first image data into a transparent color to obtain masked image data.
调整后的第二图像数据和调整后的第一图像按像素逐位对应,具有相同的尺寸和形状,并且,调整后的第二图像数据为透明颜色。因此,可以利用调整后的第二图像数据将调整后的第一图像数据中的透明位置遮罩为透明颜色,得到遮罩后的图像数据。在具体应用中,利用调整后的第二图像数据,将调整后的第一图像数据中的透明位置遮罩为透明颜色,得到遮罩后的图像数据的方式,可以是多种的。下面以可选实施例的形式进行具体说明。The adjusted second image data corresponds to the adjusted first image data pixel by pixel, has the same size and shape, and is of a transparent color. Therefore, the adjusted second image data can be used to mask the transparent positions of the adjusted first image data to a transparent color, obtaining the masked image data. In a specific application, there are multiple ways of using the adjusted second image data to mask the transparent positions of the adjusted first image data to a transparent color and obtain the masked image data. They are described below in the form of optional embodiments.
在一种可选的实施方式中,上述步骤S206:利用调整后的第二图像数据,将调整后的第一图像数据中的透明位置遮罩为透明颜色,得到遮罩后的图像数据,具体可以包括如下步骤:In an optional embodiment, the above step S206 (using the adjusted second image data to mask the transparent positions of the adjusted first image data to a transparent color, obtaining the masked image data) may specifically include the following step:
在调整后的第一图像数据中的透明位置,展示调整后的第二图像数据,得到遮罩后的图像数据。At the transparent position in the adjusted first image data, the adjusted second image data is displayed to obtain the masked image data.
其中,在调整后的第一图像数据中的透明位置,展示调整后的第二图像数据,具体指:在调整后的第一图像数据所包括的各视频帧中的透明位置,展示调整后的第二图像数据。并且,调整后的第一图像数据所包括的每一视频帧中的透明位置,可以展示第二图像数据所包括的一帧视频帧。Here, displaying the adjusted second image data at the transparent positions of the adjusted first image data specifically means: displaying the adjusted second image data at the transparent positions of each video frame included in the adjusted first image data. Moreover, the transparent positions of each video frame included in the adjusted first image data can display one video frame included in the second image data.
遮罩后的图像数据包含调整后的第二图像数据和调整后的第一图像数据,其中,调整后的第二图像数据的显示效果为透明颜色;调整后的第一图像数据的显示效果为虚拟物品的画面内容本身和需要设置为透明颜色的区域。因此,可以在调整后的第一图像数据中的透明位置,展示调整后的第二图像数据,从而将调整后的第一图像数据中需要设置为透明颜色的区域以调整后的第二图像数据显示透明颜色,得到遮罩后的图像数据。The masked image data includes the adjusted second image data and the adjusted first image data, where the display effect of the adjusted second image data is a transparent color, and the display effect of the adjusted first image data is the screen content of the virtual item itself plus the region that needs to be set to a transparent color. Therefore, the adjusted second image data can be displayed at the transparent positions of the adjusted first image data, so that the region of the adjusted first image data that needs to be set to a transparent color displays the transparent color via the adjusted second image data, yielding the masked image data.
在具体应用中,在调整后的第一图像数据中的透明位置,展示调整后的第二图像数据,得到遮罩后的图像数据的方式,具体可以包括:将调整后的第一图像数据和调整后的第二图像数据拷贝给显示模块的缓存,以使得显示模块在显示第一图像数据的同时,同步在调整后的第一图像数据中的透明位置显示调整后的第二图像数据,实现双显示。举例而言,在Android(安卓)操作系统中,可以将调整后的第一图像数据和调整后的第二图像数据拷贝给显示模块的缓存,以使得显示模块在NativeWindow(本地窗口)中,进行双显示。In a specific application, displaying the adjusted second image data at the transparent positions of the adjusted first image data to obtain the masked image data may specifically include: copying the adjusted first image data and the adjusted second image data to the buffer of the display module, so that while displaying the first image data, the display module simultaneously displays the adjusted second image data at the transparent positions of the adjusted first image data, achieving dual display. For example, in the Android operating system, the adjusted first image data and the adjusted second image data can be copied to the buffer of the display module, so that the display module performs the dual display in a NativeWindow (local window).
在本可选实施例中,遮罩后的图像数据的获取可以通过对调整后的第二图像和调整后的第一图像的双显示实现,无需经过复杂的图像通道填充和渲染过程,因此,具有实现过程相对而言简单的优势,可以提高虚拟物品的生成效率。In this optional embodiment, the masked image data is obtained through the dual display of the adjusted second image and the adjusted first image, without a complex image channel filling and rendering process. The implementation is therefore relatively simple, which can improve the efficiency of generating virtual items.
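The dual-display idea can be sketched as a per-frame composite: transparent positions show the (transparent-color) adjusted second image data, every other position shows the adjusted first image data. This is a hedged illustration; the RGBA array layout and function name are assumptions, and a real implementation would instead copy both buffers to the display module's cache as described above:

```python
import numpy as np

def dual_display_frame(first_frame, second_frame, transparent_mask):
    """Composite one output frame. Both frames are H x W x 4 RGBA arrays of
    identical shape, matching pixel by pixel; transparent_mask is an H x W
    boolean array marking the transparent positions."""
    out = first_frame.copy()
    # Transparent positions take the adjusted second image's pixels
    out[transparent_mask] = second_frame[transparent_mask]
    return out
```

Applying this to each video frame of the adjusted first image data yields the masked image data frame by frame.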
在另一种可选的实施方式中,上述步骤S206:利用调整后的第二图像数据,将调整后的第一图像数据中的透明位置遮罩为透明颜色,得到遮罩后的图像数据,具体可以包括如下步骤:In another optional embodiment, the above step S206 (using the adjusted second image data to mask the transparent positions of the adjusted first image data to a transparent color, obtaining the masked image data) may specifically include the following steps:
将调整后的第一图像数据的透明位置作为透明通道;Use the adjusted transparent position of the first image data as a transparent channel;
将调整后的第二图像数据中处于透明位置的像素,填充至透明通道中,得到透明通道数据;Filling the pixels at the transparent position in the adjusted second image data into the transparent channel to obtain the transparent channel data;
将调整后的第一图像数据中除处于透明位置的像素以外的像素,作为非透明通道数据;Taking pixels other than the pixels in the transparent position in the adjusted first image data as non-transparent channel data;
对调整后的第一图像数据中的透明通道数据,以及非透明通道数据进行渲染,得到遮罩后的图像数据。Rendering the transparent channel data and the non-transparent channel data in the adjusted first image data to obtain the masked image data.
其中,调整后的第一图像数据所包括的视频帧中各像素,在调整后的第一图像数据中视频帧的位置,属于调整后的第一图像数据的透明位置或者非透明位置;在此基础上,当透明位置的像素具有透明颜色时,该位置的画面效果为透明效果。因此,可以将调整后的第一图像数据的透明位置作为透明通道,以便在后续步骤中为透明通道填充具有透明颜色的像素,实现透明效果。并且,调整后的第二图像数据与调整后的第一图像数据的像素逐位对应,因此,可以将调整后的第二图像数据中,处于透明位置的像素填充至透明通道中,得到透明通道数据,从而得到包含透明通道和非透明通道的调整后的第一图像数据。在此基础上,为了得到在透明位置具有透明颜色、且能够显示虚拟物品的画面内容的遮罩后的图像数据,可以按照调整后的第一图像数据中像素的分布位置,对透明通道数据和非透明通道数据进行渲染。Here, each pixel of a video frame included in the adjusted first image data occupies, within that frame, either a transparent position or a non-transparent position of the adjusted first image data. On this basis, when a pixel at a transparent position has a transparent color, the picture effect at that position is transparent. Therefore, the transparent positions of the adjusted first image data can be used as the transparent channel, so that in a subsequent step the transparent channel can be filled with pixels of a transparent color to achieve the transparent effect. Moreover, the adjusted second image data corresponds to the adjusted first image data pixel by pixel; therefore, the pixels of the adjusted second image data at the transparent positions can be filled into the transparent channel to obtain the transparent channel data, yielding adjusted first image data containing both a transparent channel and a non-transparent channel. On this basis, to obtain masked image data that has a transparent color at the transparent positions and can display the screen content of the virtual item, the transparent channel data and the non-transparent channel data can be rendered according to the distribution of pixel positions in the adjusted first image data.
示例性的,对调整后的第一图像数据中的透明通道数据,以及非透明通道数据进行渲染,得到遮罩后的图像数据,具体可以包括:将调整后的第一图像数据中的透明通道数据,以及非透明通道数据生成Texture(纹理)数据,利用OpenGL(Open Graphics Library,开放图形库)渲染Texture数据,得到遮罩后的图像数据。另外,进行渲染的执行主体具体可以是GPU(Graphics Processing Unit,图像处理器),以提高遮罩后的图像的获取效率。Exemplarily, rendering the transparent channel data and the non-transparent channel data of the adjusted first image data to obtain the masked image data may specifically include: generating Texture data from the transparent channel data and the non-transparent channel data of the adjusted first image data, and rendering the Texture data with OpenGL (Open Graphics Library) to obtain the masked image data. In addition, the rendering may specifically be executed by a GPU (Graphics Processing Unit) to improve the efficiency of obtaining the masked image.
在本可选实施例中,遮罩后的图像数据的获取是将调整后的第二图像的像素填充至调整后的第一图像的透明通道中,从而对调整后的第一图像中的透明通道数据,以及非透明通道数据进行渲染,得到遮罩后的图像数据。由于可以针对性的渲染具有不同显示效果的透明通道数据和非透明通道数据,因此,与不经过渲染的遮罩后的图像数据的获取方式相比,相当于进行二次渲染,可以提升后续基于遮罩后的图像数据得到的虚拟物品的显示质量。In this optional embodiment, the masked image data is obtained by filling the pixels of the adjusted second image into the transparent channel of the adjusted first image, and then rendering the transparent channel data and the non-transparent channel data of the adjusted first image. Since transparent channel data and non-transparent channel data with different display effects can be rendered in a targeted manner, this amounts to a secondary rendering compared with obtaining the masked image data without rendering, which can improve the display quality of the virtual item subsequently obtained from the masked image data.
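The channel-filling step of this embodiment can be sketched with a per-pixel alpha channel standing in for the "transparent channel"; generating Texture data and rendering it with OpenGL on the GPU is outside this sketch. The array shapes and the function name are illustrative assumptions:

```python
import numpy as np

def fill_channels(first_rgb, second_rgba, transparent_mask):
    """Build the per-frame image to be rendered: non-transparent channel
    data comes from the adjusted first image data (kept opaque), while the
    transparent channel is filled with the adjusted second image data's
    pixels at the transparent positions."""
    h, w, _ = first_rgb.shape
    frame = np.empty((h, w, 4), dtype=np.uint8)
    frame[..., :3] = first_rgb   # non-transparent channel data
    frame[..., 3] = 255          # opaque alpha by default
    # Transparent channel data: second image's (transparent) pixels
    frame[transparent_mask] = second_rgba[transparent_mask]
    return frame
```

The resulting RGBA frame is what would then be uploaded as Texture data for the GPU render pass described above.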
S207,对遮罩后的图像数据进行转码,得到虚拟物品的画面数据。S207: Transcoding the masked image data to obtain screen data of the virtual item.
上述步骤S207与本申请图1实施例的S102中对调整后的第一图像数据进行转码,得到虚拟物品的画面数据的过程相似,区别在于S207中转码的对象为遮罩后的图像数据;并且此时虚拟物品的画面数据为将需要设置为透明颜色的透明区域遮罩为透明颜色的画面数据。对于相同内容在此不再赘述,详见本申请图1实施例的描述。The foregoing step S207 is similar to the process in S102 of the embodiment of FIG. 1 of the present application, in which the adjusted first image data is transcoded to obtain the screen data of the virtual item; the difference is that the object transcoded in S207 is the masked image data, and the screen data of the virtual item is now screen data in which the transparent region that needs to be set to a transparent color has been masked to a transparent color. The same content is not repeated here; see the description of the embodiment of FIG. 1 of the present application for details.
S208,按照播放时间戳,对虚拟物品的画面数据以及音频数据进行同步播放。S208: Synchronously play the screen data and audio data of the virtual item according to the play timestamp.
S208与本申请图1实施例的S103为相似的步骤,区别在于S208中虚拟物品的画面数据为将需要设置为透明颜色的透明区域遮罩为透明颜色的画面数据,可以提高所生成的虚拟物品的立体感和真实感。对于相同部分在此不再赘述,详见本申请图1实施例的描述。S208 is similar to S103 of the embodiment of FIG. 1 of the present application; the difference is that in S208 the screen data of the virtual item is screen data in which the transparent region that needs to be set to a transparent color has been masked to a transparent color, which can improve the three-dimensionality and realism of the generated virtual item. The same parts are not repeated here; see the description of the embodiment of FIG. 1 of the present application for details.
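The timestamp-driven synchronization of S208 can be sketched as a merge of the two streams in presentation-timestamp order. The `pts` dictionary key and the `show_frame`/`play_audio` output callbacks are hypothetical names for illustration, not from the original:

```python
def play_synchronized(frames, audio_chunks, show_frame, play_audio):
    """Interleave the screen data and audio data of the virtual item by
    playback timestamp, so picture and sound stay synchronized."""
    frames = sorted(frames, key=lambda f: f["pts"])
    chunks = sorted(audio_chunks, key=lambda a: a["pts"])
    i = j = 0
    while i < len(frames) or j < len(chunks):
        next_f = frames[i]["pts"] if i < len(frames) else float("inf")
        next_a = chunks[j]["pts"] if j < len(chunks) else float("inf")
        if next_f <= next_a:
            show_frame(frames[i])   # next due item is a video frame
            i += 1
        else:
            play_audio(chunks[j])   # next due item is an audio chunk
            j += 1
```

Because both streams were extracted from the same video, dispatching strictly by timestamp keeps the sound effect aligned with the picture effect.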
在上述图2实施例中,通过获取第二视频的第二图像数据,在不增加单独的对第二图像的处理逻辑的情况下,对调整后的第一图像数据的透明位置进行遮罩,得到遮罩后的图像,进而利用遮罩后的图像,获得与生成指令对应的虚拟物品,从而实现虚拟物品的透明位置的透明效果,可以提高虚拟物品的立体感和真实感。In the embodiment of FIG. 2 above, by acquiring the second image data of the second video, the transparent positions of the adjusted first image data are masked without adding separate processing logic for the second image, obtaining the masked image; the masked image is then used to obtain the virtual item corresponding to the generation instruction, thereby achieving the transparent effect at the transparent positions of the virtual item and improving its three-dimensionality and realism.
可选的,在上述步骤S207:对遮罩后的图像数据进行转码,得到虚拟物品的画面数据之前,本申请实施例提供的虚拟物品的生成方法,还可以包括如下步骤:Optionally, before the foregoing step S207 (transcoding the masked image data to obtain the screen data of the virtual item), the virtual item generation method provided by the embodiment of the present application may further include the following steps:
获取第三图像数据,以及第三图像数据与遮罩后的图像数据之间的位置关系;第三图像数据用于作为虚拟物品的特定元素;Acquire the third image data and the positional relationship between the third image data and the masked image data; the third image data is used as a specific element of the virtual item;
按照所获取的位置关系,在遮罩后的图像数据中添加第三图像数据,得到特效图像数据;According to the acquired position relationship, add third image data to the masked image data to obtain special effect image data;
相应的,上述步骤S207:对遮罩后的图像数据进行转码,得到虚拟物品的画面数据,具体可以包括:对特效图像数据进行转码,得到虚拟物品的画面数据。Correspondingly, the foregoing step S207 (transcoding the masked image data to obtain the screen data of the virtual item) may specifically include: transcoding the special effect image data to obtain the screen data of the virtual item.
其中,第三图像数据具体可以为矢量图像。虚拟物品的特定元素可以是多种的。示例性的,特定元素可以是用户头像、用户输入的文字以及特效等等。第三图像数据与遮罩后的图像数据之间的位置关系的获取可以是多种的。示例性的,该位置关系可以是固定的,因此,可以直接读取预存的位置关系。举例而言,位置关系可以是在遮罩后的图像数据的左上角,或者右上角等等,预先存储该位置关系,以便在获取特效图像数据时读取。或者,示例性的,该位置关系可以是从预设的第三图像数据的种类与添加位置的对应关系中,查找所获取的第三图像的种类对应的添加位置,作为第三图像数据与遮罩后的图像数据之间的位置关系。举例而言,第三图像数据的种类为用户信息时,如用户头像和用户输入的文字时,对应的添加位置可以为透明位置,或者虚拟礼物的边界位置等等不遮挡虚拟礼物的位置。第三图像数据的种类为特效时,如下雪特效时,对应的添加位置可以为透明位置或者不遮挡虚拟礼物的指定位置。Here, the third image data may specifically be a vector image. The specific element of the virtual item may be of various kinds; exemplarily, it may be a user avatar, text entered by the user, a special effect, and so on. The positional relationship between the third image data and the masked image data may likewise be obtained in multiple ways. Exemplarily, the positional relationship may be fixed, in which case the pre-stored positional relationship can be read directly; for example, the positional relationship may be the upper-left or upper-right corner of the masked image data, stored in advance so that it can be read when generating the special effect image data. Alternatively, the adding position corresponding to the kind of the acquired third image may be looked up in a preset correspondence between kinds of third image data and adding positions, and used as the positional relationship between the third image data and the masked image data. For example, when the kind of the third image data is user information, such as a user avatar or text entered by the user, the corresponding adding position may be a transparent position, a boundary position of the virtual gift, or another position that does not occlude the virtual gift; when the kind of the third image data is a special effect, such as a snowfall effect, the corresponding adding position may be a transparent position or a designated position that does not occlude the virtual gift.
任何第三图像数据与遮罩后的图像数据之间的位置关系的获取方式,均可用于本申请,本实施例对此不作限制。Any method for acquiring the positional relationship between the third image data and the masked image data can be used in this application, and this embodiment does not limit this.
在此基础上,由于位置关系能够表明第三图像数据在遮罩后的图像数据中的添加位置,因此,按照位置关系,在遮罩后的图像数据中添加第三图像数据,可以得到特效图像数据。具体的,可以根据位置关系所表明的添加位置,将第三图像数据添加在遮罩后的图像数据的该添加位置处,得到特效图像数据。举例而言,第三图像数据为用户头像,位置关系为第三图像数据在遮罩后的图像数据的左上角。此时,可以在遮罩后的图像数据的左上角添加用户头像,得到具有透明效果以及包含用户头像的特效图像数据。On this basis, since the positional relationship indicates where the third image data is to be added within the masked image data, the special effect image data can be obtained by adding the third image data to the masked image data according to the positional relationship. Specifically, the third image data can be added at the adding position of the masked image data indicated by the positional relationship, yielding the special effect image data. For example, if the third image data is a user avatar and the positional relationship places it at the upper-left corner of the masked image data, the user avatar can be added at the upper-left corner of the masked image data, yielding special effect image data that has a transparent effect and contains the user avatar.
相应的,可以对特效图像数据进行转码,得到虚拟物品的画面数据。本步骤与本申请图1实施例的S102中对调整后的第一图像数据进行转码,得到虚拟物品的画面数据的过程相似,区别在于本步骤中转码的对象为特效图像数据。对于相同内容在此不再赘述,详见本申请图1实施例的描述。Correspondingly, the special effect image data can be transcoded to obtain the picture data of the virtual item. This step is similar to the process of transcoding the adjusted first image data in S102 of the embodiment of FIG. 1 of the present application to obtain the image data of the virtual item, except that the object of the transcoding in this step is special effect image data. The same content will not be repeated here. For details, please refer to the description of the embodiment in FIG. 1 of the present application.
在本可选实施例中,通过为虚拟物品添加特定元素,可以增加虚拟物品所表达的内容的丰富程度,以及提高展示效果的多样化程度。并且,特定元素采用第三图像数据的形式,因此,在同样为图像数据的遮罩后的图像数据中添加第三图像数据的形式的特定元素,相对而言较为便捷。In this optional embodiment, adding a specific element to the virtual item increases the richness of the content expressed by the virtual item and the diversity of its display effect. Moreover, since the specific element takes the form of the third image data, adding it to the masked image data, which is likewise image data, is relatively convenient.
可选的,上述位置关系为第二像素矩阵的元素与第一像素矩阵的元素之间的对应关系;第二像素矩阵为第三图像数据中像素的矩阵;第一像素矩阵为遮罩后的图像数据中像素的像素矩阵;Optionally, the above positional relationship is a correspondence between elements of a second pixel matrix and elements of a first pixel matrix; the second pixel matrix is the matrix of pixels of the third image data; the first pixel matrix is the matrix of pixels of the masked image data;
相应的,上述按照位置关系,在遮罩后的图像数据中添加第三图像数据,得到特效图像数据的步骤,具体可以包括:Correspondingly, the above step of adding third image data to the masked image data according to the position relationship to obtain the special effect image data may specifically include:
分别将遮罩后的图像数据和第三图像数据,转化为第一矩阵和第二矩阵;Respectively transform the masked image data and the third image data into a first matrix and a second matrix;
在第一矩阵中,按照位置关系,添加第二矩阵中的元素,获得第三矩阵;In the first matrix, according to the position relationship, add elements in the second matrix to obtain the third matrix;
将第三矩阵转化为图像数据,得到特效图像数据。Convert the third matrix into image data to obtain special effect image data.
其中,第三图像数据中像素的矩阵:第二像素矩阵,是将第三图像数据的每个像素作为矩阵的元素,并将第三图像数据的每个像素在第三图像数据中的位置作为矩阵的元素在矩阵中的位置,得到的矩阵。类似的,第一像素矩阵,是将遮罩后的图像数据的每个像素作为矩阵的元素,并将遮罩后的图像数据的每个像素在遮罩后的图像数据中的位置作为元素在矩阵中的位置,得到的矩阵。因此,可以将第二像素矩阵的元素与第一像素矩阵的元素之间的对应关系作为第三图像数据和遮罩后的图像数据之间的位置关系。Here, the matrix of pixels of the third image data, the second pixel matrix, is obtained by taking each pixel of the third image data as a matrix element, with the pixel's position in the third image data serving as the element's position in the matrix. Similarly, the first pixel matrix is obtained by taking each pixel of the masked image data as a matrix element, with the pixel's position in the masked image data serving as the element's position in the matrix. Therefore, the correspondence between the elements of the second pixel matrix and the elements of the first pixel matrix can be used as the positional relationship between the third image data and the masked image data.
在此基础上,为了按照位置关系,在遮罩后的图像数据中添加第三图像数据,得到特效图像数据,需要将遮罩后的图像数据转化为第一矩阵,将第三图像数据转化为第二矩阵,从而在第一矩阵中,按照位置关系,添加第二矩阵中的元素,获得第三矩阵;以及将第三矩阵转化为图像数据,得到特效图像数据。示例性的,位置关系为第二矩阵中的元素S11至S36,逐位对应第一矩阵中左上角的元素F11至F36,因此,可以在第一矩阵中元素F11至F36的位置处添加第二矩阵中的元素S11至S36,获得第三矩阵。On this basis, to add the third image data to the masked image data according to the positional relationship and obtain the special effect image data, the masked image data is converted into the first matrix and the third image data into the second matrix; then, in the first matrix, the elements of the second matrix are added according to the positional relationship to obtain the third matrix, and the third matrix is converted back into image data to obtain the special effect image data. Exemplarily, if the positional relationship maps elements S11 to S36 of the second matrix, element by element, to elements F11 to F36 in the upper-left corner of the first matrix, then elements S11 to S36 of the second matrix can be added at the positions of elements F11 to F36 of the first matrix to obtain the third matrix.
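The S11..S36 over F11..F36 example can be sketched with array slicing. Representing the element correspondence by a single top-left anchor is an assumption made for illustration; the function name is also hypothetical:

```python
import numpy as np

def add_element_matrix(first_matrix, second_matrix, top_left):
    """Obtain the third matrix by writing the second matrix (pixels of the
    third image data) into the first matrix (pixels of the masked image
    data) at the positions given by the element correspondence."""
    r, c = top_left
    h, w = second_matrix.shape[:2]
    third = first_matrix.copy()
    third[r:r + h, c:c + w] = second_matrix  # e.g. S11..S36 placed at F11..F36
    return third
```

Converting the third matrix back to image data then yields the special effect image data.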
在本可选实施例中,作为特定元素的第三图像数据是按像素位置添加的,与传统的以绘画的方式添加相比,相对而言降低了添加时的复杂程度和需要处理的数据量,可以减少对数据处理资源的消耗。In this optional embodiment, the third image data serving as the specific element is added by pixel position. Compared with traditional addition by drawing, this relatively reduces the complexity of the addition and the amount of data to be processed, and can reduce the consumption of data-processing resources.
相应于上述方法实施例,本申请一实施例还提供了虚拟物品的生成装置。Corresponding to the foregoing method embodiment, an embodiment of the present application also provides a virtual item generation device.
如图3所示,本申请一实施例提供的虚拟物品的生成装置,应用于服务器,该装置可以包括:As shown in FIG. 3, the virtual item generation device provided by an embodiment of the present application is applied to a server, and the device may include:
数据获取模块301,用于在检测到虚拟物品的生成指令时,获取所述生成指令对应的第一视频的第一图像数据、音频数据以及播放时间戳;其中,所述第一图像数据提供所述第一视频的画面;所述音频数据提供所述第一视频的声音;所述播放时间戳为用于保证所述第一图像数据和所述音频数据的内容同步的标识;The data acquisition module 301 is configured to, when a generation instruction for a virtual item is detected, acquire the first image data, audio data, and playback timestamp of the first video corresponding to the generation instruction; the first image data provides the picture of the first video; the audio data provides the sound of the first video; the playback timestamp is an identifier used to keep the content of the first image data and the audio data synchronized;
画面数据生成模块302,用于按照预设的关于虚拟物品的外形信息,调整所述第一图像数据,得到调整后的第一图像数据,并对调整后的第一图像数据进行转码,得到虚拟物品的画面数据;The screen data generation module 302 is configured to adjust the first image data according to preset shape information about the virtual item to obtain adjusted first image data, and to transcode the adjusted first image data to obtain the screen data of the virtual item;
虚拟物品展示模块303,用于按照所述播放时间戳,对所述虚拟物品的画面数据以及所述音频数据进行同步播放,得到所述生成指令对应的虚拟物品。The virtual item display module 303 is configured to synchronously play the screen data and the audio data of the virtual item according to the playback timestamp to obtain the virtual item corresponding to the generation instruction.
本申请实施例提供的方案中,服务器在检测到虚拟物品的生成指令时,通过获取生成指令对应的第一视频的第一图像数据、音频数据以及播放时间戳,可以实现按照预设的关于虚拟物品的外形信息,调整第一图像数据,并对调整后的第一图像数据进行转码,得到虚拟物品的画面数据;进而按照播放时间戳,对虚拟物品的画面数据以及音频数据进行播放,得到生成指令对应的虚拟物品。本方案中,生成虚拟物品所需的画面数据和音频数据为从音画同步的同一视频中提取的内容,且在虚拟物品生成时,按照播放时间对虚拟物品的画面数据和音频数据进行播放所生成,这样,可以保证虚拟物品的声音效果和画面效果的同步。可见,通过本方案,可以实现保证生成的虚拟物品的多样化展示效果中,音画同步的效果。In the solution provided by the embodiments of the present application, when the server detects a generation instruction for a virtual item, it acquires the first image data, audio data, and playback timestamp of the first video corresponding to the generation instruction; it can then adjust the first image data according to the preset shape information about the virtual item, transcode the adjusted first image data to obtain the screen data of the virtual item, and play the screen data and audio data of the virtual item according to the playback timestamp to obtain the virtual item corresponding to the generation instruction. In this solution, the screen data and audio data required to generate the virtual item are extracted from the same audio-visually synchronized video, and the virtual item is generated by playing them according to the playback time, which keeps the sound effect and screen effect of the virtual item synchronized. It can be seen that this solution achieves audio-picture synchronization among the diversified display effects of the generated virtual item.
如图4所示,本申请另一实施例提供的虚拟物品的生成装置,应用于服务器,该装置可以包括:As shown in Fig. 4, the device for generating virtual items provided by another embodiment of the present application is applied to a server, and the device may include:
数据获取模块401,用于在检测到虚拟物品的生成指令时,获取所述生成指令对应的第一视频的第一图像数据、音频数据以及播放时间戳;其中,所述第一图像数据提供所述第一视频的画面;所述音频数据提供所述第一视频的声音;所述播放时间戳为用于保证所述第一图像数据和所述音频数据的内容同步的标识;获取画面颜色为透明颜色的第二视频的第二图像数据;所述第二图像数据提供所述第二视频的画面;The data acquisition module 401 is configured to, when a generation instruction for a virtual item is detected, acquire the first image data, audio data, and playback timestamp of the first video corresponding to the generation instruction; the first image data provides the picture of the first video; the audio data provides the sound of the first video; the playback timestamp is an identifier used to keep the content of the first image data and the audio data synchronized; and to acquire second image data of a second video whose picture color is a transparent color; the second image data provides the picture of the second video;
画面数据生成模块402,包括:遮罩子模块4021和转码子模块4022;The screen data generation module 402 includes: a mask sub-module 4021 and a transcoding sub-module 4022;
所述遮罩子模块4021,用于按照预设的关于虚拟物品的外形信息,调整所述第二图像数据,得到调整后的第二图像数据;获取所述调整后的第一图像数据中属于透明区域的透明位置;利用所述调整后的第二图像数据,将所述调整后的第一图像数据中的所述透明位置遮罩为透明颜色,得到遮罩后的图像数据;The mask submodule 4021 is configured to adjust the second image data according to preset shape information about the virtual item to obtain adjusted second image data; obtain the transparent positions belonging to the transparent region of the adjusted first image data; and use the adjusted second image data to mask the transparent positions of the adjusted first image data to a transparent color, obtaining the masked image data;
所述转码子模块4022,用于对所述遮罩后的图像数据进行转码,得到虚拟物品的画面数据;The transcoding sub-module 4022 is used to transcode the masked image data to obtain screen data of virtual items;
虚拟物品展示模块403,用于按照所述播放时间戳,对所述虚拟物品的画面数据以及所述音频数据进行同步播放,得到所述生成指令对应的虚拟物品。The virtual item display module 403 is configured to synchronously play the screen data and the audio data of the virtual item according to the playback time stamp to obtain the virtual item corresponding to the generation instruction.
可选的,所述遮罩子模块4021,具体用于:Optionally, the mask submodule 4021 is specifically used for:
在所述调整后的第一图像数据中的所述透明位置,展示所述调整后的第二图像数据,得到遮罩后的图像数据。In the transparent position in the adjusted first image data, the adjusted second image data is displayed to obtain the masked image data.
可选的,所述遮罩子模块4021,具体用于:Optionally, the mask submodule 4021 is specifically used for:
将所述调整后的第一图像数据的所述透明位置作为透明通道;Using the transparent position of the adjusted first image data as a transparent channel;
将所述调整后的第二图像数据中,处于所述透明位置的像素,填充至所述透明通道中,得到透明通道数据;Filling the pixels in the transparent position in the adjusted second image data into the transparent channel to obtain transparent channel data;
将所述调整后的第一图像数据中,除处于所述透明位置的像素以外的像素,作为非透明通道数据;Taking pixels other than the pixels in the transparent position in the adjusted first image data as non-transparent channel data;
对所述调整后的第一图像数据中的所述透明通道数据,以及所述非透明通道数据进行渲染,得到遮罩后的图像数据。Rendering the transparent channel data and the non-transparent channel data in the adjusted first image data to obtain masked image data.
可选的,所述数据获取模块401,具体用于:Optionally, the data acquisition module 401 is specifically configured to:
在所述转码子模块4022对所述遮罩后的图像数据进行转码,得到虚拟物品的画面数据之前,获取第三图像数据,以及所述第三图像数据与所述遮罩后的图像数据之间的位置关系;所述第三图像数据用于作为虚拟物品的特定元素;Before the transcoding submodule 4022 transcodes the masked image data to obtain the screen data of the virtual item, acquire third image data and the positional relationship between the third image data and the masked image data; the third image data is used as a specific element of the virtual item;
按照所述位置关系,在所述遮罩后的图像数据中添加所述第三图像数据,得到特效图像数据;Adding the third image data to the masked image data according to the position relationship to obtain special effect image data;
相应的,所述转码子模块4022,具体用于:Correspondingly, the transcoding submodule 4022 is specifically used for:
对所述特效图像数据进行转码,获得虚拟物品。Transcoding the special effect image data to obtain virtual items.
可选的,所述位置关系为第二像素矩阵的元素与第一像素矩阵的元素之间的对应关系;所述第二像素矩阵为所述第三图像数据中像素的像素矩阵;所述第一像素矩阵为所述遮罩后的图像数据中像素的像素矩阵;Optionally, the positional relationship is a correspondence between elements of a second pixel matrix and elements of a first pixel matrix; the second pixel matrix is the pixel matrix of pixels of the third image data; the first pixel matrix is the pixel matrix of pixels of the masked image data;
所述数据获取模块401,具体用于:The data acquisition module 401 is specifically used for:
分别将所述遮罩后的图像数据和所述第三图像数据,转化为第一矩阵和第二矩阵;Respectively transforming the masked image data and the third image data into a first matrix and a second matrix;
在所述第一矩阵中,按照所述位置关系,添加所述第二矩阵中的元素,获得第三矩阵;In the first matrix, add elements in the second matrix according to the position relationship to obtain a third matrix;
将所述第三矩阵转化为图像数据,得到特效图像数据。The third matrix is converted into image data to obtain special effect image data.
相应于上述实施例,本申请实施例还提供了一种电子设备,如图5所示,该设备可以包括:Corresponding to the foregoing embodiment, an embodiment of the present application also provides an electronic device. As shown in FIG. 5, the device may include:
处理器501、通信接口502、存储器503和通信总线504,其中,处理器501,通信接口502,存储器503通过通信总线504完成相互间的通信;a processor 501, a communication interface 502, a memory 503, and a communication bus 504, where the processor 501, the communication interface 502, and the memory 503 communicate with one another through the communication bus 504;
存储器503,用于存放计算机程序;The memory 503 is used to store computer programs;
处理器501,用于执行上述存储器503上所存放的计算机程序时,实现上述实施例中任一虚拟物品的生成方法的步骤。The processor 501 is configured to implement the steps of any virtual item generation method in the foregoing embodiment when executing the computer program stored in the foregoing memory 503.
可以理解的是,本申请图5实施例中的电子设备具体可以为与互联网相关的客户端所对应的服务器。It is understandable that the electronic device in the embodiment of FIG. 5 of the present application may specifically be a server corresponding to a client related to the Internet.
本申请实施例提供的方案中,服务器在检测到虚拟物品的生成指令时,通过获取生成指令对应的第一视频的第一图像数据、音频数据以及播放时间戳,可以实现按照预设的关于虚拟物品的外形信息,调整第一图像数据,并对调整后的第一图像数据进行转码,得到虚拟物品的画面数据;进而按照播放时间戳,对虚拟物品的画面数据以及音频数据进行播放,得到生成指令对应的虚拟物品。本方案中,生成虚拟物品所需的画面数据和音频数据为从音画同步的同一视频中提取的内容,且在虚拟物品生成时,按照播放时间对虚拟物品的画面数据和音频数据进行播放所生成,这样,可以保证虚拟物品的声音效果和画面效果的同步。可见,通过本方案,可以实现保证生成的虚拟物品的多样化展示效果中,音画同步的效果。In the solution provided by the embodiments of the present application, when the server detects a generation instruction for a virtual item, it acquires the first image data, audio data, and playback timestamp of the first video corresponding to the generation instruction; it can then adjust the first image data according to the preset shape information about the virtual item, transcode the adjusted first image data to obtain the screen data of the virtual item, and play the screen data and audio data of the virtual item according to the playback timestamp to obtain the virtual item corresponding to the generation instruction. In this solution, the screen data and audio data required to generate the virtual item are extracted from the same audio-visually synchronized video, and the virtual item is generated by playing them according to the playback time, which keeps the sound effect and screen effect of the virtual item synchronized. It can be seen that this solution achieves audio-picture synchronization among the diversified display effects of the generated virtual item.
上述存储器可以包括RAM(Random Access Memory,随机存取存储器),也可以包括NVM(Non-Volatile Memory,非易失性存储器),例如至少一个磁盘存储器。可选的,存储器还可以是至少一个位于远离于上述处理器的存储装置。The foregoing memory may include RAM (Random Access Memory, random access memory), and may also include NVM (Non-Volatile Memory, non-volatile memory), such as at least one disk storage. Optionally, the memory may also be at least one storage device located far away from the foregoing processor.
上述处理器可以是通用处理器,包括CPU(Central Processing Unit,中央处理器)、NP(Network Processor,网络处理器)等;还可以是DSP(Digital Signal Processor,数字信号处理器)、ASIC(Application Specific Integrated Circuit,专用集成电路)、FPGA(Field-Programmable Gate Array,现场可编程门阵列)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。The above-mentioned processor may be a general-purpose processor, including CPU (Central Processing Unit), NP (Network Processor, network processor), etc.; it may also be DSP (Digital Signal Processor), ASIC (Application Specific Integrated Circuit), FPGA (Field-Programmable Gate Array, field programmable gate array) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components.
A computer-readable storage medium provided by an embodiment of the present application is included in an electronic device. The computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the virtual item generation method of any of the above embodiments.
In the solution provided by the embodiments of the present application, upon detecting a generation instruction for a virtual item, the server acquires first image data, audio data, and a playback timestamp of the first video corresponding to the generation instruction; it can then adjust the first image data according to preset shape information about the virtual item and transcode the adjusted first image data to obtain picture data of the virtual item; the picture data and the audio data of the virtual item are then played according to the playback timestamp, yielding the virtual item corresponding to the generation instruction. In this solution, the picture data and audio data needed to generate the virtual item are extracted from the same audio-video-synchronized video, and when the virtual item is generated, its picture data and audio data are played according to the playback timestamp. This guarantees that the sound effect and the picture effect of the virtual item remain synchronized. Thus, this solution achieves audio-video synchronization within the diversified display effects of the generated virtual item.
In yet another embodiment provided by the present application, a computer program product containing instructions is also provided which, when run on a computer, causes the computer to execute the virtual item generation method described in any of the above embodiments.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, DSL (Digital Subscriber Line)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD (Digital Versatile Disc)), or a semiconductor medium (e.g., SSD (Solid State Disk)), etc.
In this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further restrictions, an element defined by the phrase "including a..." does not exclude the existence of additional identical elements in the process, method, article, or device that includes that element.
The various embodiments in this specification are described in a related manner; for the same or similar parts among the embodiments, reference may be made from one to another, and each embodiment focuses on its differences from the others. In particular, since the apparatus and electronic-device embodiments are basically similar to the method embodiments, their description is relatively brief; for relevant details, refer to the description of the method embodiments.
The above are only preferred embodiments of the present application and are not intended to limit its scope of protection. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application is included in its scope of protection.
Claims (14)
- A method for generating a virtual item, applied to a server, the method comprising:
upon detecting a generation instruction for a virtual item, acquiring first image data, audio data, and a playback timestamp of a first video corresponding to the generation instruction; wherein the first image data provides the picture of the first video; the audio data provides the sound of the first video; and the playback timestamp is an identifier used to keep the content of the first image data and the audio data synchronized;
adjusting the first image data according to preset shape information about the virtual item to obtain adjusted first image data, and transcoding the adjusted first image data to obtain picture data of the virtual item;
playing the picture data of the virtual item and the audio data synchronously according to the playback timestamp, to obtain the virtual item corresponding to the generation instruction.
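The synchronized-playback step of claim 1 can be sketched as a timestamp-ordered merge of the two streams. The following Python sketch uses hypothetical `(timestamp, payload)` tuples for frames; the claim itself does not prescribe any particular data layout:

```python
def interleave_by_timestamp(picture_frames, audio_frames):
    """Merge picture frames and audio frames into one playback order.

    Each frame is a (timestamp, payload) tuple; the playback timestamp
    is the identifier that keeps the two streams' content synchronized.
    Frames are emitted in non-decreasing timestamp order.
    """
    merged = []
    i = j = 0
    while i < len(picture_frames) and j < len(audio_frames):
        if picture_frames[i][0] <= audio_frames[j][0]:
            merged.append(("picture", picture_frames[i]))
            i += 1
        else:
            merged.append(("audio", audio_frames[j]))
            j += 1
    # One stream is exhausted; append the remainder of the other.
    merged.extend(("picture", f) for f in picture_frames[i:])
    merged.extend(("audio", f) for f in audio_frames[j:])
    return merged
```

A player that consumes the merged list in order presents each picture frame and audio frame at the moment its shared timestamp dictates, which is what keeps the sound effect and picture effect of the virtual item aligned.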
- The method according to claim 1, wherein before the step of transcoding the adjusted first image data to obtain picture data of the virtual item, the method further comprises:
acquiring second image data of a second video whose picture color is a transparent color; the second image data providing the picture of the second video;
adjusting the second image data according to the preset shape information about the virtual item, to obtain adjusted second image data;
and wherein the step of transcoding the adjusted first image data to obtain picture data of the virtual item comprises:
acquiring transparent positions belonging to a transparent area in the adjusted first image data;
masking, using the adjusted second image data, the transparent positions in the adjusted first image data to a transparent color, to obtain masked image data;
transcoding the masked image data to obtain the picture data of the virtual item.
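The masking step of claim 2 amounts to substituting, at every transparent position of the first image, the pixel the second (transparent-color) image holds there. A minimal Python sketch, assuming images are same-sized 2D lists of RGBA tuples and alpha 0 marks a transparent position (an encoding chosen here for illustration only):

```python
def mask_transparent(first, second):
    """Mask the transparent positions of `first` using `second`.

    `first` is the adjusted first image data, `second` the adjusted
    second image data whose picture color is the transparent color.
    Wherever `first` has alpha == 0 (a transparent position), the pixel
    from `second` is used; elsewhere `first` is kept unchanged.
    """
    masked = []
    for row_f, row_s in zip(first, second):
        masked.append([s if f[3] == 0 else f
                       for f, s in zip(row_f, row_s)])
    return masked
```

The resulting masked image data is what the method then transcodes into the picture data of the virtual item.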
- The method according to claim 2, wherein the step of masking, using the adjusted second image data, the transparent positions in the adjusted first image data to a transparent color, to obtain masked image data, comprises:
displaying the adjusted second image data at the transparent positions in the adjusted first image data, to obtain the masked image data.
- The method according to claim 2, wherein the step of masking, using the adjusted second image data, the transparent positions in the adjusted first image data to a transparent color, to obtain masked image data, comprises:
using the transparent positions of the adjusted first image data as a transparent channel;
filling the pixels at the transparent positions in the adjusted second image data into the transparent channel, to obtain transparent channel data;
using the pixels in the adjusted first image data other than those at the transparent positions as non-transparent channel data;
rendering the transparent channel data and the non-transparent channel data of the adjusted first image data, to obtain the masked image data.
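The channel-split variant above can be sketched in two stages: split the first image into transparent-channel data (filled from the second image) and non-transparent-channel data, then render the two datasets back into one frame. As before, this Python sketch assumes same-sized 2D lists of RGBA tuples with alpha 0 marking a transparent position; the dict-based channel representation is an illustrative assumption, not the claimed format:

```python
def split_channels(first, second):
    """Split `first` into transparent and non-transparent channel data.

    Transparent positions (alpha == 0) are filled from `second`, the
    transparent-color image; all other pixels form the non-transparent
    channel data. Returns two dicts mapping (row, col) -> RGBA pixel.
    """
    transparent_channel = {}
    opaque_channel = {}
    for r, (row_f, row_s) in enumerate(zip(first, second)):
        for c, (f, s) in enumerate(zip(row_f, row_s)):
            if f[3] == 0:
                transparent_channel[(r, c)] = s  # fill from second image
            else:
                opaque_channel[(r, c)] = f
    return transparent_channel, opaque_channel


def render(shape, transparent_channel, opaque_channel):
    """Re-compose the two channel datasets into one masked image."""
    rows, cols = shape
    return [[transparent_channel.get((r, c), opaque_channel.get((r, c)))
             for c in range(cols)]
            for r in range(rows)]
```

Splitting before rendering keeps the two channel datasets independently addressable, which matches the claim's separate "transparent channel data" and "non-transparent channel data" steps.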
- The method according to claim 2, wherein before the step of transcoding the masked image data to obtain picture data of the virtual item, the method further comprises:
acquiring third image data, and a positional relationship between the third image data and the masked image data; the third image data serving as a specific element of the virtual item;
adding the third image data into the masked image data according to the positional relationship, to obtain special-effect image data;
and wherein the step of transcoding the masked image data to obtain picture data of the virtual item comprises:
transcoding the special-effect image data to obtain the picture data of the virtual item.
- The method according to claim 5, wherein the positional relationship is a correspondence between elements of a second pixel matrix and elements of a first pixel matrix; the second pixel matrix being the pixel matrix of the pixels in the third image data, and the first pixel matrix being the pixel matrix of the pixels in the masked image data;
and wherein the step of adding the third image data into the masked image data according to the positional relationship to obtain special-effect image data comprises:
converting the masked image data and the third image data into a first matrix and a second matrix, respectively;
adding, in the first matrix, the elements of the second matrix according to the positional relationship, to obtain a third matrix;
converting the third matrix into image data, to obtain the special-effect image data.
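The matrix operation in claim 6 can be sketched as writing the second matrix's elements into a copy of the first matrix at the positions given by the element-to-element correspondence. In this Python sketch the matrices are plain 2D lists and the correspondence is a hypothetical dict from second-matrix coordinates to first-matrix coordinates; the claim does not fix either representation:

```python
def overlay_by_position(first_matrix, second_matrix, position_map):
    """Add the elements of `second_matrix` (the special-effect element)
    into `first_matrix` (the masked frame) per `position_map`.

    `position_map` maps (row, col) in the second matrix to (row, col)
    in the first. The returned "third matrix" is a copy of the first
    matrix with the mapped elements written in; inputs are not mutated.
    """
    third = [row[:] for row in first_matrix]  # shallow row copies
    for (sr, sc), (fr, fc) in position_map.items():
        third[fr][fc] = second_matrix[sr][sc]
    return third
```

Converting the third matrix back into image data then yields the special-effect image data that is transcoded into the virtual item's picture data.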
- An apparatus for generating a virtual item, applied to a server, the apparatus comprising:
a data acquisition module, configured to, upon detecting a generation instruction for a virtual item, acquire first image data, audio data, and a playback timestamp of a first video corresponding to the generation instruction; wherein the first image data provides the picture of the first video; the audio data provides the sound of the first video; and the playback timestamp is an identifier used to keep the content of the first image data and the audio data synchronized;
a picture data generation module, configured to adjust the first image data according to preset shape information about the virtual item to obtain adjusted first image data, and to transcode the adjusted first image data to obtain picture data of the virtual item;
a virtual item display module, configured to play the picture data of the virtual item and the audio data synchronously according to the playback timestamp, to obtain the virtual item corresponding to the generation instruction.
- The apparatus according to claim 7, wherein the data acquisition module is specifically configured to:
before the adjusted first image data is transcoded to obtain picture data of the virtual item, acquire second image data of a second video whose picture color is a transparent color; the second image data providing the picture of the second video;
wherein the picture data generation module comprises a mask submodule and a transcoding submodule;
the mask submodule being configured to adjust the second image data according to the preset shape information about the virtual item to obtain adjusted second image data; acquire transparent positions belonging to a transparent area in the adjusted first image data; and mask, using the adjusted second image data, the transparent positions in the adjusted first image data to a transparent color, to obtain masked image data;
the transcoding submodule being configured to transcode the masked image data to obtain the picture data of the virtual item.
- The apparatus according to claim 8, wherein the mask submodule is specifically configured to:
display the adjusted second image data at the transparent positions in the adjusted first image data, to obtain the masked image data.
- The apparatus according to claim 8, wherein the mask submodule is specifically configured to:
use the transparent positions of the adjusted first image data as a transparent channel;
fill the pixels at the transparent positions in the adjusted second image data into the transparent channel, to obtain transparent channel data;
use the pixels in the adjusted first image data other than those at the transparent positions as non-transparent channel data;
render the transparent channel data and the non-transparent channel data of the adjusted first image data, to obtain the masked image data.
- The apparatus according to claim 8, wherein the data acquisition module is specifically configured to:
before the transcoding submodule transcodes the masked image data to obtain picture data of the virtual item, acquire third image data, and a positional relationship between the third image data and the masked image data; the third image data serving as a specific element of the virtual item;
and add the third image data into the masked image data according to the positional relationship, to obtain special-effect image data;
wherein the transcoding submodule is specifically configured to:
transcode the special-effect image data to obtain the virtual item.
- The apparatus according to claim 11, wherein the positional relationship is a correspondence between elements of a second pixel matrix and elements of a first pixel matrix; the second pixel matrix being the pixel matrix of the pixels in the third image data, and the first pixel matrix being the pixel matrix of the pixels in the masked image data;
wherein the data acquisition module is specifically configured to:
convert the masked image data and the third image data into a first matrix and a second matrix, respectively;
add, in the first matrix, the elements of the second matrix according to the positional relationship, to obtain a third matrix;
and convert the third matrix into image data, to obtain the special-effect image data.
- An electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another via the bus; the memory is configured to store a computer program; and the processor is configured to execute the program stored in the memory, to implement the method steps of any one of claims 1-6.
- A computer-readable storage medium, wherein a computer program is stored in the storage medium, and when the computer program is executed by a processor, the method steps of any one of claims 1-6 are implemented.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910578523.7A CN110213640B (en) | 2019-06-28 | 2019-06-28 | Virtual article generation method, device and equipment |
CN201910578523.7 | 2019-06-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020258907A1 true WO2020258907A1 (en) | 2020-12-30 |
Family
ID=67795510
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/077034 WO2020258907A1 (en) | 2019-06-28 | 2020-02-27 | Virtual article generation method, apparatus and device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110213640B (en) |
WO (1) | WO2020258907A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110213640B (en) * | 2019-06-28 | 2021-05-14 | 香港乐蜜有限公司 | Virtual article generation method, device and equipment |
CN112348969B (en) * | 2020-11-06 | 2023-04-25 | 北京市商汤科技开发有限公司 | Display method and device in augmented reality scene, electronic equipment and storage medium |
US11769289B2 (en) * | 2021-06-21 | 2023-09-26 | Lemon Inc. | Rendering virtual articles of clothing based on audio characteristics |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007082485A1 (en) * | 2006-01-21 | 2007-07-26 | Tencent Technology (Shenzhen) Co., Ltd. | System and method for creating interactive video image |
CN106713988A (en) * | 2016-12-09 | 2017-05-24 | 福建星网视易信息系统有限公司 | Beautifying method and system for virtual scene live |
CN108769826A (en) * | 2018-06-22 | 2018-11-06 | 广州酷狗计算机科技有限公司 | Live media stream acquisition methods, device, terminal and storage medium |
CN110213640A (en) * | 2019-06-28 | 2019-09-06 | 香港乐蜜有限公司 | Generation method, device and the equipment of virtual objects |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102289339B (en) * | 2010-06-21 | 2013-10-30 | 腾讯科技(深圳)有限公司 | Method and device for displaying expression information |
CN101950405B (en) * | 2010-08-10 | 2012-05-30 | 浙江大学 | Video content-based watermarks adding method |
CN102663785B (en) * | 2012-03-29 | 2014-12-10 | 上海华勤通讯技术有限公司 | Mobile terminal and image processing method thereof |
JP6080249B2 (en) * | 2012-09-13 | 2017-02-15 | 富士フイルム株式会社 | Three-dimensional image display apparatus and method, and program |
EP2976749A4 (en) * | 2013-03-20 | 2016-10-26 | Intel Corp | Avatar-based transfer protocols, icon generation and doll animation |
CN105338410A (en) * | 2014-07-07 | 2016-02-17 | 乐视网信息技术(北京)股份有限公司 | Method and device for displaying barrage of video |
KR20160064328A (en) * | 2014-11-27 | 2016-06-08 | 정승화 | Apparatus and method for supporting special effects with motion cartoon systems |
CN106303653A (en) * | 2016-08-12 | 2017-01-04 | 乐视控股(北京)有限公司 | A kind of image display method and device |
CN110114815B (en) * | 2016-12-22 | 2022-09-06 | 麦克赛尔株式会社 | Projection type image display device and image display method used for the same |
CN107027046B (en) * | 2017-04-13 | 2020-03-10 | 广州华多网络科技有限公司 | Audio and video processing method and device for assisting live broadcast |
CN107169872A (en) * | 2017-05-09 | 2017-09-15 | 北京龙杯信息技术有限公司 | Method, storage device and terminal for generating virtual present |
CN108174227B (en) * | 2017-12-27 | 2020-12-22 | 广州酷狗计算机科技有限公司 | Virtual article display method and device and storage medium |
CN108093307B (en) * | 2017-12-29 | 2021-01-01 | 广州酷狗计算机科技有限公司 | Method and system for acquiring playing file |
CN109413338A (en) * | 2018-09-28 | 2019-03-01 | 北京戏精科技有限公司 | A kind of method and system of scan picture |
CN109300180A (en) * | 2018-10-18 | 2019-02-01 | 看见故事(苏州)影视文化发展有限公司 | A kind of 3D animation method and calculate producing device |
CN109191549B (en) * | 2018-11-14 | 2023-11-10 | 广州酷狗计算机科技有限公司 | Method and device for displaying animation |
- 2019-06-28 CN CN201910578523.7A patent/CN110213640B/en active Active
- 2020-02-27 WO PCT/CN2020/077034 patent/WO2020258907A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN110213640A (en) | 2019-09-06 |
CN110213640B (en) | 2021-05-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106611435B (en) | Animation processing method and device | |
WO2020258907A1 (en) | Virtual article generation method, apparatus and device | |
US11418832B2 (en) | Video processing method, electronic device and computer-readable storage medium | |
WO2018045927A1 (en) | Three-dimensional virtual technology based internet real-time interactive live broadcasting method and device | |
WO2020077854A1 (en) | Video generation method and device, electronic device and computer storage medium | |
CN106303289B (en) | Method, device and system for fusion display of real object and virtual scene | |
WO2022048097A1 (en) | Single-frame picture real-time rendering method based on multiple graphics cards | |
WO2020103218A1 (en) | Live stream processing method in webrtc and stream pushing client | |
CN111899322B (en) | Video processing method, animation rendering SDK, equipment and computer storage medium | |
US9224156B2 (en) | Personalizing video content for Internet video streaming | |
CN106713988A (en) | Beautifying method and system for virtual scene live | |
US12069321B2 (en) | Data model for representation and streaming of heterogeneous immersive media | |
CN111669646A (en) | Method, device, equipment and medium for playing transparent video | |
CN111464828A (en) | Virtual special effect display method, device, terminal and storage medium | |
US11468629B2 (en) | Methods and apparatus for handling occlusions in split rendering | |
WO2022024780A1 (en) | Information processing device, information processing method, video distribution method, and information processing system | |
WO2012071844A1 (en) | Method and device for generating and playing multimedia animation | |
CN110769241B (en) | Video frame processing method and device, user side and storage medium | |
CN112153472A (en) | Method and device for generating special picture effect, storage medium and electronic equipment | |
CN113301425A (en) | Video playing method, video playing device and electronic equipment | |
US20220210520A1 (en) | Online video data output method, system, and cloud platform | |
TW202240550A (en) | Cloud rendering of texture map | |
CN111179386A (en) | Animation generation method, device, equipment and storage medium | |
WO2023207516A1 (en) | Live streaming video processing method and apparatus, electronic device, and storage medium | |
US20240275832A1 (en) | Method and apparatus for providing performance content |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20832956, Country of ref document: EP, Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20832956, Country of ref document: EP, Kind code of ref document: A1