CN115396688B - Multi-person interactive network live broadcast method and system based on virtual scene - Google Patents

Multi-person interactive network live broadcast method and system based on virtual scene

Info

Publication number
CN115396688B
Authority
CN
China
Prior art keywords
rendering
scene
watching
live broadcast
virtual
Prior art date
Legal status
Active
Application number
CN202211341211.2A
Other languages
Chinese (zh)
Other versions
CN115396688A
Inventor
胡晓 (Hu Xiao)
Current Assignee
Beijing Playbroadcast Mutual Entertainment Technology Co., Ltd.
Original Assignee
Beijing Playbroadcast Mutual Entertainment Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Beijing Playbroadcast Mutual Entertainment Technology Co., Ltd.
Priority to CN202211341211.2A
Publication of CN115396688A
Application granted
Publication of CN115396688B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21 Server components or server architectures
    • H04N 21/218 Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44012 Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/475 End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788 Supplemental services communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a multi-person interactive network live broadcast method and system based on a virtual scene. The method comprises the following steps: constructing a virtual scene based on live broadcast equipment; determining a default viewing position of each viewing device, and acquiring a first virtual position matched with the default viewing position from the virtual scene; recording the movement track of the viewing position of each viewing device, and matching a corresponding virtual viewing track to the corresponding viewing device according to the movement track; when a live broadcast interaction request corresponding to a viewing user exists at the current moment, determining the required rendering mode and the corresponding voice object and video object according to the live broadcast interaction request; and, based on the required rendering mode, performing a first rendering on the voice track of the voice object in the virtual viewing track and a second rendering on the video track of the video object in the virtual viewing track, thereby realizing multi-person interactive network live broadcast. The method satisfies the user's interaction experience and provides an effective basis for realizing multi-person interactive network live broadcast.

Description

Multi-person interactive network live broadcast method and system based on virtual scene
Technical Field
The invention relates to the technical field of network interaction, in particular to a multi-person interactive network live broadcast method and system based on a virtual scene.
Background
On common network interaction platforms, such as Douyin and Kuaishou, interaction between viewers and the anchor is generally realized by chatting, sending gifts or connecting by voice in the anchor's live broadcast room, and common multi-person interaction is limited to voice or video interaction, which cannot satisfy the users' interaction experience.
A virtual scene, by contrast, enables interacting users to realize live interaction in different ways at different positions.
Therefore, the invention provides a multi-person interactive network live broadcast method and system based on a virtual scene.
Disclosure of Invention
The invention provides a multi-person interactive network live broadcast method and system based on a virtual scene, which record the virtual viewing tracks of different viewing devices and determine the tracks of the voice objects and video objects in different requests for rendering, thereby improving the live interaction effect, satisfying the user's interaction experience, and providing an effective basis for realizing multi-person interactive network live broadcast.
The invention provides a multi-person interactive network live broadcast method based on a virtual scene, which comprises the following steps:
step 1: constructing a virtual scene based on live broadcast equipment;
step 2: determining a default viewing position of each viewing device, and acquiring a first virtual position matched with the default viewing position from the virtual scene;
step 3: recording the movement track of the viewing position of each viewing device, and matching a corresponding virtual viewing track to the corresponding viewing device according to the movement track of the viewing position;
step 4: when a live broadcast interaction request corresponding to a viewing user exists at the current moment, determining the required rendering mode, the corresponding voice object and the corresponding video object according to the live broadcast interaction request;
step 5: based on the required rendering mode, performing a first rendering on the voice track of the voice object in the virtual viewing track and a second rendering on the video track of the video object in the virtual viewing track, thereby realizing multi-person interactive network live broadcast.
Preferably, the constructing of the virtual scene based on the live device includes:
determining unit position distribution of each scene unit in the live broadcast equipment;
determining a rendering scene of each scene unit according to the unit position distribution;
determining a scene gap between adjacent rendered scenes, and when the scene gap is fitted, that is, the adjacent rendered scenes join without a gap, judging that the adjacent rendered scenes do not need to be optimized;
otherwise, determining a first coverage angle of a first scene and a second coverage angle of a second scene in the adjacent scenes, and determining a first adjustable angle of the first scene and a second adjustable angle of the second scene;
when the first adjustable angle and the second adjustable angle are both 0, acquiring a first edge picture in a first scene connected with the scene gap and acquiring a second edge picture in a second scene connected with the scene gap;
constructing a first transition picture based on the first edge picture and the second edge picture, and storing the first transition picture in a newly added first scene unit;
when the first adjustable angle is 0 and the second adjustable angle is not 0, carrying out angle expansion adjustment on a second scene, and if no scene gap exists after the angle adjustment, keeping the adjusted second scene unchanged;
if a new scene gap still exists after the angle adjustment, determining whether a transition scene needs to be acquired according to a first ratio of the new scene gap to a preset scene gap;
when the first ratio is smaller than a preset ratio, acquiring edge colors from the first scene and the second scene after angle expansion adjustment, and filling the scene gaps with intermediate colors;
when the first ratio is larger than or equal to a preset ratio, acquiring a second transition picture, and storing the second transition picture in a newly added second scene unit;
and constructing the virtual scene based on the scene adjustment results.
Preferably, the constructing a first transition picture based on the first edge picture and the second edge picture comprises:
constructing a first picture matrix of the first edge picture, and constructing a second picture matrix of the second edge picture;
extracting a last column vector of the first picture matrix and a first column vector of the second picture matrix;
matching elements at the same row position in the last column vector and the first column vector, and respectively obtaining the pixel difference of the two elements in each matching combination;
when the pixel difference is 0, rendering the transition line in the corresponding matching combination according to the pixel values of the two corresponding elements;
when the pixel difference is not 0, judging whether the pixel difference is in a mean rendering range, if so, obtaining the pixel average value of two elements in the corresponding matching combination to render the corresponding transition line;
if the pixel difference is not within the rendering range of the mean value, extracting a row vector of a first element and a row vector of a second element in the corresponding matching pair, performing transition analysis on the two row vectors based on a vector analysis model to obtain a rendering pixel value, and rendering a corresponding transition line according to the rendering pixel value;
obtaining an initial picture based on a rendering result;
and carrying out pixel smoothing processing on the initial picture to construct a transition picture.
Preferably, determining a default viewing position of each viewing device, and acquiring a first virtual position matching the default viewing position from the virtual scene includes:
capturing whether the watching equipment enters a virtual scene or not, and if so, regarding a position corresponding to the entering moment as a default watching position;
and establishing a position relation between the default viewing position and each virtual position in the virtual scene, and acquiring a first virtual position matched with the default viewing position.
Preferably, the recording the movement track of the viewing position of each viewing device, and matching the corresponding virtual viewing track to the corresponding viewing device according to the movement track of the viewing position includes:
recording a movement track of a watching position of the corresponding watching equipment based on the displacement sensor;
and acquiring a virtual watching track consistent with the movement track of the watching position based on the watching mapping relation between the watching equipment and the virtual scene.
Preferably, when a live broadcast interaction request corresponding to a watching user exists at the current moment, determining a rendering mode, a corresponding voice object and a corresponding video object according to the live broadcast interaction request includes:
determining the request quantity of the live interaction requests existing at the current moment;
when the request number exceeds the upper limit of the live broadcast connection number, crawling information browsing records of user accounts of the watching equipment matched with the live broadcast interaction request;
determining account habits of corresponding user accounts according to the crawling result;
determining the closeness degree of the corresponding live broadcast content according to the account habit;
sorting the closeness degrees from large to small, and screening, from all the live broadcast interaction requests, a number of viewing devices equal to the upper limit of the live broadcast connection number for connection;
performing request analysis on each live broadcast interaction request connected with the viewing equipment, and determining a rendering mode needing to be matched with a request analysis result from a result-mode database;
simultaneously, request splitting is carried out on each live broadcast interaction request connected with the watching equipment, and a request object matched with each splitting request is determined from a splitting-object database;
determining the rendering condition of the mode needing to be rendered, matching the rendering condition with the request objects one by one, and matching corresponding rendering content to each request object;
wherein the request object includes: voice objects and video objects.
Preferably, the first rendering of the voice track of the voice object in the virtual viewing track based on the required rendering mode includes:
determining voice rendering content corresponding to each voice object;
predicting a first appearance moment and a voice track of each voice object based on a virtual scene based on a voice object prediction model;
determining a first rendering thread according to voice rendering content of a corresponding voice object, and setting a trigger point of a first appearance moment on the first rendering thread;
capturing an actual rendering result which is rendered on the matched voice track in real time according to the corresponding rendering thread, comparing the actual rendering result with a preset rendering result, and determining a first rendering difference;
optimizing the voice track based on the first rendering difference.
Preferably, the second rendering of the video track of the video object in the virtual view track includes:
determining video rendering content corresponding to each video object;
predicting a second appearance moment of each video object based on the virtual scene and a video track based on the video object prediction model;
determining a second rendering thread according to the video rendering content of the corresponding video object, and setting a trigger point of a second occurrence moment on the second rendering thread;
capturing an actual rendering result of real-time rendering on the matched video track according to the corresponding rendering thread, comparing the actual rendering result with a preset rendering result, and determining a second rendering difference;
optimizing the video track based on the second rendering difference.
Preferably, determining the closeness degree of the corresponding live content according to the account habit includes:
according to the live broadcast label and the content information of the live broadcast content, obtaining live broadcast bias;
determining habit deviation of corresponding viewing equipment according to the habit of the account;
determining a first tight factor set of each habit bias and all live broadcast biases;
A_i = { sim(x_i, y_j) | j = 1, 2, …, m1 }

wherein A_i denotes the first tight factor set of the i-th habit bias x_i; sim(x_i, y_j) denotes the similarity between the i-th habit bias x_i and the j-th live bias y_j; sim represents the similarity operator; m1 represents the total number of live biases;

calculating the closeness degree Y according to all the first tight factor sets:

Y = (1/n1) · Σ_{i=1}^{n1} max_{1≤j≤m1} sim(x_i, y_j)

wherein max_{1≤j≤m1} sim(x_i, y_j) denotes the maximum similarity between the i-th habit bias x_i and all the live biases; x_i denotes the i-th habit bias; n1 represents the number of habit biases.
The invention provides a multi-person interactive network live broadcast system based on a virtual scene, which comprises:
the scene construction module is used for constructing a virtual scene based on the live broadcast equipment;
the position determining module is used for determining a default viewing position of each viewing device and acquiring a first virtual position matched with the default viewing position from the virtual scene;
the track determining module is used for recording the moving track of the watching position of each watching device and matching a corresponding virtual watching track to the corresponding watching device according to the moving track of the watching position;
the object determining module is used for determining a rendering mode, a corresponding voice object and a corresponding video object according to a live broadcast interaction request when the live broadcast interaction request corresponding to a watching user exists at the current moment;
and the rendering module is used for performing, based on the required rendering mode, a first rendering on the voice track of the voice object in the virtual viewing track and a second rendering on the video track of the video object in the virtual viewing track, thereby realizing multi-person interactive network live broadcast.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flowchart of a multi-user interactive live webcast method based on a virtual scene in an embodiment of the present invention;
fig. 2 is a structural diagram of a multi-user interactive network live broadcast system based on a virtual scene in an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it should be understood that they are presented herein only to illustrate and explain the present invention and not to limit the present invention.
The invention provides a multi-person interactive network live broadcast method based on a virtual scene, which comprises the following steps as shown in figure 1:
step 1: constructing a virtual scene based on live broadcast equipment;
step 2: determining a default viewing position of each viewing device, and acquiring a first virtual position matched with the default viewing position from the virtual scene;
step 3: recording the movement track of the viewing position of each viewing device, and matching a corresponding virtual viewing track to the corresponding viewing device according to the movement track of the viewing position;
step 4: when a live broadcast interaction request corresponding to a viewing user exists at the current moment, determining the required rendering mode, the corresponding voice object and the corresponding video object according to the live broadcast interaction request;
step 5: based on the required rendering mode, performing a first rendering on the voice track of the voice object in the virtual viewing track and a second rendering on the video track of the video object in the virtual viewing track, thereby realizing multi-person interactive network live broadcast.
In this embodiment, the virtual scene is implemented by a computer simulation system; in this method, the virtual scene is an interactive scene created by simulating the live broadcast.
In this embodiment, the default viewing position refers to the position of the viewing device at the moment the viewing user enters the live broadcast room. The content seen by the viewing device at the default viewing position is the content at position A in the virtual scene, so position A is taken as the corresponding first virtual position; since the virtual content viewed changes as the viewing device moves, the related virtual viewing track can be obtained by matching.
In this embodiment, in the process of viewing live broadcast, if a live broadcast interaction request is sent based on a viewing device, at this time, the request needs to be split and parsed to determine a corresponding voice object, a video object, and a rendering mode.
In this embodiment, the required rendering mode is obtained based on database matching related to the parsing result, and the database includes the parsing result and the rendering mode matching with the parsing result, so that the corresponding rendering mode can be determined, and the rendering mode is for rendering the voice object and the video object, mainly to satisfy the experience effect of the user.
In this embodiment, the voice object refers to the fact that, during interaction, the voice segments at different moments are output with different timbres and the like, which can be regarded as corresponding renderings; the video object refers to the fact that the video segments at different moments are played with different presentation effects, which can likewise be regarded as corresponding renderings.
In this embodiment, voice rendering mainly concerns aspects such as timbre and dubbing; for example, when a certain fixed action occurs in the video, a matching dubbing reminder can be issued for that action.
In this embodiment, video rendering mainly concerns special effects and the like; for example, when a certain fixed action appears in the video, it is played back enlarged or played back repeatedly.
In this embodiment, the voice track refers to a track that can implement effective rendering of the corresponding voice object, and the video track refers to a track that can implement effective rendering of the corresponding video object.
The beneficial effects of the above technical scheme are: the live broadcast interactive effect is improved by recording the virtual watching tracks of different watching devices and determining the tracks of the voice objects and the video objects in different requests for rendering, the user interactive experience is met, and an effective basis is provided for realizing multi-user interactive network live broadcast.
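For concreteness, the following minimal Python sketch wires steps 1 to 5 together. Every name in it (build_virtual_scene, match_virtual_track, parse_request, render_tracks, and the simple dictionaries they exchange) is a hypothetical placeholder for the modules described above, not the claimed implementation.

```python
# Minimal illustrative pipeline for steps 1-5; all names and data structures are hypothetical.

def build_virtual_scene(scene_units):
    """Step 1: build the virtual scene from the scene units of the live broadcast equipment."""
    return {"units": scene_units}

def first_virtual_position(scene, default_viewing_position):
    """Step 2: map a viewing device's default viewing position to a matched virtual position."""
    return default_viewing_position  # identity mapping stands in for the real matching

def match_virtual_track(scene, movement_track):
    """Step 3: map the recorded movement track to a virtual viewing track."""
    return [first_virtual_position(scene, p) for p in movement_track]

def parse_request(request):
    """Step 4: derive the required rendering mode, voice objects and video objects from a request."""
    return request.get("mode", "default"), request.get("voice", []), request.get("video", [])

def render_tracks(mode, voice_objects, video_objects, virtual_track):
    """Step 5: first rendering (voice track) and second rendering (video track)."""
    voice_track = [(mode, obj, pos) for obj in voice_objects for pos in virtual_track]
    video_track = [(mode, obj, pos) for obj in video_objects for pos in virtual_track]
    return voice_track, video_track

if __name__ == "__main__":
    scene = build_virtual_scene(["unit_1", "unit_2"])
    track = match_virtual_track(scene, [(0, 0), (1, 0), (1, 1)])
    mode, voice, video = parse_request({"mode": "spotlight", "voice": ["greeting"], "video": ["wave"]})
    print(render_tracks(mode, voice, video, track))
```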
The invention provides a multi-person interactive network live broadcast method based on a virtual scene, which constructs the virtual scene based on live broadcast equipment and comprises the following steps:
determining unit position distribution of each scene unit in live broadcast equipment;
determining a rendering scene of each scene unit according to the unit position distribution;
determining a scene gap between adjacent rendered scenes, and when the scene gap is fitted, that is, the adjacent rendered scenes join without a gap, judging that the adjacent rendered scenes do not need to be optimized;
otherwise, determining a first coverage angle of a first scene and a second coverage angle of a second scene in the adjacent scenes, and determining a first adjustable angle of the first scene and a second adjustable angle of the second scene;
when the first adjustable angle and the second adjustable angle are both 0, acquiring a first edge picture in a first scene connected with the scene gap and acquiring a second edge picture in a second scene connected with the scene gap;
constructing a first transition picture based on the first edge picture and the second edge picture, and storing the first transition picture in a newly added first scene unit;
when the first adjustable angle is 0 and the second adjustable angle is not 0, carrying out angle expansion adjustment on the second scene, and if no scene gap exists after the angle adjustment, keeping the adjusted second scene unchanged;
if a new scene gap still exists after the angle adjustment, determining whether the acquisition of the transition scene is needed or not according to a first ratio of the new scene gap to a preset scene gap;
when the first ratio is smaller than a preset ratio, acquiring edge colors from the first scene and the second scene after angle expansion adjustment, and filling the scene gaps with intermediate colors;
when the first ratio is larger than or equal to a preset ratio, acquiring a second transition picture, and storing the second transition picture in a newly added second scene unit;
and constructing the virtual scene based on the scene adjustment results.
In this embodiment, the scene units are hardware components of the live broadcast equipment, and the virtual scene is constructed based on the unit position distribution of the different units.
In this embodiment, each scene unit is preset with a rendered scene. For example, scene unit 1 shoots the content of area 1 and renders a scene from the stored footage, thereby providing part of the live broadcast; the corresponding virtual scene can therefore be obtained by combining the rendered scenes of all scene units. In the process of constructing the virtual scene, since the coverage ranges of adjacent scene units may not join, the scene gap between adjacent rendered scenes is analyzed.
In this embodiment, in the process of analyzing the scene gap, if the scene gap is fit, that is, no gap exists between adjacent rendered scenes, it is not necessary to optimize the adjacent rendered scenes.
In this embodiment, if a gap exists between adjacent rendered scenes, the coverage angles and the adjustment angles of the two scenes need to be analyzed to determine the intermediate transition picture, and in the process of determining the adjustable angle, it needs to determine whether the corresponding coverage angle is the maximum coverage angle, and if so, it is determined that the adjustable angle is 0.
In this embodiment, for example, unit 1 and unit 2 are arranged adjacently, the first scene is determined by unit 1 and the second scene by unit 2. The first edge picture is the picture formed by the last 5 columns of pixels of the first scene, and the second edge picture is the picture formed by the first 5 columns of pixels of the second scene. The first transition picture can then be determined by a picture determining module, which is trained on samples consisting of combinations of different pixel columns and the transition pictures between them; the newly added first scene unit holding the first transition picture is arranged between unit 1 and unit 2 so as to fill the gap.
In this embodiment, a case where one adjustable angle is 0 and another adjustable angle is not 0 is analyzed to determine a ratio of a gap still to be adjusted to a preset gap, so as to determine whether a transition scene, that is, a scene in which the gap is filled, needs to be acquired.
In this embodiment, the preset ratio is set in advance; the edge colors refer to the color of the last column of pixels in the first scene and the color of the first column of pixels in the second scene, and the intermediate color is obtained by averaging the pixel values of these two columns and is used to fill the gap.
In this embodiment, the second transition picture is acquired in a manner similar to that of the first transition picture and is used to fill the gap.
The beneficial effects of the above technical scheme are: the gaps are analyzed by determining the scene gaps between adjacent rendered scenes, and the gaps are filled in different modes by determining the adjustable angle, so that reasonable rendering of the scenes is realized, and an effective basis is provided for subsequent live broadcast interaction.
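As an illustration of the gap-handling decisions above, the sketch below assumes each rendered scene is summarised by its current coverage angle, its maximum coverage angle and an edge colour; the data model, the preset ratio of 0.5 and the returned action labels are assumptions, not the patented implementation.

```python
# Hypothetical sketch of the scene-gap handling logic (assumed data model).

def adjustable_angle(scene):
    # The adjustable angle is 0 when the scene already covers its maximum angle.
    return max(scene["max_angle"] - scene["angle"], 0.0)

def fill_gap(first, second, gap, preset_gap, preset_ratio=0.5):
    if gap <= 0:                      # scene gap is fitted: nothing to optimise
        return "no_optimisation"
    a1, a2 = adjustable_angle(first), adjustable_angle(second)
    if a1 == 0 and a2 == 0:           # neither scene can expand: build a transition picture
        return "insert_first_transition_picture"
    if a1 == 0 and a2 > 0:            # expand the second scene as far as it can go
        new_gap = max(gap - a2, 0.0)
        if new_gap == 0:
            return "keep_adjusted_second_scene"
        ratio = new_gap / preset_gap  # first ratio of the remaining gap to the preset gap
        if ratio < preset_ratio:      # small remaining gap: fill with the intermediate edge colour
            mid_colour = [(c1 + c2) / 2 for c1, c2 in zip(first["edge_colour"], second["edge_colour"])]
            return ("fill_with_colour", mid_colour)
        return "insert_second_transition_picture"
    return "expand_and_recheck"       # other adjustable-angle combinations

if __name__ == "__main__":
    s1 = {"angle": 120, "max_angle": 120, "edge_colour": (200, 180, 160)}
    s2 = {"angle": 100, "max_angle": 110, "edge_colour": (100, 120, 140)}
    print(fill_gap(s1, s2, gap=15.0, preset_gap=20.0))  # fills with the intermediate colour
```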
The invention provides a multi-person interactive network live broadcast method based on a virtual scene, which constructs a first transition picture based on a first edge picture and a second edge picture and comprises the following steps:
constructing a first picture matrix of the first edge picture, and constructing a second picture matrix of the second edge picture;
extracting a last column vector of the first picture matrix and a first column vector of the second picture matrix;
matching the last column of vectors with the same row position elements in the first column of vectors to respectively obtain the pixel difference of two elements in each matching combination;
when the pixel difference is 0, rendering the transition line in the corresponding matching combination according to the pixel values of the two corresponding elements;
when the pixel difference is not 0, judging whether the pixel difference is in an averaging rendering range, if so, obtaining the pixel average value of two elements in the corresponding matching combination to render the corresponding transition line;
if the pixel difference is not in the rendering range of the mean value, extracting a row vector of a first element and a row vector of a second element in the corresponding matching pair, performing transition analysis on the two row vectors based on a vector analysis model to obtain a rendering pixel value, and rendering the corresponding transition line according to the rendering pixel value;
obtaining an initial picture based on a rendering result;
and carrying out pixel smoothing processing on the initial picture to construct a transition picture.
In this embodiment, the first picture matrix is obtained, for example, by corresponding to the last 5 columns of elements, and the second picture matrix is obtained, for example, by corresponding to the first 5 columns of elements.
In this embodiment, the last column vector is the last column of the first picture matrix and the first column vector is the first column of the second picture matrix; since the two vectors contain the same number of row elements, the elements at the same row position form a matching combination.
In this embodiment, the pixel difference refers to the difference between the pixel values of the two elements in a matching combination, and the transition line refers to the line connecting the position of the first element to the position of the second element in that combination; rendering is applied to this line.
In this embodiment, the rendering range of the average value is preset, for example, at [0,25], mainly to determine whether the difference is too large, and thus determine which rendering method is suitable.
In this embodiment, the row vector refers to a row vector in the corresponding picture matrix.
In this embodiment, the vector analysis model is trained in advance, and is obtained by training a sample based on different vector combinations and pixel values corresponding to the vectors, and the rendering vectors matched for the different vector combinations, so that a rendering pixel value can be obtained, and an initial picture can be obtained.
In this embodiment, the pixel smoothing process mainly refers to processing the pixel values of each connection line to make the pixel values thereof closer.
The beneficial effects of the above technical scheme are: by constructing the matrix and respectively extracting and matching different column vectors, reasonable rendering of transition lines is realized by adopting a mode corresponding to different pixel differences, the rationality of transition pictures is ensured, and an effective basis is provided for subsequent scene rendering and live broadcast interaction.
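A small numerical sketch of the column-matching step, assuming grayscale pixel values. The mean rendering range [0, 25] follows the example in this embodiment, while the fallback for large differences replaces the trained vector analysis model with a simple average of the two row vectors, purely for illustration.

```python
# Illustrative sketch of transition-line pixel selection (grayscale assumption).

MEAN_RENDER_RANGE = (0, 25)  # assumed mean-rendering range from the embodiment

def transition_pixels(first_matrix, second_matrix):
    last_col = [row[-1] for row in first_matrix]   # last column of the first edge picture
    first_col = [row[0] for row in second_matrix]  # first column of the second edge picture
    pixels = []
    for i, (p1, p2) in enumerate(zip(last_col, first_col)):  # same-row matching combinations
        diff = abs(p1 - p2)
        if diff == 0:
            pixels.append(p1)                       # identical pixels: render directly
        elif MEAN_RENDER_RANGE[0] <= diff <= MEAN_RENDER_RANGE[1]:
            pixels.append((p1 + p2) // 2)           # small difference: render with the mean value
        else:
            # Large difference: stand-in for the vector analysis model,
            # here simply the average of the two full row vectors.
            row_mean = (sum(first_matrix[i]) / len(first_matrix[i])
                        + sum(second_matrix[i]) / len(second_matrix[i])) / 2
            pixels.append(int(row_mean))
    return pixels

if __name__ == "__main__":
    first = [[10, 12, 14], [20, 22, 24], [200, 210, 220]]
    second = [[14, 13, 12], [60, 61, 62], [30, 31, 32]]
    print(transition_pixels(first, second))  # [14, 41, 120]
```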
The invention provides a multi-person interactive network live broadcast method based on a virtual scene, which is used for determining a default viewing position of each viewing device and acquiring a first virtual position matched with the default viewing position from the virtual scene, and comprises the following steps:
capturing whether the watching equipment enters a virtual scene or not, and if so, regarding a position corresponding to the entering moment as a default watching position;
and establishing a position relation between the default viewing position and each virtual position in the virtual scene, and acquiring a first virtual position matched with the default viewing position.
The beneficial effects of the above technical scheme are: by determining the default viewing position and the first virtual position, an effective basis is provided for the viewing device to match the rendering track, and the virtual experience effect is indirectly improved.
The invention provides a multi-user interactive network live broadcast method based on virtual scenes, which records the moving track of the watching position of each watching device and matches a corresponding virtual watching track to a corresponding watching device according to the moving track of the watching position, and comprises the following steps:
recording a movement track of a watching position of the corresponding watching equipment based on the displacement sensor;
and acquiring a virtual watching track consistent with the movement track of the watching position based on the watching mapping relation between the watching equipment and the virtual scene.
The beneficial effects of the above technical scheme are: through obtaining the virtual orbit of watching, be convenient for carry out reasonable rendering to the track, improve virtual experience effect.
The invention provides a multi-user interactive network live broadcast method based on a virtual scene, which determines a rendering mode, a corresponding voice object and a corresponding video object according to a live broadcast interactive request when the live broadcast interactive request of a corresponding watching user exists at the current moment, and comprises the following steps:
determining the request quantity of the live interaction requests existing at the current moment;
when the request number exceeds the upper limit of the live broadcast connection number, crawling information browsing records of user accounts of the watching equipment matched with the live broadcast interaction request;
determining account habits of corresponding user accounts according to the crawling result;
determining the closeness degree of the corresponding live broadcast content according to the account habit;
sorting the closeness degrees from large to small, and screening, from all the live broadcast interaction requests, a number of viewing devices equal to the upper limit of the live broadcast connection number for connection;
performing request analysis on each live broadcast interaction request connected with the viewing equipment, and determining a rendering mode needing to be matched with a request analysis result from a result-mode database;
meanwhile, request splitting is carried out on each live broadcast interaction request connected with the watching equipment, and a request object matched with each splitting request is determined from a splitting-object database;
determining the rendering condition of the mode needing to be rendered, matching the rendering condition with the request objects one by one, and matching corresponding rendering content to each request object;
wherein the request object includes: voice objects and video objects.
In this embodiment, the number of accesses existing at the same time may be counted, and therefore, after counting the number of requests, the number of requests may be compared with the upper limit of the number of live connections, so as to filter which requests may be successfully connected.
In the embodiment, the habits of the account numbers are determined by crawling the browsing records, and then whether the habits are matched with the live broadcast content is determined, so that the account numbers are screened.
In this embodiment, the result-mode database contains different parsing results and the rendering modes matching those parsing results.
In this embodiment, the splitting-object database contains different splitting results and the objects matching those splitting results.
In this embodiment, the matching of the rendering condition with the request object is mainly to render the request object to ensure the reliability of virtual rendering.
In this embodiment, the voice objects may be different voice segments, voice signals matched for different action behaviors, and the like, and the video objects may be different live content images, and the like.
The beneficial effects of the above technical scheme are: by determining the number of the requests and comparing the number with the upper limit, the requests can be effectively screened according to account habits and compactness, and by acquiring rendering modes and request objects, the objects can be effectively rendered, and virtual experience is improved.
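A minimal sketch of the screening step when the request number exceeds the live broadcast connection upper limit: requests are ranked by the closeness degree of their accounts to the live content and only the top ones, up to the connection limit, are connected. The data structures are assumptions.

```python
# Hypothetical sketch: screen interaction requests by closeness degree.

def screen_requests(requests, closeness_by_account, connection_limit):
    """Sort requests by their account's closeness degree and keep at most the connection limit."""
    if len(requests) <= connection_limit:
        return requests
    ranked = sorted(requests,
                    key=lambda r: closeness_by_account.get(r["account"], 0.0),
                    reverse=True)                    # closeness degrees from large to small
    return ranked[:connection_limit]                 # as many devices as the connection upper limit

if __name__ == "__main__":
    reqs = [{"account": "u1"}, {"account": "u2"}, {"account": "u3"}]
    closeness = {"u1": 0.4, "u2": 0.9, "u3": 0.7}
    print(screen_requests(reqs, closeness, connection_limit=2))  # u2 and u3 are connected
```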
The invention provides a multi-person interactive network live broadcast method based on a virtual scene, which carries out first rendering on a voice track of a voice object in a virtual watching track based on a rendering-needed mode and comprises the following steps:
determining voice rendering content corresponding to each voice object;
predicting a first appearance moment and a voice track of each voice object based on a virtual scene based on a voice object prediction model;
determining a first rendering thread according to voice rendering content of a corresponding voice object, and setting a trigger point of a first appearance moment on the first rendering thread;
capturing an actual rendering result which is rendered on the matched voice track in real time according to the corresponding rendering thread, comparing the actual rendering result with a preset rendering result, and determining a first rendering difference;
optimizing the voice track based on the first rendering difference.
In this embodiment, the speech object prediction model is trained in advance, and is trained for the sample based on different speech objects and the occurrence moments of different speech tracks.
In this embodiment, the content of the voice rendering is the rendering of voice for some specific behaviors, the rendering of some voice segments, and so on.
In this embodiment, the first rendering thread refers to a manner of rendering speech in which rendering is performed step by step.
In this embodiment, the first rendering difference is a difference between actual and standard, which facilitates the optimization of the track.
The beneficial effects of the above technical scheme are: by determining the voice rendering content and predicting according to the model, the rendering difference can be effectively determined, the optimization of the track is realized, and the virtual reality is ensured.
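The sketch below illustrates, under assumed data structures, how a predicted first appearance moment could serve as a trigger point on a rendering thread and how the first rendering difference between the actual and preset rendering results could be fed back to adjust the voice track; the prediction model itself is not modelled here.

```python
# Hypothetical sketch: trigger-point rendering and rendering-difference feedback for voice tracks.

def first_rendering(voice_objects, predicted, preset_results, track):
    """predicted: {object: (appearance_moment, expected_position)} from an assumed prediction model."""
    optimised = list(track)
    for obj in voice_objects:
        moment, expected_pos = predicted[obj]        # first appearance moment and expected track position
        index = min(moment, len(track) - 1)          # trigger point on the rendering thread
        actual_pos = track[index]
        # First rendering difference: offset between the actual and preset rendering results.
        diff = tuple(a - p for a, p in zip(actual_pos, preset_results.get(obj, expected_pos)))
        # Optimise the voice track by correcting the triggered point toward the preset result.
        optimised[index] = tuple(a - d for a, d in zip(actual_pos, diff))
    return optimised

if __name__ == "__main__":
    track = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
    predicted = {"greeting": (2, (2.0, 0.0))}
    preset = {"greeting": (2.0, 0.5)}
    print(first_rendering(["greeting"], predicted, preset, track))
```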
The invention provides a multi-person interactive network live broadcast method based on a virtual scene, which carries out second rendering on a video track of a video object in a virtual viewing track and comprises the following steps:
determining video rendering content corresponding to each video object;
predicting a second appearance moment of each video object based on the virtual scene and a video track based on the video object prediction model;
determining a second rendering thread according to the video rendering content of the corresponding video object, and setting a trigger point of a second appearance moment on the second rendering thread;
capturing an actual rendering result of real-time rendering on the matched video track according to the corresponding rendering thread, comparing the actual rendering result with a preset rendering result, and determining a second rendering difference;
optimizing the video track based on the second rendering difference.
In this embodiment, the video object prediction model is trained in advance, and is trained for the sample based on different video objects and the occurrence time of different video tracks.
In this embodiment, the video rendering content is video rendering for some specific behaviors, rendering for some video segments, and the like.
In this embodiment, the second rendering thread refers to a manner of rendering video, in which rendering is implemented step by step.
In this embodiment, the second rendering difference is a difference between the actual and the standard, which facilitates the optimization of the track.
The beneficial effects of the above technical scheme are: by determining the rendering content of the video and predicting according to the model, the rendering difference can be effectively determined, the optimization of the track is realized, and the virtual reality is ensured.
The invention provides a multi-user interactive network live broadcast method based on a virtual scene, which determines the closeness degree with corresponding live broadcast content according to account habit, and comprises the following steps:
according to the live broadcast label and the content information of the live broadcast content, obtaining live broadcast bias;
determining habit deviation of corresponding viewing equipment according to the habit of the account;
determining a first tight factor set of each habit bias and all live broadcast biases;
A_i = { sim(x_i, y_j) | j = 1, 2, …, m1 }

wherein A_i denotes the first tight factor set of the i-th habit bias x_i; sim(x_i, y_j) denotes the similarity between the i-th habit bias x_i and the j-th live bias y_j; sim represents the similarity operator; m1 represents the total number of live biases;

calculating the closeness degree Y according to all the first tight factor sets:

Y = (1/n1) · Σ_{i=1}^{n1} max_{1≤j≤m1} sim(x_i, y_j)

wherein max_{1≤j≤m1} sim(x_i, y_j) denotes the maximum similarity between the i-th habit bias x_i and all the live biases; x_i denotes the i-th habit bias; n1 represents the number of habit biases.
In this embodiment, live biases refer to live broadcast types, such as entertainment, sales and the like.
The beneficial effects of the above technical scheme are: the tight factor sets are determined based on the habit biases and the closeness degree is calculated from them, which provides an effective screening basis for the subsequent request determination.
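Under the reading of the formulas given above (each first tight factor set being the similarities of one habit bias against all live biases, and Y being the mean of the per-habit-bias maximum similarities), a minimal sketch is shown below; cosine similarity is only an illustrative choice for the sim operator, which the method leaves unspecified.

```python
# Illustrative closeness-degree computation (cosine similarity is an assumed choice of sim).
import math

def sim(x, y):
    """Cosine similarity between two bias vectors (stand-in for the sim operator)."""
    dot = sum(a * b for a, b in zip(x, y))
    norm = math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y))
    return dot / norm if norm else 0.0

def closeness_degree(habit_biases, live_biases):
    """Y: mean over habit biases of the maximum similarity to any live bias."""
    tight_factor_sets = [[sim(x, y) for y in live_biases] for x in habit_biases]  # the sets A_i
    return sum(max(a) for a in tight_factor_sets) / len(habit_biases)

if __name__ == "__main__":
    habit = [(1.0, 0.0), (0.6, 0.8)]        # habit biases of one account
    live = [(1.0, 0.1), (0.0, 1.0)]         # live biases of the live content
    print(round(closeness_degree(habit, live), 3))
```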
The invention provides a multi-person interactive network live broadcast system based on a virtual scene, as shown in figure 2, comprising:
the scene construction module is used for constructing a virtual scene based on the live broadcast equipment;
the position determining module is used for determining a default viewing position of each viewing device and acquiring a first virtual position matched with the default viewing position from the virtual scene;
the track determining module is used for recording the moving track of the watching position of each piece of watching equipment and matching the corresponding virtual watching track to the corresponding piece of watching equipment according to the moving track of the watching position;
the object determining module is used for determining, when a live broadcast interaction request corresponding to a viewing user is captured at the current moment, the required rendering mode, the corresponding voice object and the corresponding video object according to the live broadcast interaction request;
and the rendering module is used for performing, based on the required rendering mode, a first rendering on the voice track of the voice object in the virtual viewing track and a second rendering on the video track of the video object in the virtual viewing track, thereby realizing multi-person interactive network live broadcast.
The beneficial effects of the above technical scheme are: the virtual watching tracks of different watching devices are recorded, and the tracks of the voice objects and the video objects in different requests are determined for rendering, so that the live broadcast interactive effect is improved, the user interactive experience is met, and an effective basis is provided for realizing multi-user interactive network live broadcast.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (9)

1. A multi-person interactive network live broadcast method based on virtual scenes is characterized by comprising the following steps:
step 1: constructing a virtual scene based on live broadcast equipment;
step 2: determining a default viewing position of each viewing device, and acquiring a first virtual position matched with the default viewing position from the virtual scene;
step 3: recording the movement track of the viewing position of each viewing device, and matching a corresponding virtual viewing track to the corresponding viewing device according to the movement track of the viewing position;
step 4: when a live broadcast interaction request corresponding to a viewing user exists at the current moment, determining the required rendering mode, the corresponding voice object and the corresponding video object according to the live broadcast interaction request;
step 5: based on the required rendering mode, performing a first rendering on the voice track of the voice object in the virtual viewing track and a second rendering on the video track of the video object in the virtual viewing track, thereby realizing multi-person interactive network live broadcast;
wherein step 4 includes:
determining the request quantity of the live interaction requests existing at the current moment;
when the request number exceeds the upper limit of the live broadcast connection number, crawling information browsing records of user accounts of the watching equipment matched with the live broadcast interaction request;
determining account habits of corresponding user accounts according to the crawling result;
determining the closeness degree of the corresponding live broadcast content according to the account habit;
sorting the closeness degrees from large to small, and screening, from all the live broadcast interaction requests, a number of viewing devices equal to the upper limit of the live broadcast connection number for connection;
performing request analysis on each live broadcast interaction request connected with the viewing equipment, and determining a rendering mode needing to be matched with a request analysis result from a result-mode database;
simultaneously, request splitting is carried out on each live broadcast interaction request connected with the watching equipment, and a request object matched with each splitting request is determined from a splitting-object database;
determining the rendering condition of the mode needing to be rendered, matching the rendering condition with the request objects one by one, and matching corresponding rendering content to each request object;
wherein the request object includes: voice objects and video objects.
2. The multi-person interactive network live broadcasting method based on virtual scenes as claimed in claim 1, wherein the constructing of the virtual scenes based on live broadcasting equipment comprises:
determining unit position distribution of each scene unit in live broadcast equipment;
determining a rendering scene of each scene unit according to the unit position distribution;
determining a scene gap between adjacent rendered scenes, and judging that the adjacent rendered scenes do not need to be optimized when the scene gaps are fitted;
otherwise, determining a first coverage angle of a first scene and a second coverage angle of a second scene in the adjacent scenes, and determining a first adjustable angle of the first scene and a second adjustable angle of the second scene;
when the first adjustable angle and the second adjustable angle are both 0, acquiring a first edge picture in a first scene connected with the scene gap and acquiring a second edge picture in a second scene connected with the scene gap;
constructing a first transition picture based on the first edge picture and the second edge picture, and storing the first transition picture in a newly added first scene unit;
when the first adjustable angle is 0 and the second adjustable angle is not 0, carrying out angle expansion adjustment on the second scene, and if no scene gap exists after the angle adjustment, keeping the adjusted second scene unchanged;
if a new scene gap still exists after the angle adjustment, determining whether a transition scene needs to be acquired according to a first ratio of the new scene gap to a preset scene gap;
when the first ratio is smaller than a preset ratio, obtaining edge colors from the first scene and the second scene after angle expansion adjustment, and using the middle color as the filling of a scene gap;
when the first ratio is larger than or equal to a preset ratio, acquiring a second transition picture, and storing the second transition picture in a newly added second scene unit;
and constructing the virtual scene based on the scene adjustment results.
3. The method as claimed in claim 2, wherein constructing a first transition picture based on the first edge picture and a second edge picture comprises:
constructing a first picture matrix of the first edge picture, and constructing a second picture matrix of the second edge picture;
extracting a last column vector of the first picture matrix and a first column vector of the second picture matrix;
matching the last column of vectors with the same row position elements in the first column of vectors to respectively obtain the pixel difference of two elements in each matching combination;
when the pixel difference is 0, rendering the transition line in the corresponding matching combination according to the pixel values of the two corresponding elements;
when the pixel difference is not 0, judging whether the pixel difference is in an averaging rendering range, if so, obtaining the pixel average value of two elements in the corresponding matching combination to render the corresponding transition line;
if the pixel difference is not within the rendering range of the mean value, extracting a row vector of a first element and a row vector of a second element in the corresponding matching pair, performing transition analysis on the two row vectors based on a vector analysis model to obtain a rendering pixel value, and rendering a corresponding transition line according to the rendering pixel value;
obtaining an initial picture based on a rendering result;
and carrying out pixel smoothing processing on the initial picture to construct a transition picture.
4. The method of claim 1, wherein determining a default viewing position of each viewing device and obtaining a first virtual position from the virtual scene matching the default viewing position comprises:
capturing whether the watching equipment enters a virtual scene or not, and if so, regarding a position corresponding to the entering moment as a default watching position;
and establishing a position relation between the default viewing position and each virtual position in the virtual scene, and acquiring a first virtual position matched with the default viewing position.
5. The multi-person interactive network live broadcasting method based on virtual scenes as claimed in claim 1, wherein recording the moving track of the viewing position of each viewing device and matching the corresponding virtual viewing track to the corresponding viewing device according to the moving track of the viewing position comprises:
recording a movement track of a watching position of the corresponding watching equipment based on the displacement sensor;
and acquiring a virtual watching track consistent with the movement track of the watching position based on the watching mapping relation between the watching equipment and the virtual scene.
6. The method of claim 1, wherein the first rendering of the voice track of the voice object in the virtual viewing track based on the rendering mode required comprises:
determining voice rendering content corresponding to each voice object;
predicting a first appearance moment and a voice track of each voice object based on a virtual scene based on a voice object prediction model;
determining a first rendering thread according to voice rendering content of a corresponding voice object, and setting a trigger point of a first appearance moment on the first rendering thread;
capturing an actual rendering result of real-time rendering on the matched voice track according to the corresponding rendering thread, and comparing the actual rendering result with a preset rendering result to determine a first rendering difference;
optimizing the voice track based on the first rendering difference.
7. The virtual scene-based multi-person interactive network live broadcasting method as claimed in claim 1, wherein the second rendering of the video track of the video object in the virtual viewing track comprises:
determining video rendering content corresponding to each video object;
predicting a second appearance moment of each video object based on the virtual scene and a video track based on the video object prediction model;
determining a second rendering thread according to the video rendering content of the corresponding video object, and setting a trigger point of a second appearance moment on the second rendering thread;
capturing an actual rendering result of real-time rendering on the matched video track according to the corresponding rendering thread, comparing the actual rendering result with a preset rendering result, and determining a second rendering difference;
optimizing the video track based on the second rendering difference.
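Claims 6 and 7 follow the same pattern for voice and video objects: predict the appearance moment and track, set a trigger point on a rendering thread, compare the captured real-time result with the preset result, and optimize the track from the difference. The sketch below captures that shared loop; the class TrackRenderJob and the callables render_fn and optimise_fn are illustrative names, not parts of the claims:

```python
from dataclasses import dataclass

@dataclass
class TrackRenderJob:
    content: dict           # rendering content of the voice or video object
    appearance_time: float  # predicted first/second appearance moment
    expected: dict          # preset rendering result
    trigger_set: bool = False

    def set_trigger(self) -> None:
        # place a trigger point at the predicted appearance moment on the rendering thread
        self.trigger_set = True

    def rendering_difference(self, actual: dict) -> dict:
        # field-by-field comparison of the captured result against the preset result
        return {k: (v, actual.get(k)) for k, v in self.expected.items() if actual.get(k) != v}

def render_track(job: TrackRenderJob, render_fn, optimise_fn, track):
    """Render one voice/video track and optimize it from the rendering difference."""
    job.set_trigger()
    actual = render_fn(track, job.content)    # real-time rendering on the matched track
    diff = job.rendering_difference(actual)   # first/second rendering difference
    return optimise_fn(track, diff) if diff else track
```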
8. The multi-person interactive network live broadcasting method based on the virtual scene as claimed in claim 1, wherein determining the closeness degree to the corresponding live broadcast content according to the account habits comprises:
obtaining live broadcast biases according to the live broadcast label and the content information of the live broadcast content;
determining habit biases of the corresponding viewing device according to the account habits;
determining a first tight factor set between each habit bias and all live broadcast biases:
A_i = { sim(x_i, y_j) | j = 1, 2, ..., m1 }
wherein A_i denotes the first tight factor set of the i-th habit bias x_i; sim(x_i, y_j) denotes the similarity between the i-th habit bias x_i and the j-th live broadcast bias y_j; sim denotes the similarity operator; m1 denotes the total number of live broadcast biases;
calculating the closeness degree Y according to all the first tight factor sets:
Y = (1 / n1) * Σ_{i=1}^{n1} max_{1 ≤ j ≤ m1} sim(x_i, y_j)
wherein max_{1 ≤ j ≤ m1} sim(x_i, y_j) denotes the maximum similarity between the i-th habit bias x_i and all live broadcast biases y_j; n1 denotes the number of habit biases.
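Claim 8 aggregates per-habit similarities into a single closeness degree Y. A minimal sketch of that aggregation, assuming Y is the average of the per-habit maximum similarities as in the reconstruction above; the function name closeness_degree and the sim callable are illustrative:

```python
def closeness_degree(habit_biases, live_biases, sim):
    """Closeness of an account's habit biases to the live broadcast biases.

    sim(x, y) returns the similarity between a habit bias x and a live bias y.
    """
    if not habit_biases or not live_biases:
        return 0.0
    # first tight factor set of each habit bias: its similarities to all live biases
    factor_sets = [[sim(x, y) for y in live_biases] for x in habit_biases]
    # Y: average of the per-habit maximum similarities (assumed reading of the formula)
    return sum(max(factors) for factors in factor_sets) / len(habit_biases)
```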
9. A multi-person interactive network live broadcast system based on virtual scenes is characterized by comprising:
the scene construction module is used for constructing a virtual scene based on the live broadcast equipment;
the position determining module is used for determining a default viewing position of each viewing device and acquiring a first virtual position matched with the default viewing position from the virtual scene;
the track determining module is used for recording the movement track of the viewing position of each viewing device and matching the corresponding virtual viewing track to the corresponding viewing device according to the movement track of the viewing position;
the object determining module is used for determining a required rendering mode, a corresponding voice object and a corresponding video object according to a live broadcast interaction request when the live broadcast interaction request corresponding to a viewing user exists at the current moment;
the rendering module is used for performing first rendering on the voice track of the voice object in the virtual viewing track and performing second rendering on the video track of the video object in the virtual viewing track based on the required rendering mode, so as to realize multi-person interactive network live broadcast;
wherein the object determining module is configured for:
determining the number of live broadcast interaction requests existing at the current moment;
when the number of requests exceeds the upper limit of the live broadcast connection number, crawling the information browsing records of the user accounts of the viewing devices matched with the live broadcast interaction requests;
determining the account habits of the corresponding user accounts according to the crawling result;
determining the closeness degree to the corresponding live broadcast content according to the account habits;
sorting the closeness degrees from large to small, and screening, from all the live broadcast interaction requests, a number of viewing devices equal to the upper limit of the live broadcast connection number for connection;
performing request analysis on each live broadcast interaction request of a connected viewing device, and determining, from a result-mode database, the required rendering mode matched with the request analysis result;
meanwhile, performing request splitting on each live broadcast interaction request of a connected viewing device, and determining, from a splitting-object database, the request object matched with each split request;
determining rendering conditions of the required rendering mode, matching the rendering conditions with the request objects one by one, and matching corresponding rendering content to each request object;
wherein the request objects include: voice objects and video objects.
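The object determining module of claim 9 screens viewers by closeness degree when live interaction requests exceed the connection limit. A compact sketch of that screening step; the names select_connections and closeness_of are illustrative assumptions:

```python
def select_connections(requests, connection_limit, closeness_of):
    """Pick which viewing devices get connected when requests exceed the limit.

    requests: list of (viewing_device, interaction_request) pairs.
    closeness_of: callable returning the closeness degree between a device's
    account habits and the live broadcast content.
    """
    if len(requests) <= connection_limit:
        return requests
    ranked = sorted(requests, key=lambda item: closeness_of(item[0]), reverse=True)
    return ranked[:connection_limit]
```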
CN202211341211.2A 2022-10-31 2022-10-31 Multi-person interactive network live broadcast method and system based on virtual scene Active CN115396688B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211341211.2A CN115396688B (en) 2022-10-31 2022-10-31 Multi-person interactive network live broadcast method and system based on virtual scene


Publications (2)

Publication Number Publication Date
CN115396688A (en) 2022-11-25
CN115396688B (en) 2022-12-27

Family

ID=84115087

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211341211.2A Active CN115396688B (en) 2022-10-31 2022-10-31 Multi-person interactive network live broadcast method and system based on virtual scene

Country Status (1)

Country Link
CN (1) CN115396688B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117834949B (en) * 2024-03-04 2024-05-14 清华大学 Real-time interaction prerendering method and device based on edge intelligence

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105872575A (en) * 2016-04-12 2016-08-17 乐视控股(北京)有限公司 Live broadcasting method and apparatus based on virtual reality
CN109660818A (en) * 2018-12-30 2019-04-19 广东彼雍德云教育科技有限公司 A kind of virtual interactive live broadcast system
CN110602517A (en) * 2019-09-17 2019-12-20 腾讯科技(深圳)有限公司 Live broadcast method, device and system based on virtual environment
WO2020022405A1 (en) * 2018-07-25 2020-01-30 株式会社ドワンゴ Three-dimensional content distribution system, three-dimensional content distribution method and computer program
CN112153400A (en) * 2020-09-22 2020-12-29 北京达佳互联信息技术有限公司 Live broadcast interaction method and device, electronic equipment and storage medium
CN114071180A (en) * 2021-11-24 2022-02-18 上海哔哩哔哩科技有限公司 Live broadcast room display method and device
CN114466202A (en) * 2020-11-06 2022-05-10 中移物联网有限公司 Mixed reality live broadcast method and device, electronic equipment and readable storage medium
CN114938459A (en) * 2022-05-16 2022-08-23 完美世界征奇(上海)多媒体科技有限公司 Virtual live broadcast interaction method and device based on barrage, storage medium and equipment


Also Published As

Publication number Publication date
CN115396688A (en) 2022-11-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant