CN114466202B - Mixed reality live broadcast method, apparatus, electronic device and readable storage medium - Google Patents

Mixed reality live broadcast method, apparatus, electronic device and readable storage medium

Info

Publication number
CN114466202B
CN114466202B (application CN202011231625.0A)
Authority
CN
China
Prior art keywords
live
video
characteristic
mixed reality
live video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011231625.0A
Other languages
Chinese (zh)
Other versions
CN114466202A (en)
Inventor
熊壮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile IoT Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile IoT Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile IoT Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202011231625.0A priority Critical patent/CN114466202B/en
Publication of CN114466202A publication Critical patent/CN114466202A/en
Application granted granted Critical
Publication of CN114466202B publication Critical patent/CN114466202B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234309Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4 or from Quicktime to Realvideo
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440218Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a mixed reality live broadcast method, an apparatus, an electronic device and a readable storage medium. The method includes: collecting live video of a live scene; rendering the live video to obtain a virtual video; and transmitting the live video and the virtual video to a cloud server through a first private slice network and a second private slice network, respectively. Because the different types of video produced in a live broadcast are carried over different private slice networks, the transmission mode is fundamentally changed and the transmission rate is increased, while the mixed reality live broadcast mode makes live interaction more entertaining.

Description

Mixed reality live broadcast method, apparatus, electronic device and readable storage medium
Technical Field
The present invention relates to the field of communications, and in particular, to a mixed reality live broadcast method, apparatus, electronic device, and readable storage medium.
Background
Mixed Reality (MR) refers to a new visual environment created by merging the real world and the virtual world, in which physical and digital objects coexist and interact in real time.
With the development of electronic information technology and the continuous progress of computer technology, people's daily entertainment activities have become increasingly rich. Among them, live broadcasting is favored by a wide range of users because of its rich content and its interactivity.
Current live broadcast methods generally generate Virtual Reality (VR) images only through visual technology and carry the communication over 5G, yet the same methods would work equally over a 4G network: the transmission mode is essentially unchanged, so the transmission rate remains low and live broadcast interaction is not very entertaining.
Disclosure of Invention
The invention provides a mixed reality live broadcast method, an apparatus, an electronic device and a readable storage medium, which are intended to solve the problems of a low transmission rate and poorly entertaining live interaction caused by the essentially unchanged nature of the live broadcast transmission mode.
According to a first aspect of the present invention, there is provided a mixed reality live broadcast method, applied to an electronic device, including: collecting live video of a live scene; rendering the live video to obtain a virtual video; and transmitting the live video and the virtual video to a cloud server through a first private slice network and a second private slice network respectively.
In some embodiments, the step of rendering the live video to obtain a virtual video comprises: analyzing each frame of live image of the live video; extracting characteristic values of motion trajectories of objects in the live images; and obtaining the virtual video according to the characteristic values.
In some embodiments, the step of extracting the feature value of the motion trail of the object in the live image includes: dividing grids of each frame of live image to acquire characteristic points; filtering invalid feature points; recording the effective feature points of each frame; acquiring a motion trail of the object based on time according to the effective feature points of each frame; and extracting the characteristic value based on the motion trail.
According to a second aspect of the present invention, there is provided a mixed reality live broadcast method, applied to a server, comprising: receiving live video and virtual video respectively transmitted by an electronic device at an input end through a first private slice network and a second private slice network; and transmitting the live video and the virtual video to an electronic device at an output end.
According to a third aspect of the present invention, there is provided a mixed reality live broadcast method, applied to an electronic device, comprising: receiving live video and virtual video transmitted by a cloud server; and displaying a live video picture or a virtual video picture according to the live video and the virtual video.
In some embodiments, the virtual video picture comprises at least one of: a character model in the live broadcast picture, an object model in the live broadcast picture, and a model of the user of the electronic device.
In some embodiments, the method further comprises: receiving a switching input for the live video picture or the virtual video picture; and switching between the live video picture and the virtual video picture in response to the switching input.
According to a fourth aspect of the present invention, there is provided a mixed reality live broadcast apparatus, applied to an electronic device, comprising: an acquisition module, configured to collect live video of a live scene; a rendering module, configured to render the live video to obtain a virtual video; and a first transmission module, configured to transmit the live video and the virtual video to a cloud server through a first private slice network and a second private slice network, respectively.
According to a fifth aspect of the present invention, there is provided a mixed reality live broadcast apparatus, applied to a server, comprising: a first receiving module, configured to receive live video and virtual video respectively transmitted by an electronic device at an input end through a first private slice network and a second private slice network; and a second transmission module, configured to transmit the live video and the virtual video to an electronic device at an output end.
According to a sixth aspect of the present invention, there is provided a mixed reality live broadcast apparatus, applied to an electronic device, comprising: a second receiving module, configured to receive live video and virtual video transmitted by a cloud server; and a display module, configured to display a live video picture or a virtual video picture according to the live video and the virtual video.
According to a seventh aspect of the present invention, there is provided an electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the mixed reality live broadcast method described above.
According to an eighth aspect of the present invention, there is provided a readable storage medium having stored thereon a program or instructions which, when executed by a processor, implement the mixed reality live broadcast method described above.
Compared with the prior art, the invention has the following beneficial effects: the different types of video produced in a live broadcast are transmitted through different private slice networks, so the transmission mode is fundamentally changed and the transmission rate is increased, while the mixed reality live broadcast mode makes live interaction more entertaining.
Drawings
Fig. 1 is a schematic step flow diagram of a mixed reality live broadcast method according to an embodiment of the present invention.
Fig. 2 is a schematic step flow diagram of another mixed reality live broadcast method according to an embodiment of the present invention.
Fig. 3 is a schematic step flow diagram of another mixed reality live broadcast method according to an embodiment of the present invention.
Fig. 4 is a schematic structural diagram of a mixed reality live broadcast device according to an embodiment of the present invention.
Fig. 5 is a schematic structural diagram of another mixed reality live broadcast device according to an embodiment of the present invention.
Fig. 6 is a schematic structural diagram of still another mixed reality live broadcast device according to an embodiment of the present invention.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to fall within the scope of the invention.
As shown in fig. 1, the present invention provides a mixed reality live broadcast method applied to an electronic device, and the method includes steps S11 to S13.
Step S11, collecting live video of a live scene.
In the embodiment of the invention, optionally, the live video is obtained by shooting the live scene with a camera, a mobile phone or a similar device. The live video of the live scene contains an anchor and the background environment in which the anchor is located. The anchor may be a person or an object.
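By way of a non-limiting illustration (this sketch is not part of the patent text), collecting live video of the live scene could look roughly like the following Python snippet using OpenCV; the camera index and the number of buffered frames are assumptions made only for the example.

```python
import cv2

def collect_live_video(device_index=0, num_frames=300):
    """Capture frames of the live scene (anchor plus background) from a camera."""
    cap = cv2.VideoCapture(device_index)  # hypothetical camera index
    frames = []
    while cap.isOpened() and len(frames) < num_frames:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames
```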
Step S12, rendering the live video to obtain a virtual video.
In the embodiment of the invention, optionally, the live video is rendered (for example, reconstructed in an image engine) to form virtual data of the persons and objects in it, and the virtual video is obtained from this virtual data.
In some embodiments, optionally, step S12 specifically includes steps S121 to S123.
Step S121, analyzing each frame of live image of the live video.
Step S122, extracting a characteristic value of a motion trail of the object in the live image. Wherein step S122 specifically includes steps S1221 to S1225.
Step S1221, meshing each frame of live image to acquire feature points.
In the embodiment of the invention, each frame of live image is divided into a plurality of scales. The scales here are image scales and can be understood as resolutions; generally 8 spatial scales are used, and sampling on this plurality of spatial scales ensures that the sampled feature points cover all spatial positions and scales. On each scale, the feature points are densely collected by dividing the image into a grid.
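As a rough, non-authoritative sketch of the dense multi-scale sampling described above (the scale factor and the grid step are assumptions; the patent only fixes the number of scales at roughly 8):

```python
import cv2
import numpy as np

def dense_grid_points(frame, num_scales=8, scale_factor=1 / np.sqrt(2), step=5):
    """Sample feature points on a regular grid at several spatial scales."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    points = []  # (x, y, scale_index), coordinates mapped back to the original frame
    for s in range(num_scales):
        h, w = gray.shape
        factor = (1 / scale_factor) ** s  # back-projection factor for this scale
        for y in range(step, h - step, step):
            for x in range(step, w - step, step):
                points.append((x * factor, y * factor, s))
        # down-sample for the next (coarser) scale
        gray = cv2.resize(gray, None, fx=scale_factor, fy=scale_factor,
                          interpolation=cv2.INTER_AREA)
    return points
```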
Step S1222, filtering the invalid feature points.
In the embodiment of the invention, the invalid feature points are feature points that lack variation and therefore cannot be tracked. They can be filtered out by computing the eigenvalues of the autocorrelation matrix at each pixel and discarding the feature points whose response falls below a certain threshold.
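A minimal sketch of this eigenvalue-based filtering, assuming the threshold is set relative to the strongest response in the frame (the patent does not specify how the threshold is chosen):

```python
import cv2

def filter_invalid_points(gray, points, quality=0.001, block_size=3):
    """Keep only points whose smaller autocorrelation eigenvalue is large enough.

    points: iterable of (x, y) pixel coordinates on this scale of the image.
    """
    # Minimum eigenvalue of the 2x2 autocorrelation (structure tensor) per pixel
    min_eig = cv2.cornerMinEigenVal(gray, blockSize=block_size)
    threshold = quality * min_eig.max()  # assumed relative threshold
    valid = []
    for (x, y) in points:
        if min_eig[int(round(y)), int(round(x))] > threshold:
            valid.append((x, y))
    return valid
```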
In step S1223, the valid feature points of each frame are recorded.
In the embodiment of the invention, the effective feature point in frame t is recorded as P_t = (x_t, y_t), and in frame t+1 the corresponding effective feature point is P_(t+1) = (x_(t+1), y_(t+1)).
Step S1224, according to the effective feature points of each frame, obtaining the motion track of the object based on time.
In the embodiment of the invention, the positions of a given feature point over a plurality of consecutive frames of live images form a trajectory.
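A minimal sketch of how such time-based trajectories might be accumulated, assuming pyramidal Lucas-Kanade optical flow is used to follow each valid feature point from frame to frame (the patent does not name a particular tracking method):

```python
import cv2
import numpy as np

def track_trajectories(frames, init_points, max_len=15):
    """Follow feature points across consecutive frames; each point yields a trajectory."""
    trajectories = [[p] for p in init_points]            # one list of (x, y) per point
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    pts = np.float32(init_points).reshape(-1, 1, 2)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
        for traj, p, ok in zip(trajectories, nxt.reshape(-1, 2), status.ravel()):
            if ok and len(traj) < max_len:
                traj.append(tuple(p))                     # P_(t+1) = (x_(t+1), y_(t+1))
            # lost or finished trajectories simply stop growing in this sketch
        pts, prev = nxt, gray
    return trajectories
```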
Step S1225, extracting the feature value based on the motion trajectory.
In the embodiment of the invention, subsequent feature points are extracted along the motion trajectory, and after a feature point is extracted, the characteristic value of that feature point is calculated.
Step S123, obtaining the virtual video according to the characteristic values.
In the embodiment of the invention, a segment of live video produces a large number of motion trajectories, each trajectory corresponds to a group of characteristic values, and feature coding of all the groups of characteristic values yields the virtual video.
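For illustration only, one plausible reading of "a group of characteristic values per trajectory" is a fixed-length descriptor of normalized frame-to-frame displacements that is then feature-coded against a codebook; the descriptor layout and the codebook are assumptions, not the patent's definition:

```python
import numpy as np

def trajectory_descriptor(traj):
    """Characteristic values of one trajectory: its normalized frame-to-frame displacements."""
    pts = np.asarray(traj, dtype=np.float32)
    disp = np.diff(pts, axis=0)                        # (x_(t+1)-x_t, y_(t+1)-y_t) per step
    norm = np.linalg.norm(disp, axis=1).sum() + 1e-6   # avoid division by zero
    return (disp / norm).ravel()                       # one group of characteristic values

def encode_trajectories(trajectories, codebook, traj_len=15):
    """Feature-code all characteristic value groups against a pre-trained codebook."""
    # only fixed-length trajectories, so every descriptor has the same size
    descs = np.stack([trajectory_descriptor(t) for t in trajectories
                      if len(t) == traj_len])
    # nearest-codeword assignment; the resulting histogram is the encoded representation
    dists = np.linalg.norm(descs[:, None, :] - codebook[None, :, :], axis=2)
    codes = dists.argmin(axis=1)
    return np.bincount(codes, minlength=len(codebook))
```

In this reading, fixing the trajectory length bounds the descriptor size, which is what makes the later feature coding of all groups straightforward.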
Step S13, transmitting the live video and the virtual video to a cloud server through a first private slice network and a second private slice network, respectively.
In the embodiment of the present invention, optionally, the first private slice network and the second private slice network are both 5G network slices. Slicing is a mode of on-demand networking: a plurality of virtual end-to-end networks are carved out of one infrastructure, and each of them is logically isolated all the way from the access network through the bearer network to the core network. Transmitting the live video and the virtual video over different private slice networks fundamentally changes the transmission mode and increases the transmission rate. In addition, the live video and the virtual video are transmitted through a 5G base station and the core network.
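A conceptual sketch of the dual-path transmission follows; the endpoints are hypothetical, and it is assumed that each one is reached through its own 5G network slice (slice selection itself, e.g. via separate data sessions, is handled by the network and is outside this snippet):

```python
import socket
import threading

# Hypothetical endpoints; in a deployment each would be reachable through a
# different private slice network configured outside the application.
LIVE_ENDPOINT = ("live-slice.example.com", 9000)
VIRTUAL_ENDPOINT = ("virtual-slice.example.com", 9001)

def push_stream(endpoint, chunks):
    """Send encoded video chunks to the cloud server over one slice."""
    with socket.create_connection(endpoint) as sock:
        for chunk in chunks:
            sock.sendall(chunk)

def transmit(live_chunks, virtual_chunks):
    """Live video and virtual video travel over two different slices in parallel."""
    t1 = threading.Thread(target=push_stream, args=(LIVE_ENDPOINT, live_chunks))
    t2 = threading.Thread(target=push_stream, args=(VIRTUAL_ENDPOINT, virtual_chunks))
    t1.start(); t2.start()
    t1.join(); t2.join()
```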
As shown in fig. 2, the present invention provides another mixed reality live broadcasting method applied to a server, and the method includes steps S21 to S22.
Step S21, receiving live video and virtual video respectively transmitted by the electronic equipment at the input end through the first private slice network and the second private slice network.
Step S22, transmitting the live video and the virtual video to the electronic device at the output end.
In the embodiment of the invention, the server stores the data of the live video and the virtual video, such as the virtual data of the persons and objects at the input end, and even character data of the user at the output end. In addition, the server is responsible not only for distributing the live video and the virtual video but also for the live viewing function.
As shown in fig. 3, the present invention provides still another mixed reality live broadcasting method applied to an electronic device, and the method includes steps S31 to S32.
Step S31, receiving live video and virtual video transmitted by a cloud server.
In the embodiment of the invention, the live video and the virtual video can be received directly over a conventional network, or they can be received through private slice networks.
And step S32, displaying live video pictures or virtual video pictures according to the live video and the virtual video.
In an embodiment of the present invention, the virtual video picture includes at least one of: a character model in the live broadcast picture, an object model in the live broadcast picture, and a model of the user of the electronic device.
In some embodiments, step S32 specifically includes steps S321 to S322.
Step S321, receiving a switching input for the live video picture or the virtual video picture.
Step S322, in response to the switching input, switching between the live video picture and the virtual video picture.
In the embodiment of the invention, the user of the electronic device watches the live broadcast through the live video picture and performs virtual operations through the virtual video picture, for example swapping the product package shown in a sales live broadcast, thereby making live interaction more entertaining.
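A toy sketch of this switching behaviour, assuming a keyboard key acts as the switching input (the patent leaves the form of the input open):

```python
import cv2

def play(live_frames, virtual_frames):
    """Toggle between the live picture and the virtual picture on a key press."""
    show_virtual = False
    for live, virtual in zip(live_frames, virtual_frames):
        cv2.imshow("mixed reality live", virtual if show_virtual else live)
        key = cv2.waitKey(30) & 0xFF
        if key == ord("v"):        # switching input (assumed key binding)
            show_virtual = not show_virtual
        elif key == ord("q"):
            break
    cv2.destroyAllWindows()
```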
As shown in fig. 4, the present invention provides a mixed reality live broadcast device, which includes an acquisition module 41, a rendering module 42, and a first transmission module 43.
The acquisition module 41 is used for acquiring live video of a live scene.
In the embodiment of the invention, the live video is obtained by shooting the live scene with a camera, a mobile phone or a similar device. The live video of the live scene contains an anchor and the background environment in which the anchor is located. The anchor may be a person or an object.
The rendering module 42 is configured to render the live video to obtain a virtual video.
In the embodiment of the invention, the live video is rendered (for example, reconstructed in an image engine) to form virtual data of the persons and objects in it, and the virtual video is obtained from this virtual data.
The rendering module 42 analyzes each frame of live image of the live video, then extracts the characteristic values of the motion trajectories of objects in the live images, and obtains the virtual video according to the characteristic values.
Specifically, the feature values of the motion trail of the object in the live image are extracted as follows.
Each frame of live image is meshed to acquire feature points: it is divided into a plurality of scales, generally 8 spatial scales, and sampling on this plurality of spatial scales ensures that the sampled feature points cover all spatial positions and scales. On each scale, the feature points are densely collected by dividing the image into a grid.
Invalid feature points are filtered out. Invalid feature points are feature points that lack variation and cannot be tracked; they can be filtered out by computing the eigenvalues of the autocorrelation matrix at each pixel and discarding the feature points whose response falls below a certain threshold.
The valid feature points of each frame are recorded: the effective feature point in frame t is recorded as P_t = (x_t, y_t), and in frame t+1 the corresponding effective feature point is P_(t+1) = (x_(t+1), y_(t+1)).
A time-based motion trajectory of the object is acquired from the effective feature points of each frame: the positions of a given feature point over a plurality of consecutive frames of live images form a trajectory.
The characteristic values are extracted based on the motion trajectories: subsequent feature points are extracted along each trajectory, and after a feature point is extracted, its characteristic value is calculated. A segment of live video produces a large number of motion trajectories, each trajectory corresponds to a group of characteristic values, and feature coding of all the groups of characteristic values yields the virtual video.
The first transmission module 43 is configured to transmit the live video and the virtual video to a cloud server through a first private slice network and a second private slice network, respectively.
In the embodiment of the invention, the first private slice network and the second private slice network are both 5G network slices. Slicing is a mode of on-demand networking: a plurality of virtual end-to-end networks are carved out of one infrastructure, and each of them is logically isolated all the way from the access network through the bearer network to the core network. Transmitting the live video and the virtual video over different private slice networks fundamentally changes the transmission mode and increases the transmission rate. In addition, the live video and the virtual video are transmitted through a 5G base station and the core network.
As shown in fig. 5, the present invention provides another mixed reality live broadcast device, which includes a first receiving module 51 and a second transmitting module 52.
The first receiving module 51 is configured to receive live video and virtual video respectively transmitted by an electronic device at an input end through the first private slice network and the second private slice network.
The second transmission module 52 is configured to transmit the live video and the virtual video to an electronic device at an output terminal.
In the embodiment of the invention, the server stores the data of the live video and the virtual video, such as the virtual data of the persons and objects at the input end, and even character data of the user at the output end. In addition, the server is responsible not only for distributing the live video and the virtual video but also for the live viewing function.
As shown in fig. 6, the present invention provides a mixed reality live broadcast device, which includes a second receiving module 61 and a display module 62.
The second receiving module 61 is configured to receive live video and virtual video transmitted by the cloud server.
In the embodiment of the invention, the live video and the virtual video can be received directly over a conventional network, or they can be received through private slice networks.
The display module 62 is configured to display a live video frame or a virtual video frame according to the live video and the virtual video.
In an embodiment of the present invention, the virtual video picture includes at least one of: a character model in the live broadcast picture, an object model in the live broadcast picture, and a model of the user of the electronic device.
The display module 62 is further configured to receive a switching input for the live video picture or the virtual video picture, and to switch between the live video picture and the virtual video picture in response to the switching input.
In the embodiment of the invention, the user of the electronic device watches the live broadcast through the live video picture and performs virtual operations through the virtual video picture, for example swapping the product package shown in a sales live broadcast, thereby making live interaction more entertaining.
Referring to fig. 7, an embodiment of the invention further provides an electronic device 700, where the electronic device 700 may be a mobile phone, a tablet, a computer, a server, or the like. As shown in fig. 7, the electronic device 700 includes a processor 701 and a memory 702, and the processor 701 is electrically connected to the memory 702.
The processor 701 is the control center of the electronic device 700. It connects the various parts of the electronic device using various interfaces and lines, and it performs the functions of the electronic device and processes data by running or loading the application programs stored in the memory 702 and calling the data stored in the memory 702, thereby monitoring the electronic device as a whole.
In this embodiment, the electronic device 700 is provided with a plurality of storage partitions, including a system partition and a target partition. The processor 701 in the electronic device 700 loads the instructions corresponding to the processes of one or more application programs into the memory 702 and runs the application programs stored in the memory 702, so as to execute the respective processes of the mixed reality live broadcast method embodiment shown in fig. 1, or of the embodiment shown in fig. 2, or of the embodiment shown in fig. 3, achieving the same technical effects in each case.
Those of ordinary skill in the art will appreciate that all or part of the steps of the methods in the above embodiments may be completed by instructions, or by instructions controlling the associated hardware, and these instructions may be stored in a computer-readable storage medium and loaded and executed by a processor. To this end, an embodiment of the present invention provides a readable storage medium storing a plurality of instructions that can be loaded by a processor to perform the steps of any mixed reality live broadcast method provided by the embodiments of the present invention.
The readable storage medium may include: a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like.
Because the instructions stored in the readable storage medium can execute the steps of any mixed reality live broadcast method provided by the embodiments of the present invention, they can achieve the beneficial effects achievable by any such method; these are described in detail in the previous embodiments and are not repeated here. The specific implementation of each operation above may likewise be found in the previous embodiments and is not described again here.
The invention has the following beneficial effects: the different types of video produced in a live broadcast are transmitted through different private slice networks, so the transmission mode is fundamentally changed and the transmission rate is increased, while the mixed reality live broadcast mode makes live interaction more entertaining.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of the other embodiments.
The mixed reality live broadcast method, apparatus, readable storage medium and electronic device provided by the embodiments of the present invention have been described in detail above, and specific examples have been used to explain the principles and implementations of the invention; the description of the above embodiments is only intended to help in understanding the technical solution and core ideas of the invention. Those of ordinary skill in the art will appreciate that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents, and such modifications and substitutions do not depart from the spirit of the invention.

Claims (12)

1. A mixed reality live broadcast method, applied to an electronic device, comprising:
collecting live video of a live scene;
rendering the live video to obtain a virtual video; and
transmitting the live video and the virtual video to a cloud server through a first private slice network and a second private slice network respectively;
wherein a large number of motion trajectories exist for a segment of live video, each motion trajectory corresponds to a group of characteristic values, and the virtual video is obtained by feature coding all the groups of characteristic values; and wherein extracting the characteristic values of the motion trajectories of objects in the live video comprises the following steps:
dividing each frame of live image into a plurality of spatial scales, densely collecting feature points on each spatial scale by dividing the image into a grid, and filtering out invalid feature points, wherein the invalid feature points are untrackable feature points that lack variation.
2. The mixed reality live broadcast method of claim 1, wherein the step of rendering the live video to obtain a virtual video comprises:
analyzing each frame of live image of the live video;
extracting characteristic values of motion trajectories of objects in the live images; and
obtaining the virtual video according to the characteristic values.
3. The mixed reality live broadcast method of claim 2, wherein the step of extracting a characteristic value of a motion trajectory of an object in the live image comprises:
dividing grids of each frame of live image to acquire characteristic points;
filtering invalid feature points;
recording the effective feature points of each frame;
acquiring a time-based motion trajectory of the object according to the effective feature points of each frame; and
extracting the characteristic value based on the motion trajectory.
4. A mixed reality live broadcast method, applied to a server, comprising:
receiving live video and virtual video respectively transmitted by electronic equipment at an input end through a first private slice network and a second private slice network; and
transmitting the live video and the virtual video to electronic equipment at an output end;
wherein a large number of motion trajectories exist for a segment of live video, each motion trajectory corresponds to a group of characteristic values, and the virtual video is obtained by feature coding all the groups of characteristic values; and wherein extracting the characteristic values of the motion trajectories of objects in the live video comprises the following steps:
dividing each frame of live image into a plurality of spatial scales, densely collecting feature points on each spatial scale by dividing the image into a grid, and filtering out invalid feature points, wherein the invalid feature points are untrackable feature points that lack variation.
5. A mixed reality live broadcast method, applied to an electronic device, comprising:
receiving live video and virtual video transmitted by a cloud server; and
displaying live video pictures or virtual video pictures according to the live video and the virtual video;
wherein a large number of motion trajectories exist for a segment of live video, each motion trajectory corresponds to a group of characteristic values, and the virtual video is obtained by feature coding all the groups of characteristic values; and wherein extracting the characteristic values of the motion trajectories of objects in the live video comprises the following steps:
dividing each frame of live image into a plurality of spatial scales, densely collecting feature points on each spatial scale by dividing the image into a grid, and filtering out invalid feature points, wherein the invalid feature points are untrackable feature points that lack variation.
6. The mixed reality live broadcast method of claim 5, wherein the virtual video picture comprises at least one of: a character model, an item model, and a model of the electronic device user in the live video picture.
7. The mixed reality live broadcast method of claim 5, further comprising:
receiving a switching input for the live video picture or the virtual video picture; and
in response to the switching input, switching between the live video picture and the virtual video picture.
8. A mixed reality live broadcast apparatus, characterized by being applied to an electronic device, comprising:
the acquisition module is used for acquiring live video of a live scene;
the rendering module is used for rendering the live video to obtain a virtual video; and
the first transmission module is used for respectively transmitting the live video and the virtual video to the cloud server through a first private slice network and a second private slice network;
wherein a large number of motion trajectories exist for a segment of live video, each motion trajectory corresponds to a group of characteristic values, and the virtual video is obtained by feature coding all the groups of characteristic values; and wherein extracting the characteristic values of the motion trajectories of objects in the live video comprises the following steps:
dividing each frame of live image into a plurality of spatial scales, densely collecting feature points on each spatial scale by dividing the image into a grid, and filtering out invalid feature points, wherein the invalid feature points are untrackable feature points that lack variation.
9. A mixed reality live broadcast device, characterized by being applied to a server, comprising:
the first receiving module is used for receiving live video and virtual video which are respectively transmitted by the electronic equipment at the input end through the first private slice network and the second private slice network; and
the second transmission module is used for transmitting the live video and the virtual video to the electronic equipment at the output end;
wherein a large number of motion trajectories exist for a segment of live video, each motion trajectory corresponds to a group of characteristic values, and the virtual video is obtained by feature coding all the groups of characteristic values; and wherein extracting the characteristic values of the motion trajectories of objects in the live video comprises the following steps:
dividing each frame of live image into a plurality of spatial scales, densely collecting feature points on each spatial scale by dividing the image into a grid, and filtering out invalid feature points, wherein the invalid feature points are untrackable feature points that lack variation.
10. A mixed reality live broadcast apparatus, characterized by being applied to an electronic device, comprising:
the second receiving module is used for receiving the live video and the virtual video transmitted by the cloud server; and
the display module is used for displaying live video pictures or virtual video pictures according to the live video and the virtual video;
wherein a large number of motion trajectories exist for a segment of live video, each motion trajectory corresponds to a group of characteristic values, and the virtual video is obtained by feature coding all the groups of characteristic values; and wherein extracting the characteristic values of the motion trajectories of objects in the live video comprises the following steps:
dividing each frame of live image into a plurality of spatial scales, densely collecting feature points on each spatial scale by dividing the image into a grid, and filtering out invalid feature points, wherein the invalid feature points are untrackable feature points that lack variation.
11. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the mixed reality live broadcast method of any one of claims 1 to 3, or the steps of the mixed reality live broadcast method of claim 4, or the steps of the mixed reality live broadcast method of any one of claims 5 to 7.
12. A readable storage medium, characterized in that the readable storage medium stores thereon a program or instructions which, when executed by a processor, implement the steps of the mixed reality live broadcast method of any one of claims 1 to 3, or the steps of the mixed reality live broadcast method of claim 4, or the steps of the mixed reality live broadcast method of any one of claims 5 to 7.
CN202011231625.0A 2020-11-06 2020-11-06 Mixed reality live broadcast method, apparatus, electronic device and readable storage medium Active CN114466202B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011231625.0A CN114466202B (en) 2020-11-06 2020-11-06 Mixed reality live broadcast method, apparatus, electronic device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011231625.0A CN114466202B (en) 2020-11-06 2020-11-06 Mixed reality live broadcast method, apparatus, electronic device and readable storage medium

Publications (2)

Publication Number Publication Date
CN114466202A CN114466202A (en) 2022-05-10
CN114466202B true CN114466202B (en) 2023-12-12

Family

ID=81404717

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011231625.0A Active CN114466202B (en) 2020-11-06 2020-11-06 Mixed reality live broadcast method, apparatus, electronic device and readable storage medium

Country Status (1)

Country Link
CN (1) CN114466202B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115396688B (en) * 2022-10-31 2022-12-27 北京玩播互娱科技有限公司 Multi-person interactive network live broadcast method and system based on virtual scene

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201109590D0 (en) * 2011-06-08 2011-07-20 Cubicspace Ltd System for viewing and interacting with a virtual 3-D scene
CN106231317A (en) * 2016-09-29 2016-12-14 三星电子(中国)研发中心 Video processing, coding/decoding method and device, VR terminal, audio/video player system
CN106303555A (en) * 2016-08-05 2017-01-04 深圳市豆娱科技有限公司 A kind of live broadcasting method based on mixed reality, device and system
CN106993195A (en) * 2017-03-24 2017-07-28 广州创幻数码科技有限公司 Virtual portrait role live broadcasting method and system
CN107845129A (en) * 2017-11-07 2018-03-27 深圳狗尾草智能科技有限公司 Three-dimensional reconstruction method and device, the method and device of augmented reality
CN108629301A (en) * 2018-04-24 2018-10-09 重庆大学 A kind of human motion recognition method based on moving boundaries dense sampling and movement gradient histogram
WO2019017579A1 (en) * 2017-07-21 2019-01-24 삼성전자주식회사 Display device, display method and display system
CN109963163A (en) * 2017-12-26 2019-07-02 阿里巴巴集团控股有限公司 Internet video live broadcasting method, device and electronic equipment
CN111200859A (en) * 2018-11-19 2020-05-26 华为技术有限公司 Network slice selection method, network equipment and terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11058950B2 (en) * 2019-03-15 2021-07-13 Sony Interactive Entertainment Inc. Methods and systems for spectating characters in virtual reality views

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201109590D0 (en) * 2011-06-08 2011-07-20 Cubicspace Ltd System for viewing and interacting with a virtual 3-D scene
CN106303555A (en) * 2016-08-05 2017-01-04 深圳市豆娱科技有限公司 A kind of live broadcasting method based on mixed reality, device and system
CN106231317A (en) * 2016-09-29 2016-12-14 三星电子(中国)研发中心 Video processing, coding/decoding method and device, VR terminal, audio/video player system
CN106993195A (en) * 2017-03-24 2017-07-28 广州创幻数码科技有限公司 Virtual portrait role live broadcasting method and system
WO2019017579A1 (en) * 2017-07-21 2019-01-24 삼성전자주식회사 Display device, display method and display system
CN107845129A (en) * 2017-11-07 2018-03-27 深圳狗尾草智能科技有限公司 Three-dimensional reconstruction method and device, the method and device of augmented reality
CN109963163A (en) * 2017-12-26 2019-07-02 阿里巴巴集团控股有限公司 Internet video live broadcasting method, device and electronic equipment
CN108629301A (en) * 2018-04-24 2018-10-09 重庆大学 A kind of human motion recognition method based on moving boundaries dense sampling and movement gradient histogram
CN111200859A (en) * 2018-11-19 2020-05-26 华为技术有限公司 Network slice selection method, network equipment and terminal

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Mobile VR Applications in 5G Networks; Liu Jie et al.; Telecommunications Science; Vol. 34, No. 10; full text *
Data-Centric Video for Mixed Reality; Peter Gusev et al.; 2019 28th International Conference on Computer Communication and Networks (ICCCN); full text *
Research on 5G SA Medical Private Network Solutions for VR Visitation Services; Huang Shansong et al.; Application of Electronic Technique; No. 6; full text *

Also Published As

Publication number Publication date
CN114466202A (en) 2022-05-10

Similar Documents

Publication Publication Date Title
CN108010037B (en) Image processing method, device and storage medium
CN105847718B (en) Live video barrage display methods based on scene Recognition and its display device
CN105791977B (en) Virtual reality data processing method, equipment and system based on cloud service
CN108632632B (en) Live webcast data processing method and device
CN104363475B (en) A kind of methods, devices and systems of spectators' packet associated
WO2019214371A1 (en) Image display method and generating method, device, storage medium and electronic device
CN108635863B (en) Live webcast data processing method and device
CN108880983B (en) Real-time voice processing method and device for virtual three-dimensional space
CN110740290A (en) Monitoring video previewing method and device
CN114466202B (en) Mixed reality live broadcast method, apparatus, electronic device and readable storage medium
CN113259764A (en) Video playing method, video playing device, electronic equipment and video playing system
CN115761090A (en) Special effect rendering method, device, equipment, computer readable storage medium and product
CN108881119A (en) A kind of methods, devices and systems of video concentration
CN116489424A (en) Live background generation method and device, electronic equipment and computer readable medium
CN113365130A (en) Live broadcast display method, live broadcast video acquisition method and related devices
CN115690664A (en) Image processing method and device, electronic equipment and storage medium
CN113014745B (en) Video image noise reduction method and device, storage medium and electronic equipment
CN113891057A (en) Video processing method and device, electronic equipment and storage medium
CN110166825B (en) Video data processing method and device and video playing method and device
CN110662099B (en) Method and device for displaying bullet screen
CN112261422A (en) Simulation remote live broadcast stream data processing method suitable for broadcasting and television field
CN117596373B (en) Method for information display based on dynamic digital human image and electronic equipment
CN112383788B (en) Live broadcast real-time image extraction system and method based on intelligent AI technology
CN117291810B (en) Video frame processing method, device, equipment and storage medium
CN112182299B (en) Method, device, equipment and medium for acquiring highlight in video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant