CN114401415B - Live broadcast control method, live broadcast control device, computer equipment and storage medium - Google Patents

Live broadcast control method, live broadcast control device, computer equipment and storage medium

Info

Publication number: CN114401415B
Application number: CN202210044420.4A
Authority: CN (China)
Prior art keywords: real scene, real, data, behavior data, scene
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN114401415A
Inventor: 吴端艺
Current Assignee: Beijing Zitiao Network Technology Co Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Beijing Zitiao Network Technology Co Ltd
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202210044420.4A
Publication of CN114401415A (application) and CN114401415B (grant)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21: Server components or server architectures
    • H04N21/218: Source of audio or video content, e.g. local disk arrays
    • H04N21/2187: Live feed
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234: Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312: Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the present disclosure provide a live broadcast control method and apparatus, a computer device, and a storage medium. The live broadcast control method is used to control a live picture presenting an augmented reality (AR) live scene, and includes: acquiring first interaction behavior data between a real anchor in a first real scene and a first real scene object; generating, based on the first interaction behavior data, second interaction behavior data between a virtual anchor to be blended into a second real scene and a first three-dimensional model corresponding to a second real scene object, where the first three-dimensional model is made of transparent material; and generating, based on the second interaction behavior data and scene data of the second real scene, live picture data of the virtual anchor interacting with the second real scene object in the second real scene.

Description

Live broadcast control method, live broadcast control device, computer equipment and storage medium
Technical Field
The present disclosure relates to the field of augmented reality (AR) technology, and in particular to a live broadcast control method and apparatus, a computer device, and a storage medium.
Background
In scenarios such as virtual live streaming, a real person can control a virtual character to perform related actions, and the virtual character is rendered and displayed. In some cases the virtual character can be displayed in AR form when rendered; however, because the virtual character does not actually exist in the real scene, the virtual character generally cannot manipulate objects in the real scene in the presented AR scene, so the interactivity between the virtual character and the real scene is poor.
Disclosure of Invention
The embodiment of the disclosure at least provides a live broadcast control method, a live broadcast control device, computer equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides a live broadcast control method for controlling a live picture presenting an augmented reality (AR) live scene. The live broadcast control method includes: acquiring first interaction behavior data between a real anchor in a first real scene and a first real scene object; generating, based on the first interaction behavior data, second interaction behavior data between a virtual anchor to be blended into a second real scene and a first three-dimensional model corresponding to a second real scene object, where the first three-dimensional model is made of transparent material; and generating, based on the second interaction behavior data and scene data of the second real scene, live picture data of the virtual anchor interacting with the second real scene object in the second real scene.
In an alternative embodiment, after generating the second interaction behavior data, the method further includes: updating the first three-dimensional model in response to a state change of the first real scene object or the second real scene object, where the state data of the updated first three-dimensional model differs from the state data of the first three-dimensional model before the update.
In an alternative embodiment, acquiring the first interaction behavior data between the real anchor in the first real scene and the first real scene object includes: acquiring first interaction behavior data of the real anchor interacting with the first real scene object using a first real scene tool, where the first interaction behavior data includes first action data and first state data of the first real scene tool relative to the first real scene object. Generating the second interaction behavior data based on the first interaction behavior data includes: generating, based on the first interaction behavior data, second interaction behavior data of the virtual anchor interacting with the first three-dimensional model using a first virtual scene tool, where the second interaction behavior data includes second action data and second state data of the first virtual scene tool relative to the first three-dimensional model.
In an alternative embodiment, acquiring the first interaction behavior data of the real anchor interacting with the first real scene object using the first real scene tool includes: acquiring first interaction behavior data of the real anchor picking up a first real scene item from the first real scene object using the first real scene tool. Generating the second interaction behavior data of the virtual anchor interacting with the first three-dimensional model using the first virtual scene tool includes: generating second interaction behavior data of the virtual anchor picking up, from the second real scene object, a second three-dimensional model corresponding to a second real scene item using the first virtual scene tool.
In an alternative embodiment, the method further includes: acquiring third interaction behavior data of the real anchor interacting the picked-up first real scene item with a third real scene item in the first real scene using the first real scene tool; generating, based on the third interaction behavior data, fourth interaction behavior data of the virtual anchor interacting the second three-dimensional model with a third three-dimensional model corresponding to a fourth real scene item in the second real scene using the first virtual scene tool, together with a state change special effect of the second three-dimensional model after the interaction; generating, based on the fourth interaction behavior data and the state change special effect of the second three-dimensional model, second live picture data of the interaction between the second three-dimensional model and the third three-dimensional model; and fusing the second live picture data with the scene data of the second real scene to generate fourth live picture data.
In an alternative embodiment, the first real scene object and the second real scene object are hot pots; the first virtual scene tool and the first real scene tool are tableware; the first real scene item and the second real scene item are food materials; and the third real scene item and the fourth real scene item are dips.
In an alternative embodiment, generating the second live picture data of the virtual anchor interacting with the second real scene object in the second real scene includes: in response to the distance between the second three-dimensional model and a virtual camera corresponding to the AR live scene being smaller than a set threshold, generating special effect data corresponding to the second real scene object, and determining the second live picture data containing the special effect data.
In a second aspect, an embodiment of the present disclosure further provides a live broadcast control apparatus, including: an acquisition module, configured to acquire first interaction behavior data between a real anchor in a first real scene and a first real scene object; a first generation module, configured to generate, based on the first interaction behavior data, second interaction behavior data between a virtual anchor to be blended into a second real scene and a first three-dimensional model corresponding to a second real scene object, where the first three-dimensional model is made of transparent material; and a second generation module, configured to generate, based on the second interaction behavior data and scene data of the second real scene, live picture data of the virtual anchor interacting with the second real scene object in the second real scene.
In an alternative embodiment, the first generation module is further configured to, after generating the second interaction behavior data: update the first three-dimensional model in response to a state change of the first real scene object or the second real scene object, where the state data of the updated first three-dimensional model differs from the state data of the first three-dimensional model before the update.
In an alternative embodiment, when acquiring the first interaction behavior data between the real anchor in the first real scene and the first real scene object, the acquisition module is configured to: acquire first interaction behavior data of the real anchor interacting with the first real scene object using a first real scene tool, where the first interaction behavior data includes first action data and first state data of the first real scene tool relative to the first real scene object. When generating the second interaction behavior data based on the first interaction behavior data, the first generation module is configured to: generate, based on the first interaction behavior data, second interaction behavior data of the virtual anchor interacting with the first three-dimensional model using a first virtual scene tool, where the second interaction behavior data includes second action data and second state data of the first virtual scene tool relative to the first three-dimensional model.
In an alternative embodiment, when acquiring the first interaction behavior data of the real anchor interacting with the first real scene object using the first real scene tool, the acquisition module is configured to: acquire first interaction behavior data of the real anchor picking up a first real scene item from the first real scene object using the first real scene tool. When generating the second interaction behavior data of the virtual anchor interacting with the first three-dimensional model using the first virtual scene tool, the first generation module is configured to: generate second interaction behavior data of the virtual anchor picking up, from the second real scene object, a second three-dimensional model corresponding to a second real scene item using the first virtual scene tool.
In an alternative embodiment, the acquisition module is further configured to: acquire third interaction behavior data of the real anchor interacting the picked-up first real scene item with a third real scene item in the first real scene using the first real scene tool; generate, based on the third interaction behavior data, fourth interaction behavior data of the virtual anchor interacting the second three-dimensional model with a third three-dimensional model corresponding to a fourth real scene item in the second real scene using the first virtual scene tool, together with a state change special effect of the second three-dimensional model after the interaction; generate, based on the fourth interaction behavior data and the state change special effect of the second three-dimensional model, second live picture data of the interaction between the second three-dimensional model and the third three-dimensional model; and fuse the second live picture data with the scene data of the second real scene to generate fourth live picture data.
In an alternative embodiment, the first real scene object and the second real scene object are hot pots; the first virtual scene tool and the first real scene tool are tableware; the first real scene item and the second real scene item are food materials; and the third real scene item and the fourth real scene item are dips.
In an alternative embodiment, when generating the second live picture data of the virtual anchor interacting with the second real scene object in the second real scene, the acquisition module is configured to: in response to the distance between the second three-dimensional model and a virtual camera corresponding to the AR live scene being smaller than a set threshold, generate special effect data corresponding to the second real scene object, and determine the second live picture data containing the special effect data.
In a third aspect, an optional implementation of the present disclosure further provides a computer device including a processor and a memory, where the memory stores machine-readable instructions executable by the processor, and the processor is configured to execute the machine-readable instructions stored in the memory; when the machine-readable instructions are executed by the processor, the processor performs the steps in the first aspect or in any possible implementation of the first aspect.
In a fourth aspect, an optional implementation of the present disclosure further provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed, performs the steps in the first aspect or in any possible implementation of the first aspect.
With the live broadcast control method and apparatus, computer device, and storage medium provided by the embodiments of the present disclosure, after the first interaction behavior data between the real anchor and the first real scene object is obtained, it can be used to generate second interaction behavior data between the virtual anchor blended into the second real scene and the first three-dimensional model, which is made of transparent material and simulates the second real scene object. When the second interaction behavior data is blended into the second real scene, live picture data of the virtual anchor interacting with the second real scene object in the second real scene can be presented, realizing interaction control between the virtual character and the real scene object and enhancing the realistic interaction effect in the AR scene.
For the effects of the live broadcast control apparatus, the computer device, and the computer-readable storage medium, refer to the description of the live broadcast control method above; details are not repeated here.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings required for the embodiments are briefly described below. The drawings are incorporated in and constitute a part of the specification; they show embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It is to be understood that the following drawings show only certain embodiments of the present disclosure and are therefore not to be considered limiting of its scope; a person of ordinary skill in the art may derive other related drawings from them without inventive effort.
Fig. 1 is a flowchart of a live broadcast control method provided in an embodiment of the present disclosure;
Fig. 2 is a schematic diagram of a first real scene provided in an embodiment of the present disclosure;
Fig. 3 is a schematic diagram of a real anchor interacting with a first real scene object in a first real scene, provided in an embodiment of the present disclosure;
Fig. 4 is a schematic diagram of a live picture rendered and displayed using live picture data, provided in an embodiment of the present disclosure;
Fig. 5 is a schematic diagram of a live broadcast control apparatus provided in an embodiment of the present disclosure;
Fig. 6 is a schematic structural diagram of a computer device provided in an embodiment of the present disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. The components of the disclosed embodiments generally described and illustrated herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be made by those skilled in the art based on the embodiments of this disclosure without making any inventive effort, are intended to be within the scope of this disclosure.
With AR technology, a virtual character can be rendered and displayed over a scene picture of a real scene. However, because the virtual character cannot actually manipulate a real scene object in the real scene, for example, it cannot pick up or move an article that actually exists in the real scene, nor change the article's state, interactivity between the virtual character and real scene objects is lacking.
Based on the above study, the embodiments of the present disclosure provide a live broadcast control method in which a first real scene contains a real anchor that actually exists and a first real scene object. After the first interaction behavior data between the real anchor and the first real scene object is obtained, it can be used to generate second interaction behavior data between a virtual anchor to be blended into a second real scene and a first three-dimensional model that is made of transparent material and simulates a second real scene object. The second interaction behavior data is then fused with the scene data of the second real scene to display live picture data of the virtual anchor interacting with the second real scene object in the second real scene, thereby realizing interaction control between the virtual character and the real scene object.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
To facilitate understanding of the present embodiment, the live broadcast control method disclosed in the embodiments of the present disclosure is first described in detail. The execution body of the live broadcast control method provided by the embodiments of the present disclosure is generally a computer device with certain computing capability. Specifically, since the live broadcast control method may be applied to AR scenes, the computer device may include, for example but not limited to, devices that support AR display, such as mobile devices (mobile phones, tablet computers, and the like) or AR wearable devices such as AR glasses. In some possible implementations, the live broadcast control method may be implemented by a processor invoking computer-readable instructions stored in a memory.
The live broadcast control method of the embodiments of the present disclosure is described in detail below. The live broadcast control method provided by the embodiments of the present disclosure can be used to display live pictures of an AR live scene. In one possible case, when the method is applied to a virtual live streaming scenario, the live picture can be displayed at the client of a user watching the virtual live stream.
The AR live scene in the embodiments of the present disclosure may be: 1) a live video scene based on a live video server, where, for example, after 3D video frame data is generated (such as the live picture data of the virtual anchor interacting with the second real scene object in the second real scene, described in the embodiments below), the live video frame data is pushed directly to each viewer end through the live video server for display; or 2) in-application live broadcast using a three-dimensional (3D) engine in an application (APP), where, for example, the application server sends the 3D video data to be rendered (such as the second interaction behavior data plus the scene data of the second real scene, described in the embodiments below) to the application client of each user, and the application client renders the 3D video data locally with the 3D engine, thereby displaying the 3D video frames. A rough sketch of these two delivery modes follows.
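The following Python sketch is purely illustrative: the class and method names (LiveServer.push, AppClient.send_scene) are hypothetical stand-ins for whatever transport the deployment uses, not an API from the disclosure.

```python
class LiveServer:
    """Case 1): the server renders 3D frames and pushes them to viewer ends."""
    def push(self, frame):
        print("pushing rendered frame to all viewer ends:", frame)

class AppClient:
    """Case 2): the application client renders scene data with its own 3D engine."""
    def send_scene(self, scene):
        print("client receives scene data and renders locally:", scene)

def deliver(payload, mode, server, clients):
    if mode == "server_rendered":
        server.push(payload)          # push finished live picture data
    elif mode == "client_rendered":
        for c in clients:
            c.send_scene(payload)     # ship 3D data to each client's engine

deliver({"behavior": "...", "scene": "..."}, "client_rendered",
        LiveServer(), [AppClient()])
```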
As shown in fig. 1, a flowchart of a live broadcast control method provided in an embodiment of the present disclosure mainly includes the following steps S101 to S103:
S101: acquiring first interaction behavior data between a real anchor in a first real scene and a first real scene object;
S102: generating, based on the first interaction behavior data, second interaction behavior data between a virtual anchor to be blended into a second real scene and a first three-dimensional model corresponding to a second real scene object, where the first three-dimensional model is made of transparent material;
S103: generating, based on the second interaction behavior data and scene data of the second real scene, live picture data of the virtual anchor interacting with the second real scene object in the second real scene.
With respect to S101, the first real scene, together with the real anchor and the first real scene object in it, is described first.
The real anchor is a person who actually exists and who, besides performing the live broadcast role, can move around in the first real scene. The first real scene is the real scene where the real anchor is located, such as a restaurant or a kitchen. The first real scene also contains a first real scene object that actually exists; for example, if the first real scene object is a hot pot, the real anchor can interact with it through different actions such as picking up and moving. For example, the real anchor may grip food in the hot pot.
For example, referring to fig. 2, a schematic diagram of a first real scene provided by an embodiment of the disclosure is shown. The first real scene includes a real anchor 21 and an item 22; item 22 is a hot pot that actually exists and can serve as the first real scene object. Soup stock is in the hot pot 22.
In the first real scene, when the real anchor manipulates the first real scene object, corresponding first interaction behavior data between the real anchor and the first real scene object can be generated. Based on the obtained first interaction behavior data, second interaction behavior data of the virtual anchor performing the interaction behavior, in the second real scene, with the first three-dimensional model corresponding to the second real scene object can be generated. The first real scene object and the second real scene object are the same type of object, such as hot pots.
When the first interaction behavior data is acquired, the behavior data of the real anchor captured by behavior capture devices and the state data of the first real scene object manipulated by the real anchor can be acquired. The behavior data includes the captured motion data and/or audio data.
Here, the behavior data is data indicating a series of behaviors of the real anchor captured by various motion capture devices, and the state data is data, monitored by monitoring devices (camera devices, various detection devices, and the like), indicating state changes of the manipulated first real scene object, such as the change in the water level of the hot pot after food is taken out of it with chopsticks. The water level change as a state change is described in detail below and is not repeated here.
The motion capture devices for acquiring the behavior data may specifically include sensor devices that sense the motion of each part of the real anchor's body, such as motion capture gloves, a motion capture helmet (for capturing facial expression movements), and sound capture devices (such as a microphone for capturing speech and a throat microphone for capturing vocal activity). The behavior data includes the motion data and/or audio data captured by the motion capture devices for the real anchor; thus, the action data of the real anchor can be generated by capturing the real anchor's actions, audio, and so on. Alternatively, the motion capture device may include a camera: the camera shoots the real anchor to obtain video frame images, semantic features of human motion are recognized in the video frame images, and the behavior data of the real anchor can be determined accordingly.
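As a minimal sketch of assembling one behavior-data record from such capture streams (all field names and units below are illustrative assumptions, not formats defined by the disclosure):

```python
def collect_behavior_data(mocap_frames, audio_samples, water_level_mm):
    """Merge motion-capture, audio, and object-monitoring streams into one
    behavior-data record for a single capture tick."""
    return {
        "actions": mocap_frames,      # e.g. glove/helmet joint readings
        "audio": audio_samples,       # microphone / throat-mic samples
        "object_state": {"water_level_mm": water_level_mm},  # monitored pot state
    }

record = collect_behavior_data(
    mocap_frames=[{"joint": "right_wrist", "pitch_deg": 35.0}],
    audio_samples=[0.01, -0.02, 0.03],
    water_level_mm=118.0,
)
print(record)
```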
Next, a specific manner of acquiring the first interaction behavior data in the embodiment of the present disclosure will be described.
In a specific implementation, the first interaction behavior data between the real anchor and the first real scene object in the first real scene may be acquired as follows: acquiring first interaction behavior data of the real anchor interacting with the first real scene object using a first real scene tool, where the first interaction behavior data includes first action data and first state data of the first real scene tool relative to the first real scene object.
Here, the interaction between the real anchor and the first real scene object described above may also be assisted by the first real scene tool. Take the example above of interaction with the hot pot, which includes gripping food in the hot pot: in the way hot pot is usually eaten, the food is not gripped directly by hand in the first real scene but is taken out of the pot with chopsticks, a strainer ladle, and the like. Tableware such as chopsticks and strainers is the first real scene tool used by the real anchor, and the real anchor interacting with the first real scene object using the first real scene tool includes, as described above, the real anchor holding chopsticks to grip food from the hot pot.
For example, referring to fig. 3, a schematic diagram of a real anchor interacting with a first real scene object in a first real scene is provided in an embodiment of the disclosure. In this schematic view, a real anchor 31 holds chopsticks 32 and inserts them into a hot pot 33.
Specifically, the acquired first interaction data includes the first interaction behavior data of the real anchor interacting with the hot pot using tableware. Accordingly, the first interaction behavior data includes first action data of the first real scene tool (the tableware) relative to the first real scene object (the hot pot), such as the action data of inserting the tableware into the hot pot, and first state data of the tool relative to the object, such as state data indicating that part of the tableware is occluded because it is inserted into the hot pot.
The first state data reflects occlusion. Referring to fig. 3, after the real anchor 31 inserts the chopsticks 32 into the hot pot 33, the square-filled portion of the chopsticks 32 is not visible from the shooting angle shown in fig. 3 because of the occlusion by the hot pot 33. When the first interaction behavior data is acquired, first state data reflecting this occlusion between the first real scene tool and the first real scene object can thus be obtained.
Thus, the first interaction behavior data can be obtained.
For S102, after obtaining the first interaction behavior data between the real anchor and the first real scene object in the first real scene, the first interaction behavior data may be used to determine the second interaction behavior data. The second interaction behavior data is used for rendering and displaying interaction actions between the virtual anchor and the second real scene object.
Following the example in S101 above, if the first interaction behavior data includes interaction behavior data generated when the real anchor interacts with the first real scene object using the first real scene tool, then, when generating the second interaction behavior data based on the first interaction behavior data, second interaction behavior data of the virtual anchor interacting with the first three-dimensional model using a first virtual scene tool may be generated.
The first virtual scene tool corresponds to the first real scene tool; they may be the same type of prop, such as virtual tableware. If the real anchor holds tableware in the first real scene and inserts the tableware into the hot pot, the corresponding second interaction behavior data can be rendered to show the virtual anchor holding the first virtual scene tool and inserting it into the first three-dimensional model. The articles actually existing in the second real scene include the second real scene object, which is of the same kind as the first real scene object; furthermore, the placement position of the second real scene object in the second real scene is the same as that of the first real scene object in the first real scene. The first three-dimensional model is the three-dimensional model corresponding to the second real scene object and is made of transparent material: when it is overlaid on the second real scene object for rendering and display, it cannot be seen by the naked eye, but the second interaction behavior data can reflect the occlusion relationship between the first three-dimensional model and the virtual anchor or the first virtual scene tool. For example, when the chopsticks are inside the transparent three-dimensional model of the hot pot, the part occluded by the water in the hot pot is invisible.
Corresponding to the first interaction behavior data, the second interaction behavior data includes second action data and second state data of the first virtual scene tool relative to the first three-dimensional model. The second action data may be used to render the action of the virtual anchor holding the first virtual scene tool and inserting it into the first three-dimensional model. The second state data may reflect the occlusion of the first virtual scene tool by the first three-dimensional model.
The second state data can improve the realism of the virtual anchor's interaction with the second real scene object, which is particularly reflected in a more realistic rendered live picture when the virtual anchor and the first three-dimensional model are rendered and displayed. Specifically, since the first three-dimensional model is made of transparent material, it does not occlude the second real scene object in the second real scene when rendered. However, since the second state data reflects the occlusion of the first virtual scene tool by the second real scene object, when the first virtual scene tool is rendered, the effect of it being inserted into and occluded by the first three-dimensional model can be displayed. This conforms to the occlusion logic that position changes produce between objects in the real world, thereby improving realism in rendering and display.
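A minimal sketch of this occlusion logic, assuming a vertically held tool and a flat water surface in the transparent model (the image-space convention, with y growing downward, is an illustrative assumption):

```python
def visible_fraction(tool_top_y, tool_bottom_y, water_surface_y):
    """Fraction of the tool still visible above the transparent model's
    water surface; y grows downward in image space."""
    total = tool_bottom_y - tool_top_y
    if total <= 0:
        return 1.0
    hidden = max(0.0, tool_bottom_y - max(tool_top_y, water_surface_y))
    return 1.0 - hidden / total

# Chopsticks spanning y=100..300, water surface at y=220:
print(visible_fraction(100, 300, 220))  # 0.6, i.e. the lower 40% is occluded
```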
In another embodiment of the present disclosure, for S101, when the first interaction behavior data of the real anchor interacting with the first real scene object using the first real scene tool is acquired, it may further include first interaction behavior data of the real anchor picking up a first real scene item from the first real scene object using the first real scene tool.
Here, the first real scene item that the real anchor can take from the first real scene object using the first real scene tool may include food material. The first interaction behavior data may also include third action data and third state data reflecting the first real scene item relative to the first real scene object. The third action data may, for example, characterize the action of the first real scene item being lifted away from the first real scene object; the third state data may characterize the first real scene item changing, as the first real scene tool is lifted, from being occluded by the first real scene object, to being progressively less occluded, and finally to being picked up.
Correspondingly, the interaction between the virtual anchor and the first three-dimensional model may similarly include the virtual anchor picking "food material" from the first three-dimensional model using the first virtual scene tool. Since the virtual anchor and the first three-dimensional model are virtual and do not actually exist, the "food material" described here corresponds to the second real scene item but is actually a virtual three-dimensional model, referred to herein as the second three-dimensional model.
Here, for the real anchor in the first real scene, the kinds of first real scene items (food materials) that can be picked up may be rich, such as vegetables, bean products, and meat products. So that the interaction between the virtual anchor and the first three-dimensional model simulates, as realistically as possible, the interaction between the real anchor and the first real scene object in the first real scene, the second three-dimensional models used can correspond one-to-one to the kinds of first real scene items; for example, second three-dimensional models corresponding to vegetables, bean products, meat products, and so on are prepared in advance. After the kind of first real scene item picked up by the real anchor in the first real scene is determined, the second three-dimensional model corresponding to that item can be selected from the predetermined second three-dimensional models, so that the details of the interaction between the real anchor and the first real scene item in the first real scene are reflected more realistically.
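A one-to-one mapping from item kind to a pre-built second three-dimensional model can be as simple as a lookup table; the kind labels, file paths, and fallback below are placeholders, not assets named by the disclosure:

```python
PREBUILT_FOOD_MODELS = {
    "vegetable": "models/vegetable.glb",
    "bean_product": "models/tofu.glb",
    "meat_product": "models/beef_slice.glb",
}

def second_model_for(item_kind: str) -> str:
    # fall back to a generic model for kinds that were not prepared in advance
    return PREBUILT_FOOD_MODELS.get(item_kind, "models/generic_food.glb")

print(second_model_for("meat_product"))
```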
After the first interaction behavior data is determined, the second interaction behavior data may also be generated. Specifically, second interaction behavior data of the virtual anchor picking up, from the second real scene object, the second three-dimensional model corresponding to the second real scene item using the first virtual scene tool may be generated.
In this way, the second interaction behavior data can be used to render the effect of the first virtual scene tool used by the virtual anchor picking up the second three-dimensional model from the second real scene object. While being picked up, the second three-dimensional model can be displayed changing from being occluded, to being progressively less occluded, to being picked up as the first virtual scene tool is lifted.
After the second interaction behavior data is determined, the first three-dimensional model may also be updated in response to a state change of the first real scene object or the second real scene object, where the state data of the updated first three-dimensional model differs from the state data of the first three-dimensional model before the update.
Here, the state change of the first real scene object or the second real scene object may include a change of the water level inside it. For example, when the first real scene tool is inserted into the first real scene object, or the first real scene item is picked out of it, or moisture evaporates over time, the water level in the first real scene object changes. In this case, the first three-dimensional model may be updated, for example according to the change of the water level in the first real scene object, to determine the water level change to be rendered in the first three-dimensional model, thereby improving the realism of the rendered picture.
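A sketch of this update step, assuming the monitored quantity is the water level (the 1 mm noise gate is an arbitrary illustrative choice, not a value from the disclosure):

```python
class TransparentPotModel:
    def __init__(self, water_level_mm: float):
        self.water_level_mm = water_level_mm

    def apply_state_change(self, measured_level_mm: float) -> bool:
        """Sync the model's rendered water surface with the monitored pot;
        returns True when an update was actually applied."""
        if abs(measured_level_mm - self.water_level_mm) > 1.0:
            self.water_level_mm = measured_level_mm
            return True
        return False

model = TransparentPotModel(water_level_mm=120.0)
print(model.apply_state_change(114.5))  # True: e.g. an item was picked out
```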
For S103, after the second interaction behavior data is obtained in the manner above, the live picture data of the virtual anchor interacting with the second real scene object in the second real scene may be generated by combining it with the obtained scene data of the second real scene.
Here, the image data of the second real scene may be acquired by means of image acquisition, and used as scene data of the second real scene. According to the embodiment described in S102 above, the second interaction behavior data may be used to render and display the interaction behavior between the virtual anchor to be incorporated into the second real scene and the first three-dimensional model, so that the interaction between the virtual anchor and the first three-dimensional model in the second real scene may be rendered and displayed in the live broadcast picture by using the second interaction behavior data and the scene data of the second real scene, thereby achieving the effect of AR display.
The live picture data may be generated in, but not limited to, the two specific manners described in (a) and (b) below:
(a) Generating live picture data in a superposition (overlay) mode.

In this mode, the second interaction behavior data and the scene data remain independent, and each forms part of the live picture data. When the live picture is rendered from the live picture data, rendering with the second interaction behavior data displays the picture of the virtual anchor interacting with the first three-dimensional model, while rendering with the scene data displays the second real scene object that actually exists. Because the first three-dimensional model corresponds to the second real scene object and is made of transparent material, when the two kinds of data are displayed superimposed, the first three-dimensional model is aligned with the second real scene object but invisible to the naked eye, so no ghosting occurs, and the interaction effect between the virtual anchor and the second real scene object is presented. A per-pixel sketch follows.
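This sketch reduces frames to flat RGBA lists for brevity; a real renderer would also use the transparent model's depth for occlusion, as described above. All shapes here are assumptions for illustration.

```python
def overlay(scene_pixels, behavior_pixels):
    """Alpha-composite the rendered behavior layer over the real-scene frame.
    Both inputs are equal-length lists of (r, g, b, a) with components in 0..1."""
    out = []
    for (sr, sg, sb, _), (br, bg, bb, ba) in zip(scene_pixels, behavior_pixels):
        out.append((br * ba + sr * (1 - ba),
                    bg * ba + sg * (1 - ba),
                    bb * ba + sb * (1 - ba),
                    1.0))
    return out

# one scene pixel composited with one half-transparent behavior pixel:
print(overlay([(0.2, 0.3, 0.4, 1.0)], [(1.0, 0.0, 0.0, 0.5)]))
```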
(b) Generating live picture data in a fusion mode.

In this mode, the second interaction behavior data and the scene data are fused first, and the fusion result is the live picture data to be rendered and displayed. During fusion, the data must be aligned; for example, the data corresponding to the first three-dimensional model in the live picture data is aligned with the data corresponding to the second real scene object in the scene data, so that in the rendered live picture the first three-dimensional model overlaps the second real scene object. In this way, the interaction effect between the virtual anchor and the second real scene object in the second real scene can be rendered and displayed directly, achieving the same effect as in (a) above. A sketch of the alignment step follows.
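In this sketch, `detected_object_pose` is an assumed name for the second real scene object's pose estimated from the scene data; the record layout is illustrative only.

```python
def fuse(behavior_data: dict, scene_data: dict) -> dict:
    """Align the transparent model to the real object's detected pose, then
    bake behavior and scene into a single live-picture record."""
    aligned = dict(behavior_data)
    aligned["model_pose"] = scene_data["detected_object_pose"]
    return {"source": "fused", "scene": scene_data, "behavior": aligned}

live_picture = fuse({"action": "insert_tool"},
                    {"detected_object_pose": (0.0, 0.0, 1.5),
                     "frame": "second_real_scene_frame"})
print(live_picture)
```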
For example, referring to fig. 4, a schematic diagram of a live picture rendered and displayed using live picture data is provided in an embodiment of the present disclosure. In the live picture, similar to fig. 3 above, the second real scene object 41 in fig. 4 is of the same type as the first real scene object 33 in fig. 3; the first three-dimensional model corresponding to the second real scene object 41 is not displayed because it is transparent; and the virtual anchor 42 interacts with the second real scene object through the first virtual scene tool 43. The virtual anchor 42 and the first virtual scene tool 43 do not actually exist; they are part of the rendered picture.
In another embodiment of the present disclosure, the first real scene item picked up in the first real scene may also interact with other real scene items present in the first real scene, such as a dip. Another real scene item that the first real scene item can interact with, such as the dip, is referred to here as the third real scene item, and the interaction behavior data generated by the interaction between the first real scene item and the third real scene item is called third interaction behavior data.
In a specific implementation, third interaction behavior data of the real anchor interacting the picked-up first real scene item with a third real scene item in the first real scene using the first real scene tool may be acquired.
Here, the interaction behavior between the first real scene item and the third real scene item includes, on the one hand, the action data of the first real scene item being dipped into the third real scene item and then taken out again; on the other hand, it may further include state data of the third real scene item hanging on the first real scene item after the dipping, and state data indicating that the amount of the third real scene item has decreased.
Correspondingly, based on the third interaction behavior data, fourth interaction behavior data of the virtual anchor interacting the second three-dimensional model with a third three-dimensional model corresponding to a fourth real scene item in the second real scene using the first virtual scene tool, together with a state change special effect of the second three-dimensional model after the interaction, can be generated.
The fourth real scene item is an item of the same type as the third real scene item that actually exists in the second real scene; for example, like the third real scene item in the example above, it also includes a dip, or a dip dish in which the dip is placed. Correspondingly, the third three-dimensional model corresponding to the fourth real scene item includes a dip model or a dip dish model.
For example, if the third three-dimensional model includes a dip model, the fourth interaction behavior data controlling the interaction between the second three-dimensional model and the third three-dimensional model includes inserting the second three-dimensional model into the third three-dimensional model and withdrawing it, and updating the third three-dimensional model to indicate that the dip in it has decreased. In addition, after the interaction, the state change special effect of the second three-dimensional model may include, for example, the special effect of dip hanging on the second three-dimensional model, or further, the effect of the hanging dip slowly flowing down the second three-dimensional model under gravity.
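The gravity-driven flow-down effect can be approximated as time-based decay of the dip coverage on the second three-dimensional model; the drain rate below is an arbitrary illustrative constant, not a parameter from the disclosure:

```python
def dip_coverage(t_since_dip_s: float, initial_coverage: float = 1.0) -> float:
    """Fraction of the model still coated with dip, decaying as it flows down."""
    drain_rate = 0.15  # assumed fraction lost per second
    return max(0.0, initial_coverage - drain_rate * t_since_dip_s)

for t in (0.0, 2.0, 5.0):
    print(t, dip_coverage(t))  # coverage shrinks until the dip has run off
```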
After the fourth interaction behavior data and the state change special effect of the second three-dimensional model are obtained, second live broadcast picture data for interaction between the second three-dimensional model and the third three-dimensional model can be generated.
Here, when the second live picture data is generated, special effect data corresponding to the second real scene object may be generated in response to the distance between the second three-dimensional model and the virtual camera corresponding to the AR live scene being smaller than a set threshold, and the second live picture data containing the special effect data may be determined.
Here, the virtual camera may be understood as a virtual camera used to shoot the virtual scene. If the real anchor controls the virtual anchor to bring the second three-dimensional model close to the virtual camera, and the distance between the second three-dimensional model and the virtual camera is determined to be smaller than a set threshold, for example 10 cm, more detail of the second three-dimensional model can be displayed; for example, a mist special effect is determined to be displayed, and mist special effect data for the corresponding position is generated according to the position of the second real scene object. With the mist special effect data thus determined, the second live picture data can be further determined. In addition, after the second live picture data is fused with the scene data of the second real scene, the fourth live picture data can be generated.
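A sketch of the distance check; the 10 cm threshold mirrors the example above, and the frame/effect field names are assumed shapes for illustration:

```python
import math

MIST_THRESHOLD_M = 0.10  # the 10 cm example threshold

def maybe_add_mist(model_pos, camera_pos, frame: dict) -> dict:
    """Attach a mist special effect when the food model nears the virtual camera."""
    if math.dist(model_pos, camera_pos) < MIST_THRESHOLD_M:
        effects = frame.get("effects", []) + [{"type": "mist", "anchor": model_pos}]
        frame = {**frame, "effects": effects}
    return frame

frame = maybe_add_mist((0.0, 0.0, 0.05), (0.0, 0.0, 0.0), {"frame": "live"})
print(frame)  # 5 cm < threshold, so the mist effect is attached
```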
Here, the scene data fusion may be performed in the manner described in (b) above and is not repeated here. In this way, the obtained fourth live picture data can also display the behavior of the virtual anchor controlling the interaction between the second three-dimensional model and the third three-dimensional model.
With the live broadcast control method provided by the embodiments of the present disclosure, after the first interaction behavior data between the real anchor and the first real scene object is obtained, it can be used to generate second interaction behavior data between the virtual anchor blended into the second real scene and the first three-dimensional model, which is made of transparent material and simulates the second real scene object. When the second interaction behavior data is blended into the second real scene, live picture data of the virtual anchor interacting with the second real scene object in the second real scene can be presented, realizing interaction control between the virtual character and the real scene object and enhancing the realistic interaction effect in the AR scene.
It will be appreciated by those skilled in the art that, in the methods of the specific embodiments above, the written order of the steps does not imply a strict order of execution; the actual execution order should be determined by the functions of the steps and their possible inherent logic.
Based on the same inventive concept, an embodiment of the present disclosure further provides a live broadcast control apparatus corresponding to the live broadcast control method. Since the principle by which the apparatus solves the problem is similar to that of the live broadcast control method of the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated details are omitted.
As shown in fig. 5, the live broadcast control apparatus provided by an embodiment of the present disclosure includes an acquisition module 51, a first generation module 52, and a second generation module 53, where:
the acquisition module 51 is configured to acquire first interaction behavior data between a real anchor in a first real scene and a first real scene object;
the first generation module 52 is configured to generate, based on the first interaction behavior data, second interaction behavior data between a virtual anchor to be blended into a second real scene and a first three-dimensional model corresponding to a second real scene object, where the first three-dimensional model is made of transparent material;
the second generation module 53 is configured to generate, based on the second interaction behavior data and scene data of the second real scene, live picture data of the virtual anchor interacting with the second real scene object in the second real scene.
In an alternative embodiment, the first generation module 52 is further configured to, after generating the second interaction behavior data: update the first three-dimensional model in response to a state change of the first real scene object or the second real scene object, where the state data of the updated first three-dimensional model differs from the state data of the first three-dimensional model before the update.
In an alternative embodiment, when acquiring the first interaction behavior data between the real anchor in the first real scene and the first real scene object, the acquisition module 51 is configured to: acquire first interaction behavior data of the real anchor interacting with the first real scene object using a first real scene tool, where the first interaction behavior data includes first action data and first state data of the first real scene tool relative to the first real scene object. When generating the second interaction behavior data based on the first interaction behavior data, the first generation module 52 is configured to: generate, based on the first interaction behavior data, second interaction behavior data of the virtual anchor interacting with the first three-dimensional model using a first virtual scene tool, where the second interaction behavior data includes second action data and second state data of the first virtual scene tool relative to the first three-dimensional model.
In an alternative embodiment, when acquiring the first interaction behavior data of the real anchor interacting with the first real scene object using the first real scene tool, the acquisition module 51 is configured to: acquire first interaction behavior data of the real anchor picking up a first real scene item from the first real scene object using the first real scene tool. When generating the second interaction behavior data of the virtual anchor interacting with the first three-dimensional model using the first virtual scene tool, the first generation module 52 is configured to: generate second interaction behavior data of the virtual anchor picking up, from the second real scene object, a second three-dimensional model corresponding to a second real scene item using the first virtual scene tool.
In an alternative embodiment, the obtaining module 51 is further configured to: acquire third interaction behavior data of the real anchor interacting the picked first real scene item with a third real scene item in the first real scene using the first real scene tool; generate, based on the third interaction behavior data, fourth interaction behavior data of the virtual anchor interacting the second three-dimensional model with a third three-dimensional model corresponding to a fourth real scene item in the second real scene using the first virtual scene tool, together with a state change special effect of the second three-dimensional model after the interaction; generate second live broadcast picture data of the interaction between the second three-dimensional model and the third three-dimensional model based on the fourth interaction behavior data and the state change special effect of the second three-dimensional model; and fuse the second live broadcast picture data with the scene data of the second real scene to generate fourth live broadcast picture data.
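Strung together, this embodiment reads as a four-step pipeline: fourth interaction data, then the state change special effect, then the second live broadcast picture data, then the fused fourth live broadcast picture data. The sketch below is one hedged rendering of that ordering; the effect payload and the fusion step are placeholders.

```python
# Hypothetical pipeline; to_ar_frame is the assumed scene-to-scene transform.
def generate_fourth_live_picture(third_data: dict, to_ar_frame, scene_data: dict) -> dict:
    # 1. Map the real dip interaction onto the two models (fourth interaction data).
    fourth_data = {k: to_ar_frame(v) for k, v in third_data.items()}
    # 2. State change special effect on the dipped model (e.g. a sauce coating).
    effect = {"target": "second_model", "kind": "coated"}
    # 3. Model-vs-model interaction frames (second live broadcast picture data).
    second_pictures = {"interaction": fourth_data, "effect": effect}
    # 4. Fuse with the second real scene's data (fourth live broadcast picture data).
    return {**second_pictures, "scene": scene_data}
```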
In an alternative embodiment, the first real scene object and the second real scene object are hot pots; the first virtual scene tool and the first real scene tool are tableware; the first real scene item and the second real scene item are food materials; and the third real scene item and the fourth real scene item are dipping sauces.
In an alternative embodiment, the second generation module 53, when generating the second live broadcast picture data of the virtual anchor interacting with the second real scene object in the second real scene, is configured to: generate special effect data corresponding to the second real scene item in response to the distance between the second three-dimensional model and the virtual camera corresponding to the AR live scene being smaller than a set threshold, and determine the second live broadcast picture data containing the special effect data.
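The distance trigger itself is simple to state. Below is a minimal sketch, assuming positions are 3-tuples in the AR scene's coordinate frame; the threshold value and the effect payload are illustrative.

```python
import math

# Hypothetical proximity trigger for the special effect (e.g. rising steam
# when the food model is brought close to the virtual camera).
def maybe_trigger_effect(model_pos, camera_pos, threshold=0.3):
    if math.dist(model_pos, camera_pos) < threshold:  # "set threshold" is illustrative
        return {"effect": "steam", "anchor": model_pos}
    return None
```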
The processing flow of each module in the apparatus, and the interaction flow between the modules, may refer to the related descriptions in the above method embodiments and are not repeated here.
An embodiment of the present disclosure further provides a computer device. Fig. 6 is a schematic structural diagram of the computer device provided by the embodiment of the present disclosure, which includes:
a processor 10 and a memory 20; the memory 20 stores machine readable instructions executable by the processor 10, and the processor 10 is configured to execute the machine readable instructions stored in the memory 20; when the machine readable instructions are executed by the processor 10, the processor 10 performs the following steps:
acquiring first interaction behavior data between a real anchor in a first real scene and a first real scene object; generating, based on the first interaction behavior data, second interaction behavior data between a virtual anchor to be fused into a second real scene and a first three-dimensional model corresponding to a second real scene object, the first three-dimensional model being rendered with a transparent material; and generating, based on the second interaction behavior data and the scene data of the second real scene, live broadcast picture data of the virtual anchor interacting with the second real scene object in the second real scene.
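Read as plain control flow, these three steps simply chain the hypothetical modules sketched for Fig. 5 above (the sketch assumes those class definitions are in scope and shows only the ordering, not the device's actual instructions):

```python
# Hypothetical end-to-end ordering of the three steps.
def run_live_control(sensor_feed: dict, scene_data: dict) -> dict:
    first = ObtainingModule().obtain(sensor_feed)             # step 1: acquire
    second = FirstGenerationModule().generate(first)          # step 2: map to the virtual anchor
    return SecondGenerationModule().generate(second, scene_data)  # step 3: fuse and render
```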
The memory 20 includes an internal memory 210 and an external memory 220; the internal memory 210 is used for temporarily storing operation data in the processor 10 and data exchanged with the external memory 220, such as a hard disk; the processor 10 exchanges data with the external memory 220 via the internal memory 210.
The specific execution process of the above instructions may refer to the steps of the live broadcast control method described in the embodiments of the present disclosure, and is not repeated here.
The embodiments of the present disclosure further provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the live broadcast control method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer readable storage medium.
The embodiments of the present disclosure further provide a computer program product carrying program code; the instructions included in the program code may be used to perform the steps of the live broadcast control method described in the above method embodiments. Reference may be made to the above method embodiments; details are not repeated here.
The above computer program product may be implemented by hardware, software, or a combination of the two. In an alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK).
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working procedures of the system and apparatus described above may refer to the corresponding procedures in the foregoing method embodiments, and are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some communication interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a non-volatile computer readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the foregoing embodiments are merely specific implementations of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may, within the technical scope disclosed by the present disclosure, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of the technical features thereof; such modifications, changes, or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure and shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A live broadcast control method, characterized in that the method is used for controlling a live broadcast picture showing an augmented reality (AR) live scene; the live broadcast control method comprises:
acquiring first interaction behavior data between a real anchor in a first real scene and a first real scene object;
generating, based on the first interaction behavior data, second interaction behavior data between a virtual anchor to be fused into a second real scene and a first three-dimensional model corresponding to a second real scene object; wherein the first three-dimensional model is rendered with a transparent material;
and generating live broadcast picture data of the virtual anchor interacting with the second real scene object in the second real scene based on the second interaction behavior data and the scene data of the second real scene.
2. The method of claim 1, wherein after generating the second interaction behavior data, the method further comprises:
updating the first three-dimensional model in response to a state change of the first real scene object or the second real scene object; wherein the state data of the updated first three-dimensional model differs from the state data of the first three-dimensional model prior to the update.
3. The method of claim 1, wherein obtaining first interaction behavior data between a real anchor in a first real scene and a first real scene object comprises:
acquiring first interaction behavior data of the real anchor interacting with the first real scene object using a first real scene tool; the first interaction behavior data comprises first action data and first state data of the first real scene tool relative to the first real scene object;
the generating the second interaction behavior data based on the first interaction behavior data includes:
generating, based on the first interaction behavior data, the second interaction behavior data for the virtual anchor to interact with the first three-dimensional model using a first virtual scene tool; the second interaction behavior data includes second action data and second state data of the first virtual scene tool relative to the first three-dimensional model.
4. The method of claim 3, wherein the acquiring of the first interaction behavior data of the real anchor interacting with the first real scene object using the first real scene tool comprises:
acquiring first interaction behavior data of the real anchor picking up a first real scene item from the first real scene object using the first real scene tool;
the generating of the second interaction behavior data of the virtual anchor interacting with the first three-dimensional model using the first virtual scene tool comprises:
generating second interaction behavior data of the virtual anchor picking up, from the second real scene object, a second three-dimensional model corresponding to a second real scene item using the first virtual scene tool.
5. The method according to claim 4, wherein the method further comprises:
acquiring third interaction behavior data of the real anchor interacting the picked first real scene item with a third real scene item in the first real scene using the first real scene tool;
generating, based on the third interaction behavior data, fourth interaction behavior data of the virtual anchor interacting the second three-dimensional model with a third three-dimensional model corresponding to a fourth real scene item in the second real scene using the first virtual scene tool, and a state change special effect of the second three-dimensional model after the interaction;
generating second live broadcast picture data of the interaction between the second three-dimensional model and the third three-dimensional model based on the fourth interaction behavior data and the state change special effect of the second three-dimensional model;
and fusing the second live broadcast picture data with the scene data of the second real scene to generate fourth live broadcast picture data.
6. The method of claim 5, wherein the first real scene object and the second real scene object are hot pots; the first virtual scene tool and the first real scene tool are tableware; the first real scene item and the second real scene item are food materials; and the third real scene item and the fourth real scene item are dipping sauces.
7. The method of claim 5, wherein the generating of the second live broadcast picture data of the virtual anchor interacting with the second real scene object in the second real scene comprises:
generating special effect data corresponding to the second real scene item in response to the distance between the second three-dimensional model and the virtual camera corresponding to the AR live scene being smaller than a set threshold, and determining the second live broadcast picture data containing the special effect data.
8. A live broadcast control apparatus, comprising:
an obtaining module, used for obtaining first interaction behavior data between a real anchor in a first real scene and a first real scene object;
a first generation module, used for generating, based on the first interaction behavior data, second interaction behavior data between a virtual anchor to be fused into a second real scene and a first three-dimensional model corresponding to a second real scene object; the first three-dimensional model is rendered with a transparent material;
and a second generation module, used for generating, based on the second interaction behavior data and the scene data of the second real scene, live broadcast picture data of the virtual anchor interacting with the second real scene object in the second real scene.
9. A computer device, comprising: a processor and a memory, the memory storing machine readable instructions executable by the processor, and the processor being configured to execute the machine readable instructions stored in the memory; when the machine readable instructions are executed by the processor, the processor performs the steps of the live broadcast control method according to any one of claims 1 to 7.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a computer program which, when run by a computer device, performs the steps of the live broadcast control method according to any one of claims 1 to 7.
CN202210044420.4A 2022-01-14 2022-01-14 Live broadcast control method, live broadcast control device, computer equipment and storage medium Active CN114401415B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210044420.4A CN114401415B (en) 2022-01-14 2022-01-14 Live broadcast control method, live broadcast control device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114401415A CN114401415A (en) 2022-04-26
CN114401415B true CN114401415B (en) 2024-04-12

Family

ID=81230508

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210044420.4A Active CN114401415B (en) 2022-01-14 2022-01-14 Live broadcast control method, live broadcast control device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114401415B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107750014A (en) * 2017-09-25 2018-03-02 迈吉客科技(北京)有限公司 A co-hosted (lianmai) live streaming method and system
CN111610998A (en) * 2020-05-26 2020-09-01 北京市商汤科技开发有限公司 AR scene content generation method, display method, device and storage medium
CN111899350A (en) * 2020-07-31 2020-11-06 北京市商汤科技开发有限公司 Augmented reality AR image presentation method and device, electronic device and storage medium
CN111897431A (en) * 2020-07-31 2020-11-06 北京市商汤科技开发有限公司 Display method and device, display equipment and computer readable storage medium
CN112135158A (en) * 2020-09-17 2020-12-25 重庆虚拟实境科技有限公司 Live broadcasting method based on mixed reality and related equipment
CN113808242A (en) * 2021-09-23 2021-12-17 广州虎牙科技有限公司 Image synthesis method and device and image processing equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9536352B2 (en) * 2014-03-27 2017-01-03 Intel Corporation Imitating physical subjects in photos and videos with augmented reality virtual objects
WO2019213111A1 (en) * 2018-04-30 2019-11-07 Meta View, Inc. System and method for presenting virtual content in an interactive space

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research opportunities on virtual and augmented reality: a survey; M. Shangmugam; 《IEEE ICSCAN》; full text *
Research and Implementation of a Virtual Character Assistant in an Augmented Reality Environment; Guan Yuxiang; 《中国优秀硕士论文电子期刊》; full text *
Design and Implementation of Teaching-Learning Interaction in a Virtual Simulation Teaching Experiment Center; Zhang Chunming et al.; 《实验科学与技术》; full text *

Also Published As

Publication number Publication date
CN114401415A (en) 2022-04-26

Similar Documents

Publication Publication Date Title
CN112348969B (en) Display method and device in augmented reality scene, electronic equipment and storage medium
CN106170083B (en) Image processing for head mounted display device
CN110716645A (en) Augmented reality data presentation method and device, electronic equipment and storage medium
US10380803B1 (en) Methods and systems for virtualizing a target object within a mixed reality presentation
US20180173404A1 (en) Providing a user experience with virtual reality content and user-selected, real world objects
CN109743892B (en) Virtual reality content display method and device
CN111080759B (en) Method and device for realizing split mirror effect and related product
CN108273265A (en) The display methods and device of virtual objects
CN108765270B (en) Virtual three-dimensional space tag binding method and device
CN109254650B (en) Man-machine interaction method and device
CN109598796A (en) Real scene is subjected to the method and apparatus that 3D merges display with dummy object
CN108668050B (en) Video shooting method and device based on virtual reality
KR20230022269A (en) Augmented reality data presentation method and apparatus, electronic device, and storage medium
CN108829468B (en) Three-dimensional space model skipping processing method and device
CN111679742A (en) Interaction control method and device based on AR, electronic equipment and storage medium
CN108880983B (en) Real-time voice processing method and device for virtual three-dimensional space
CN110302524A (en) Limbs training method, device, equipment and storage medium
CN111954045A (en) Augmented reality device and method
CN108765084B (en) Synchronous processing method and device for virtual three-dimensional space
CN114222076B (en) Face changing video generation method, device, equipment and storage medium
CN113965773A (en) Live broadcast display method and device, storage medium and electronic equipment
CN110544315A (en) control method of virtual object and related equipment
CN114401415B (en) Live broadcast control method, live broadcast control device, computer equipment and storage medium
CN112511815B (en) Image or video generation method and device
CN107204026B (en) Method and device for displaying animation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant