CN113996062A - Picture display method and device and electronic equipment - Google Patents

Picture display method and device and electronic equipment

Info

Publication number
CN113996062A
CN113996062A (application CN202111350242.XA)
Authority
CN
China
Prior art keywords
virtual
meeting place
picture
scene
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111350242.XA
Other languages
Chinese (zh)
Inventor
陈铭
刘柏
李均
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202111350242.XA priority Critical patent/CN113996062A/en
Publication of CN113996062A publication Critical patent/CN113996062A/en
Pending legal-status Critical Current

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 - Controlling the output signals based on the game progress
    • A63F13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 - Controlling game characters or game objects based on the game progress
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/85 - Providing additional services to players
    • A63F13/87 - Communicating with other players during game play, e.g. by e-mail or chat
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game, characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/308 - Details of the user interface

Abstract

The invention provides a picture display method and apparatus and an electronic device. In response to a movement operation directed at a virtual character in a virtual scene, the virtual character is controlled to move within the virtual scene; a first target meeting place provided by the virtual scene contains two or more virtual display models; and when the virtual character is located in the first target meeting place, a first picture corresponding to a first data stream is displayed through a first virtual display model and a second picture corresponding to a second data stream is displayed through a second virtual display model. In this manner, the virtual meeting place provides multiple virtual display models that each show a different picture, so the contents presented on the graphical user interface do not occlude one another. The user can also control the virtual character to move within the virtual scene and browse the contents of the virtual display models from different angles and distances, which enhances immersion and the flexibility of attending a meeting and improves the user experience.

Description

Picture display method and device and electronic equipment
Technical Field
The invention relates to the technical field of immersive activity systems, and in particular to a picture display method and apparatus and an electronic device.
Background
In a live-broadcast or online-conference scene, the content of a presenter's terminal display screen is generally transmitted in real time to a target application, so that viewers can watch the shared content in real time on their own terminal screens. However, when there are multiple video pictures, such as an anchor picture and a live picture, they are usually mixed together on the same 2D plane of the terminal's display screen. The video pictures then easily occlude one another, and other pictures may cover key information in the main video picture, resulting in a poor user experience.
Disclosure of Invention
In view of this, an object of the present invention is to provide a picture display method and apparatus and an electronic device, so that the pictures displayed in a graphical user interface do not occlude one another and the user can browse the contents of the virtual display models from different angles and distances, enhancing immersion and the flexibility of attending a meeting and improving the user experience.
In a first aspect, an embodiment of the present invention provides a picture display method, where a graphical user interface is provided by a first terminal and displays a virtual scene. The method includes: in response to a movement operation directed at a virtual character in the virtual scene, controlling the virtual character to move in the virtual scene, the virtual character being a character controlled by the first terminal; providing a first target meeting place through the virtual scene, the first target meeting place containing two or more virtual display models; and, when the virtual character is located in the first target meeting place, displaying a first picture corresponding to a first data stream through a first one of the virtual display models and a second picture corresponding to a second data stream through a second one of the virtual display models, where the first data stream and the second data stream are determined according to a first channel corresponding to the first target meeting place.
Further, the first data stream or the second data stream contains any one of the following: screen data shared by a second terminal; anchor picture data corresponding to the first channel; or picture data of the first target meeting place captured from any angle.
Further, a picture corresponding to the screen data shared by the second terminal is displayed through a main virtual display model among the virtual display models.
Further, the virtual display models in the first target meeting place display different picture contents.
Further, the two or more virtual display models are arranged in an arc in the first target meeting place.
Further, the main virtual display model among the virtual display models is the one located at the center of all virtual display models in the first target meeting place.
Further, the method further includes: configuring at least one chat scene area in the first target meeting place and configuring corresponding chat-room information for each chat scene area; in response to a movement operation directed at the virtual character, controlling the virtual character to move within the first target meeting place; when the virtual character moves into a target chat scene area preset in the first target meeting place, obtaining the chat-room information of that target chat scene area; and enabling a voice-call function for the chat object corresponding to the virtual character and adding the chat object to the chat room corresponding to the target chat scene area according to the chat-room information, so that the chat objects in that chat room can talk with one another.
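The chat-scene-area mechanism described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the class and function names (`ChatArea`, `entered_room`), the rectangular area shape, and the room identifiers are all assumptions.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ChatArea:
    """An axis-aligned rectangular chat scene area preset in the meeting place."""
    room_id: str
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

def entered_room(areas: List[ChatArea], x: float, y: float) -> Optional[str]:
    """Return the chat-room id of the area the character currently stands in, if any."""
    for area in areas:
        if area.contains(x, y):
            return area.room_id
    return None

# Two non-overlapping chat areas inside the first target meeting place (illustrative).
AREAS = [ChatArea("room-1", 0, 0, 10, 10), ChatArea("room-2", 20, 0, 30, 10)]
```

When `entered_room` returns a room id, a real system would then open the voice call and join the chat object to that room according to the stored chat-room information.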
Further, the method further comprises: and when the virtual character reaches a second target meeting place in the virtual scene, displaying a picture of a corresponding target data stream through a virtual display model in the second target meeting place, wherein the target data stream is determined according to a second channel corresponding to the second target meeting place.
Further, the method further includes: in response to a fun interaction operation directed at the virtual character, controlling the virtual character to execute the fun action corresponding to that operation, where the fun action is used to draw the attention of the user of the first terminal to the graphical user interface.
Further, the method further includes: in response to a trigger operation on a speak-application control in the graphical user interface, generating a speak application for the virtual character and sending it to a third terminal; in response to a speak-approval message fed back by the third terminal, displaying a speak control in the graphical user interface; and, in response to a trigger operation on the speak control, collecting the speech information of the virtual character and sending it to the terminals corresponding to all virtual characters in the virtual scene.
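The request-to-speak flow above is essentially a small state machine. The sketch below is a hedged illustration with assumed state names and methods; the patent does not prescribe this structure.

```python
class SpeakRequestFlow:
    """States: 'idle' -> 'pending' (application sent) -> 'speaking' (host approved)."""

    def __init__(self) -> None:
        self.state = "idle"

    def request_to_speak(self) -> None:
        # Triggered by the speak-application control in the GUI.
        if self.state == "idle":
            self.state = "pending"

    def host_approves(self) -> None:
        # Corresponds to the approval message fed back by the third terminal.
        if self.state == "pending":
            self.state = "speaking"

    def host_denies(self) -> None:
        if self.state == "pending":
            self.state = "idle"
```

Only in the "speaking" state would the terminal show the speak control and broadcast the collected speech to the other terminals.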
Further, the method further includes: in response to a content-switching operation directed at the first virtual display model, obtaining a third data stream corresponding to that operation and updating the picture displayed by the first virtual display model to the picture corresponding to the third data stream.
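Content switching amounts to rebinding one display model to a new stream while the other bindings stay untouched. A minimal sketch, with assumed model and stream identifiers:

```python
def switch_content(model_streams: dict, model_id: str, new_stream: str) -> dict:
    """Return an updated model->stream binding with one model's stream replaced."""
    updated = dict(model_streams)  # leave the original binding unmodified
    updated[model_id] = new_stream
    return updated
```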
In a second aspect, an embodiment of the present invention provides a picture display apparatus, where a graphical user interface is provided by a first terminal and displays a virtual scene. The apparatus includes: a control module, configured to control a virtual character to move in the virtual scene in response to a movement operation directed at the virtual character, the virtual character being a character controlled by the first terminal; a meeting place module, configured to provide a first target meeting place through the virtual scene, the first target meeting place containing two or more virtual display models; and a display module, configured to, when the virtual character is located in the first target meeting place, display a first picture corresponding to a first data stream through a first one of the virtual display models and a second picture corresponding to a second data stream through a second one of the virtual display models, where the first data stream and the second data stream are determined according to a first channel corresponding to the first target meeting place.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor and a memory, where the memory stores machine executable instructions capable of being executed by the processor, and the processor executes the machine executable instructions to implement the method for displaying a screen according to any one of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, implement the picture display method of any implementation of the first aspect.
The embodiment of the invention has the following beneficial effects:
the invention provides a picture display method and apparatus and an electronic device. In response to a movement operation directed at a virtual character in a virtual scene, the virtual character is controlled to move within the virtual scene; a first target meeting place provided by the virtual scene contains two or more virtual display models; and when the virtual character is located in the first target meeting place, a first picture corresponding to a first data stream is displayed through a first virtual display model and a second picture corresponding to a second data stream is displayed through a second virtual display model. In this manner, the user can control the virtual character to move within the meeting place of the virtual scene, the meeting place provides multiple virtual display models, and different pictures are displayed in different virtual display models. The models and the pictures inside them can all be shown on the graphical user interface without occluding one another, and different picture effects can be presented according to the user's needs. The user can also control the virtual character to move within the virtual scene and browse the contents of the virtual display models from different angles and distances, which enhances immersion and the flexibility of attending a meeting and improves the user experience.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flowchart illustrating a method for displaying a frame according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a virtual scene according to an embodiment of the present invention;
fig. 3 is a schematic diagram of another virtual scene provided in the embodiment of the present invention;
fig. 4 is a schematic diagram of another virtual scene provided in the embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an apparatus for displaying images according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
At present, in a live-broadcast or online-conference scene, the content of a presenter's terminal display screen is transmitted in real time to a target application, so that viewers can watch the shared content in real time on their own terminal screens. However, when there are multiple video pictures, such as an anchor picture and a live picture, they are usually mixed together on the same 2D plane of the terminal's display screen. The video pictures then easily occlude one another, and other pictures may cover key information in the main video picture, resulting in a poor user experience. For example, a lecturer shares a file and a student watches it on the terminal's display screen; key content sits in the lower-right corner of the file, but that corner typically also shows the lecturer's camera picture, which blocks the key content, so the student cannot see the part of the file they want to watch. In view of this, the technique described herein can be applied to an electronic device equipped with an immersive activity system.
To facilitate understanding, a detailed description is first given of the picture display method disclosed in the embodiment of the present invention. A graphical user interface is provided through a first terminal and displays a virtual scene; the graphical user interface may be that of an immersive activity system, and the virtual scene contains a plurality of virtual display models.
The term Metaverse comes from the science fiction novel Snow Crash, which constructs a new world called the Metaverse: a world that does not exist in reality, runs parallel to the real world, and is always online. The immersive activity system described in this embodiment realizes such a world on a computer network: it provides a three-dimensional virtual scene, supports many users online simultaneously, and can host online conferences, press conferences, exhibition activities, online classes, live broadcasts, and the like within the virtual scene.
As shown in fig. 1, the method comprises the steps of:
step S102, responding to the movement operation aiming at the virtual character in the virtual scene, and controlling the virtual character to move in the virtual scene; the virtual role is a role controlled by the first terminal;
the virtual scene refers to a three-dimensional scene, and may be a three-dimensional virtual scene provided by the immersive activity system, that is, the virtual scene may include virtual characters corresponding to a plurality of users or virtual characters controlled by a plurality of terminals. Each virtual character can move or otherwise operate within the virtual scene.
A virtual scene generally contains virtual characters with multiple identities, such as a character with a lecturer identity, one with a host identity, and ordinary-audience characters; the virtual character here may be an ordinary audience member in the virtual scene. The movement operation includes a position-movement operation; it can be understood that the viewing angle of the virtual character changes after its position moves. Movement corresponds to actions such as walking and running: controlling the virtual character to move in the virtual scene changes its position in the scene. When the virtual character executes the action corresponding to the control operation, its viewing angle changes, just as a real user could walk from the left side of a display board to its right side, so that objects in the virtual scene, such as the picture played in each virtual display model, can be viewed from different angles.
Specifically, the user of the first terminal may input a movement instruction through a mouse or keyboard to control the virtual character to move in the virtual scene, or may control it through touch operations on the virtual character. Different first-terminal products usually have different control modes.
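The input-to-movement mapping can be sketched as below. The key bindings and 2D coordinates are illustrative assumptions; a real client would map mouse, keyboard, or touch events to engine-specific movement commands.

```python
# Hypothetical key bindings mapping a key press to a unit direction (dx, dy).
KEY_BINDINGS = {"w": (0.0, 1.0), "s": (0.0, -1.0), "a": (-1.0, 0.0), "d": (1.0, 0.0)}

def apply_move(position: tuple, key: str, speed: float = 1.0) -> tuple:
    """Return the character's new (x, y) position after one movement input.
    Unknown keys leave the position unchanged."""
    dx, dy = KEY_BINDINGS.get(key, (0.0, 0.0))
    return (position[0] + dx * speed, position[1] + dy * speed)
```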
Step S104, providing a first target meeting place through the virtual scene, where the first target meeting place contains two or more virtual display models.
the virtual scene is a three-dimensional scene, a first target meeting place provided by the virtual scene is also a 3D-form meeting place, and the first target meeting place can be an online meeting, a news release meeting, an exhibition event, an online classroom, live broadcast and the like. Moreover, the virtual scene can also provide a plurality of meeting places at the same time. The virtual display models are 3D models having a function of displaying pictures, and each virtual display model includes a display screen. And the position in each virtual exhibition model meeting place is usually different, the size and the shape of the virtual exhibition model can be the same or different, and the virtual exhibition model can be set according to the actual meeting place requirements.
For example, if the first target meeting place includes three virtual display models, one may be placed at the center of the first target meeting place, with one more on each of its left and right sides.
Step S106, when the virtual character is located in the first target meeting place, displaying a first picture corresponding to a first data stream through a first one of the virtual display models and a second picture corresponding to a second data stream through a second one of the virtual display models, where the first data stream and the second data stream are determined according to a first channel corresponding to the first target meeting place.
The first data stream or the second data stream may be data corresponding to the screen shared by a second terminal, which corresponds to a designated virtual character in the scene; it may cover everything displayed on the second terminal's screen, or only a particular file, presentation, or picture (such as a game picture) displayed on that screen. Likewise, the first picture or the second picture may be an anchor's live picture, a lecturer's file such as a presentation, the camera picture of the anchor or lecturer, or a live picture of the virtual scene. The first picture and the second picture are different pictures; for example, the first picture is the anchor's live picture and the second picture is the anchor's camera picture.
The first channel is channel information preset for the first target meeting place, through which the first terminal can obtain the corresponding data streams. Generally, when a virtual character joins the first target meeting place, the server directly sends the channel information of the first channel corresponding to that meeting place to the terminals of the virtual characters in the meeting place, for example to the anchor and the audience. When the anchor starts broadcasting, the first terminal determines and obtains the first data stream and the second data stream according to the first channel corresponding to the first target meeting place, then displays the first picture corresponding to the first data stream in the first virtual display model and the second picture corresponding to the second data stream in the second virtual display model.
In actual implementation, because the first picture and the second picture are displayed in different virtual display models, the user controlling the virtual character can move it around, and the meeting place picture after the move is displayed on the graphical user interface with the first picture and the second picture unoccluded; that is, the graphical user interface can display multiple pictures that do not block one another, and the user can watch both at the same time through the terminal. Of course, more virtual display models can be provided, so the user can watch several mutually unoccluded pictures. Within the first target meeting place, the user can view the virtual display models from many angles and choose which one to watch and from where, as needed.
Fig. 2 is a schematic diagram of the graphical user interface when the virtual character stands at the middle position in front of the first virtual display model; the user can see three virtual display models that do not occlude one another. It can be understood that the display effect of the virtual display models as seen by the virtual character in the first target meeting place is consistent with the display effect shown on the graphical user interface.
In addition, when the virtual character moves close to the first virtual display model, as in the graphical user interface shown in fig. 3, the pictures in the first and second virtual display models become clearer, and the display effect shown on the graphical user interface remains consistent with what the virtual character sees in the first target meeting place. The user can control their own character in the virtual scene and watch the presentation shared from the lecturer's screen, or the anchor's live broadcast, more immersively, just like watching a live show or attending a lecture in the real world.
The embodiment of the invention provides a picture display method: in response to a movement operation directed at a virtual character in a virtual scene, the virtual character is controlled to move within the virtual scene; a first target meeting place provided by the virtual scene contains two or more virtual display models; and when the virtual character is located in the first target meeting place, a first picture corresponding to a first data stream is displayed through the first virtual display model and a second picture corresponding to a second data stream is displayed through the second virtual display model. In this manner, the user can control the virtual character to move within the meeting place, different pictures are displayed in different virtual display models, and the models together with their pictures can all be shown on the graphical user interface without occluding one another. Different picture effects can be presented according to the user's needs, and the user can browse the contents of the virtual display models from different angles and distances, which enhances immersion and the flexibility of attending a meeting and improves the user experience.
It can be understood that the virtual character can be controlled to move freely in the virtual scene and thus view the virtual display models from different angles, and the scene pictures viewed from those angles are displayed on the graphical user interface. This means the user can watch each picture from different angles and can choose an angle from which none of the pictures occlude one another.
The following describes data specifically included in the first data stream and the second data stream, where the first data stream or the second data stream includes any of the following data: screen data shared by the second terminal; anchor picture data corresponding to a first channel; and capturing picture data of the first target meeting place based on any angle.
The screen data shared by the second terminal may be data displayed on the second terminal's screen, for example a presentation shared from a lecturer's screen, or a live picture shared from an anchor's screen (such as a game picture). In actual implementation, the user of the second terminal can, through a screen-sharing operation, send the screen data corresponding to the shared screen to the first terminal, which then displays the corresponding picture in the corresponding virtual display model.
The anchor picture data corresponding to the first channel may be data corresponding to an anchor picture acquired by a camera of the second terminal. The picture data of the first target meeting place captured based on any angle may be panoramic picture data of the first target meeting place, or picture data of the first target meeting place with a specified angle.
The method provides multiple video stream feeds: screen data shared by the second terminal, anchor picture data corresponding to the first channel, and picture data of the first target meeting place captured from any angle. The user can thus watch several different pictures that do not occlude one another, which enriches the live-broadcast effect of the meeting place.
In order to further improve the display effect in the meeting place, in one possible implementation the virtual display models include a main virtual display model, which displays the picture of the screen data shared by the second terminal, and usually also several secondary virtual display models, which display the picture of the anchor picture data corresponding to the first channel and the picture of the first target meeting place captured from any angle.
Displaying the picture of the screen data shared by the second terminal in the main virtual display model focuses the user's attention on that picture and lets the user distinguish the main picture from the auxiliary pictures.
In addition, the picture contents displayed by the virtual display models in the first target meeting place differ from one another. For example, the first virtual display model shows the picture of the screen data shared by the second terminal, the second shows the picture of the anchor picture data corresponding to the first channel, and the third shows the picture of the first target meeting place captured from any angle. Generally, the first virtual display model, showing the shared screen, is placed between the second and third virtual display models, making it convenient for the user to view the primary screen-sharing content.
By displaying different picture contents through different virtual display models, a plurality of virtual display models and the pictures in them can be presented on the graphical user interface without occluding one another, different picture effects can be shown according to user requirements, and user experience is improved.
In order to further improve the display effect of each virtual display model in the meeting place, in one possible implementation manner, the two or more virtual display models are arranged in an arc in the first target meeting place. For example, with three virtual display models, the main virtual display model is located in the middle of the arc and the two auxiliary virtual display models are located on its two sides. Arranging the virtual display models in an arc improves the display effect of the pictures and allows virtual characters at different positions within the arc to watch the pictures displayed by the virtual display models from different angles.
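One way such an arc layout could be computed is sketched below (a hypothetical helper, assuming a 2D scene plane with the audience facing the positive z axis; the radius and span values are arbitrary):

```python
import math

def arc_positions(n, radius=10.0, span_deg=120.0):
    """Place n display models evenly on an arc facing the audience.
    Returns (x, z) scene coordinates; the middle slot is the arc's apex."""
    if n == 1:
        return [(0.0, radius)]
    step = math.radians(span_deg) / (n - 1)
    start = math.radians(90 + span_deg / 2)  # angle of the leftmost slot
    return [(radius * math.cos(start - i * step),
             radius * math.sin(start - i * step)) for i in range(n)]

pos = arc_positions(3)
# the middle model sits at the apex of the arc, directly ahead of the audience
```

With three models, the middle slot lands at (0, radius), which is where the main virtual display model would be placed per the paragraph above.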
In addition, in order to further improve the user's experience and immersion, the main virtual display model is the virtual display model located at the center of all the virtual display models in the first target meeting place.
It should be noted that, in this embodiment, the authority of each virtual character in the first target meeting place provided by the virtual scene may be set; the virtual character having the authority to share a screen is usually an instructor or an anchor. In addition, the channel information of the first channel of the first target meeting place and the user identifier of the virtual character with sharing authority are set in advance, and the terminal sends data streams according to the channel information, or acquires data streams through the audio and video component according to the channel information and the user identifier.
Specifically, the anchor of the second terminal may click a screen sharing control on the graphical user interface and select the content to share, such as a presentation or the picture displayed on the current screen. The shared screen data of the second terminal is then output through the audio and video component; the server side distributes the identifier of the instructor's virtual character to all terminals, the terminals corresponding to the audience subscribe to the instructor's identifier in the audio and video component, and the instructor's shared picture is rendered into the virtual display model of the first target meeting place on each terminal. In this way the audience can see the instructor's picture on the large screen of the first target meeting place and can also hear the instructor's explanation.
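The distribute/publish/subscribe flow above can be sketched with a toy in-memory hub standing in for the audio and video component (all identifiers are hypothetical; a real system would use a streaming SDK rather than a dictionary):

```python
class AVComponent:
    """Toy publish/subscribe hub standing in for the audio-video component."""
    def __init__(self):
        self.frames = {}   # (channel, uid) -> latest published frame

    def publish(self, channel, uid, frame):
        self.frames[(channel, uid)] = frame

    def subscribe(self, channel, uid):
        return self.frames.get((channel, uid))

av = AVComponent()
# The server distributes the channel info and the instructor's identifier in advance.
CHANNEL, INSTRUCTOR_UID = "meeting_place_1", "instructor_42"
# The second terminal (instructor) publishes its shared screen.
av.publish(CHANNEL, INSTRUCTOR_UID, "slide_003.png")
# Each audience terminal subscribes with the distributed identifiers and
# renders the frame into its local copy of the first virtual display model.
frame = av.subscribe(CHANNEL, INSTRUCTOR_UID)
```

Because every audience terminal holds the same channel/identifier pair, each one independently pulls and renders the same shared picture.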
For example, in the schematic diagram of the graphical user interface shown in fig. 4, which is provided by the second terminal, the first virtual display model of the first target meeting place displays a picture corresponding to the screen data shared by the second terminal, such as a presentation, and the virtual character of the second terminal explains the picture. The graphical user interface further includes a stop control; the anchor of the second terminal can click the stop control to stop sharing the screen.
In addition, in the first target meeting place, other virtual display models are usually provided on both sides of the first virtual display model, respectively displaying the camera picture of the instructor or a scene picture of the virtual scene.
Specifically, the server side distributes the channel information of the multi-channel data streams and the unique identifier of the anchor or instructor in the channel to all virtual characters in the current scene; the anchor or instructor outputs data streams through the audio and video component, and the audience subscribes to the data streams designated by the corresponding anchor or instructor identifiers and renders them in the virtual display models of the scene for viewing.
In the foregoing manner, a user whose virtual character has screen sharing authority in the first target meeting place can share the screen of a terminal (the second terminal) in the first target meeting place, so that the picture corresponding to the screen data shared by the second terminal is displayed in the main virtual display model of the first target meeting place. This enriches the applications of the virtual scene, allows different authorities to be provided for each virtual character, and further improves the user's experience.
In addition, when a user joins the first target meeting place, the server side directly distributes the channel information to all users; that is, one first target meeting place has only one piece of channel information, generated in the background. The channel information does not change as audiences or anchors change or as time passes, so it needs to be sent only once and can then be used continuously; it only changes when the meeting place itself changes. This saves part of the traffic consumption and the front-end/back-end communication consumption.
In this embodiment, in order to further enhance the immersion of the user in viewing the picture and enhance the interest of viewing the picture, the method further includes:
(1) configuring at least one chat scene area in a first target meeting place, and configuring corresponding chat room information for the chat scene area;
(2) responding to the movement operation aiming at the virtual character, and controlling the virtual character to move in the first target meeting place;
(3) when the virtual character moves to a target chat scene area preset in a first target meeting place, obtaining chat room information of the target chat scene area; and starting a voice call function for the chat object corresponding to the virtual character, and adding the chat object into the chat room corresponding to the target chat scene area according to the chat room information, so that the chat object in the chat room can chat.
The chat scene area may be a designated area in a virtual scene, such as a designated table or chair. Each chat scene area is configured with corresponding chat room information, for example, a first chat scene area in the chat scene area may be configured with corresponding chat room information as a first room, and a second chat scene area in the chat scene area may be configured with corresponding chat room information as a second room. Specifically, when a user controls a virtual character to move in a first target meeting place, if the virtual character is controlled to enter one target chat scene area preset in the first target meeting place, the chat operation can be triggered, and meanwhile, a first terminal can acquire chat room information of the target chat scene area; in addition, a virtual seat can be arranged in the chat area, when a virtual character sits in the virtual seat in the chat area, the chat operation can be triggered, and meanwhile, the first terminal can acquire the chat room information of the target chat scene area.
Then, a chat function is established between the virtual character and the other virtual characters in the chat scene area. Specifically, a voice call function is started for the chat object corresponding to the virtual character; that is, a voice chat is established between the virtual character and the other virtual characters in the chat scene area. In order that all virtual characters in the chat scene area can chat smoothly without being affected by other virtual characters, the first terminal can add the other virtual characters of the chat scene area into the chat room corresponding to the target chat scene area according to the chat room information, so that finally the chat objects in the chat room can talk with one another.
Of course, a text chat between the virtual character and the other virtual characters in the chat scene area may also be established. In the case of voice chat, the avatar information of the virtual character and the other virtual characters can be displayed; in the case of text chat, the text information sent by the virtual character and the other virtual characters can be displayed.
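The area-triggered room joining in steps (1)-(3) can be illustrated with a minimal sketch (the area bounds, room names, and the `VoiceClient` interface are all hypothetical stand-ins for the chat room information and voice call function):

```python
class VoiceClient:
    """Toy stand-in for the voice call function."""
    def __init__(self):
        self.log = []
    def join(self, room):
        self.log.append(("join", room))
    def leave(self, room):
        self.log.append(("leave", room))

def point_in_area(pos, area):
    """Axis-aligned bounds check: area = (x0, z0, x1, z1)."""
    x, z = pos
    x0, z0, x1, z1 = area
    return x0 <= x <= x1 and z0 <= z <= z1

# Hypothetical chat scene areas, each configured with its chat room info.
CHAT_AREAS = {
    "first_room": (0, 0, 4, 4),     # e.g. a designated table near the entrance
    "second_room": (10, 0, 14, 4),
}

def on_character_moved(pos, voice_client, joined):
    """Join the chat room whose area contains the character; leave on exit."""
    for room, area in CHAT_AREAS.items():
        if point_in_area(pos, area):
            if joined != room:
                voice_client.join(room)  # start the voice call for this room
            return room
    if joined:
        voice_client.leave(joined)
    return None

vc = VoiceClient()
room = on_character_moved((2, 2), vc, joined=None)   # entering the first area
```

Each movement update re-checks the areas, so walking out of the target chat scene area ends the call for that room.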
In the above manner, while viewing the picture in the first target meeting place, the virtual character can discuss the picture content with other virtual characters through voice chat; compared with the common way of watching a live broadcast or a lecturer's lecture through a 2D interface, this provides more immersion and satisfaction.
In addition, the virtual scene may generally provide a plurality of meeting places, and the meeting content of each meeting place is different, so that the virtual character of the first terminal may switch between different meeting places to see different screen contents. Specifically, when the virtual character reaches a second target meeting place in the virtual scene, a picture of a corresponding target data stream is displayed through a virtual display model in the second target meeting place, wherein the target data stream is determined according to a second channel corresponding to the second target meeting place.
Because each meeting place is preset with corresponding channel information, different meeting places can determine, according to that preset channel information, the data streams to be acquired and the pictures to be displayed. In actual implementation, a second target meeting place to switch to can be selected through a meeting place switching control in the graphical user interface provided by the first terminal, so that the virtual character reaches the second target meeting place in the virtual scene. Alternatively, the virtual character can be controlled to move in the first target meeting place; when it moves through the exit area of the first target meeting place and the entrance area of the second target meeting place and continues to the entrance of the second target meeting place, the virtual character reaches the second target meeting place in the virtual scene.
When the virtual character reaches the second target meeting place in the virtual scene, the process of displaying the picture is the same as the process when the virtual character enters the first target meeting place, and the description is omitted here.
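A minimal sketch of the per-meeting-place channel lookup described above (the table contents and the `subscribe` callable are hypothetical; the point is only that each meeting place resolves to its own preset channel):

```python
# Hypothetical per-meeting-place channel table, configured in advance.
MEETING_PLACE_CHANNELS = {
    "meeting_place_1": "channel_1",
    "meeting_place_2": "channel_2",
}

def on_enter_meeting_place(place, subscribe):
    """When the character reaches a meeting place, pull the data streams of
    that place's preset channel and render them, exactly as on first entry."""
    channel = MEETING_PLACE_CHANNELS[place]
    return subscribe(channel)

streams = on_enter_meeting_place(
    "meeting_place_2",
    subscribe=lambda ch: [f"{ch}:screen", f"{ch}:camera"],
)
```

Switching meeting places is thus just a change of lookup key; the display flow itself is unchanged.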
In the above manner, the virtual scene can provide a plurality of target meeting places for holding different types of conferences, live broadcasts and the like, so that the immersion sense of the user in the virtual scene is further improved, and the application of the virtual scene is further enriched.
To further improve the immersion of the user in viewing the pictures while improving the user's concentration on the graphical user interface, the method further includes: responding to an interesting interactive operation aiming at the virtual character, and controlling the virtual character to execute an interesting action corresponding to the interesting interactive operation; wherein the interesting action is used to attract the attention of the user of the first terminal to the graphical user interface.
The interesting action may be a specified body action, such as a greeting or a dance. The virtual character may also perform a body action together with a nearby virtual character, such as a two-person dance; of course, the virtual character may also be controlled to perform a body action together with several other virtual characters. Specifically, an action control in the graphical user interface may be clicked and the action that the virtual character is desired to execute selected, so that the virtual character is controlled to execute the interesting action corresponding to the interesting interactive operation, or the virtual character and at least one other virtual character are controlled together to execute the interesting action corresponding to the interesting interactive operation.
In the mode, the user can control the virtual character to watch the picture displayed in the virtual display model in the virtual scene and perform some interesting interactive actions in the virtual scene, so that the immersion feeling of the user in watching the picture is improved, the concentration of the user on a graphical user interface is improved, and the effect of virtual activities is improved.
A lecturer's lecture or an anchor's live broadcast generally involves multiple data streams. The main picture may be the picture corresponding to the screen data of the shared screen, displayed by the first (main) virtual display model, and generally includes a presentation, a live broadcast picture, and the like. In this embodiment, a plurality of virtual display models are provided, so the auxiliary picture may be the camera picture of the lecturer or anchor displayed in the second virtual display model, or a scene picture of the virtual scene.
In a possible implementation manner, the second data stream is data collected by a camera at the second terminal side. And the picture corresponding to the data acquired by the camera at the second terminal side is the user picture of the second terminal.
In another possible implementation, the second data stream is scene data corresponding to at least one virtual camera with a preset fixed position in the virtual scene. The at least one fixed-position virtual camera can be set in advance as needed, usually at several key positions of the virtual scene. The picture corresponding to the scene data of the virtual camera is a scene picture of the virtual scene. When a screen sharing operation is performed, the scene picture corresponding to the scene data of the virtual camera can be displayed in the second virtual display model.
Specifically, the virtual scene may include a plurality of second virtual display models, used respectively to display the second picture corresponding to the data acquired by the camera on the second terminal side and the second picture corresponding to the scene data of the virtual camera. If the virtual scene includes only one second virtual display model, it can display either the second picture corresponding to the data acquired by the camera on the second terminal side or the second picture corresponding to the scene data of the virtual camera.
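The two possible sources of the second data stream can be sketched as follows (a hypothetical helper; the frame values are placeholders, and a real implementation would pull live frames from the camera and the scene renderer):

```python
def second_stream_frames(camera_frame=None, virtual_cams=()):
    """Collect frames for the secondary display models: the presenter's real
    camera (if publishing) plus any fixed virtual cameras in the scene."""
    frames = []
    if camera_frame is not None:
        frames.append(("presenter_camera", camera_frame))
    for cam_id, scene_frame in virtual_cams:
        frames.append((cam_id, scene_frame))
    return frames

frames = second_stream_frames(
    camera_frame="instructor_webcam.jpg",
    virtual_cams=[("stage_cam", "stage_view.jpg")],
)
```

With two second virtual display models, one entry would be rendered to each; with one model, either source could be chosen.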
In the above manner, the second data stream has multiple possibilities: it can be data collected by the camera on the second terminal side, or scene data corresponding to at least one virtual camera with a preset fixed position in the virtual scene. A plurality of virtual display models are therefore arranged in the virtual scene, through which the pictures corresponding to the multiple data streams can be displayed; the display effect is richer, the user can choose the content to watch according to need, and the virtual display models neither affect nor occlude one another.
In order to further improve the immersion of the user in the virtual scene, the method further comprises: responding to the triggering operation aiming at the speaking application control in the graphical user interface, generating a speaking application of the virtual role, and sending the speaking application to the third terminal; responding to the message of agreeing to speak fed back by the third terminal, and displaying a speaking control in the graphical user interface; responding to the triggering operation aiming at the speaking control, acquiring the speaking information of the virtual roles, and sending the speaking information to the terminals corresponding to all the virtual roles in the virtual scene.
The user corresponding to the third terminal is a host, and the host can control the speaking applications of the audience virtual characters in the virtual scene. Generally, the graphical user interface provided by the first terminal includes a speech application control. If a user wants to speak to everyone in the virtual scene, the user can click the speech application control; the first terminal then generates a speech application of the virtual character and sends it to the third terminal. The user of the third terminal can choose whether to allow the speech; when a speech approval message fed back by the third terminal is received, a speech control is displayed in the graphical user interface to prompt that the user corresponding to the virtual character may speak. The user can click the speech control to speak, or speak directly; the first terminal acquires the speech information of the virtual character and sends it to the terminals corresponding to all virtual characters in the virtual scene, completing the speaking process. This corresponds to reality, such as a press conference: in the question session, audience members can raise their hands to apply to speak; when the host allows a certain audience member to speak, a microphone is handed to that member, the member speaks, and everyone on site can hear the speech.
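The apply/approve/speak handshake above can be modeled as a tiny state machine (a sketch under assumed event names; the real protocol between the first and third terminals is not specified in this detail):

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    REQUESTED = auto()   # application sent to the host (third terminal)
    SPEAKING = auto()    # host approved; the speak control is shown

def next_state(state, event):
    """Transition table for the raise-hand / approve / speak flow."""
    transitions = {
        (State.IDLE, "apply"): State.REQUESTED,
        (State.REQUESTED, "approved"): State.SPEAKING,
        (State.REQUESTED, "denied"): State.IDLE,
        (State.SPEAKING, "finished"): State.IDLE,
    }
    return transitions.get((state, event), state)

s = State.IDLE
for event in ["apply", "approved"]:
    s = next_state(s, event)
# s is now SPEAKING: the speech is broadcast to all terminals in the scene
```

Unlisted (state, event) pairs leave the state unchanged, so stray events cannot grant speaking rights without host approval.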
In the above manner, the virtual character in the first target meeting place can apply for speaking in the first target meeting place through the speaking application, and the immersion of the user in the virtual scene is further improved.
The method further comprises the following steps: and responding to the content switching operation aiming at the first virtual display model, acquiring a third data stream corresponding to the content switching operation, and updating the display picture of the first virtual display model into a picture corresponding to the third data stream.
In the first target meeting place, the audience cannot refuse the screen shared by the instructor, and if a problem occurs with the instructor's sharing, the host can handle it accordingly. The audience is also unable to change the on-screen display content. However, if the instructor stops sharing the screen and another instructor's shared screen takes over, the display content in the virtual display model in the first target meeting place needs to be updated; this may also be understood as a switching of scenes, or as a change of lecturer in the first target meeting place.
Specifically, after the second terminal stops sharing the screen, the fourth terminal starts to share its screen; a third data stream corresponding to the fourth terminal's shared screen is acquired, and the display picture of the first virtual display model is switched to the picture corresponding to the third data stream. In addition, the data corresponding to the camera on the fourth terminal side, or the scene data corresponding to a fixed virtual camera in the first target meeting place, can be acquired; the display picture of one second virtual display model is then switched to the picture corresponding to the data of the camera on the fourth terminal side, while the picture corresponding to the scene data of the fixed virtual camera in the first target meeting place is displayed on the other second virtual display model. Alternatively, after the second terminal stops sharing the screen, if it replaces the content and shares again, a third data stream corresponding to the second terminal's shared screen continues to be acquired, and the display picture of the first virtual display model is switched to the picture corresponding to the third data stream; the data corresponding to the camera on the second terminal side, or the scene data corresponding to the fixed virtual camera in the virtual scene, then continues to be acquired.
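The content-switch itself reduces to rebinding a display model to a new stream, which can be sketched as follows (hypothetical names; a real renderer would also tear down and re-subscribe the underlying media stream):

```python
class DisplayModelView:
    """Holds the stream currently rendered by a virtual display model."""
    def __init__(self, stream=None):
        self.stream = stream

    def switch(self, new_stream):
        # Content-switch: stop rendering the old stream, render the new one.
        self.stream = new_stream
        return self.stream

first_model = DisplayModelView("second_terminal_screen")
# The second terminal stops sharing; the fourth terminal starts sharing,
# producing a third data stream that the first model now renders instead.
first_model.switch("fourth_terminal_screen")
```

The same rebinding applies to the second virtual display models when the camera or scene-capture source changes.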
Typically, the size of the first virtual display model is larger than that of the second virtual display model. Similarly, the size of the main virtual display model may be larger than that of the auxiliary virtual display models.
In the above manner, the picture displayed by the virtual display model in the first target meeting place can be updated to the corresponding picture for different lecturers, live broadcasts, and the like, so that the first target meeting place is closer to reality, the applications of the virtual scene are enriched, and the user's immersion in the virtual scene is further improved.
Corresponding to the above method embodiment, an embodiment of the present invention provides an apparatus for displaying a picture, where the apparatus provides a graphical user interface through a first terminal, and the graphical user interface displays a virtual scene, as shown in fig. 5, and the apparatus includes:
a control module 51 for controlling the virtual character to move in the virtual scene in response to a moving operation for the virtual character in the virtual scene; the virtual role is a role controlled by the first terminal;
a meeting place module 52, configured to provide a first target meeting place through a virtual scene, where the first target meeting place includes more than two virtual display models;
the display module 53 is configured to display a first picture corresponding to a first data stream through a first virtual display model in the virtual display model and display a second picture corresponding to a second data stream through a second virtual display model in the virtual display model when the virtual character is located in the first target meeting place, where the first data stream and the second data stream are determined according to a first channel corresponding to the first target meeting place.
The invention provides a picture display device, which responds to a movement operation for a virtual character in a virtual scene and controls the virtual character to move in the virtual scene; the first target meeting place provided by the virtual scene includes more than two virtual display models; and when the virtual character is located in the first target meeting place, a first picture corresponding to the first data stream is displayed through the first virtual display model, and a second picture corresponding to the second data stream is displayed through the second virtual display model. In this manner, a user can control the virtual character to move in the meeting place of the virtual scene; the meeting place provides a plurality of virtual display models, and different pictures can be displayed in different virtual display models. The virtual display models and the pictures in them can all be presented on the graphical user interface without occluding one another, and different picture effects can be shown according to user requirements. The user can also control the virtual character to move in the virtual scene and browse the contents displayed by the virtual display models from different angles and distances, which enhances immersion and the flexibility of participation and improves the user's experience.
Further, the first data stream or the second data stream includes any one of the following data: screen data shared by the second terminal; anchor picture data corresponding to a first channel; and capturing a picture of the first target meeting place based on any angle.
Further, the screen data shared by the second terminal is displayed through a main virtual display model in the virtual display models.
Furthermore, the picture contents displayed by the virtual display models in the first target meeting place differ from one another.
Further, the two or more virtual display models are arranged in an arc shape in the first target meeting place.
Further, the main virtual display model is the virtual display model located at the center position of all the virtual display models in the first target meeting place.
Further, the apparatus further includes a chat module, configured to: configuring at least one chat scene area in a first target meeting place, and configuring corresponding chat room information for the chat scene area; responding to the movement operation aiming at the virtual character, and controlling the virtual character to move in the first target meeting place; when the virtual character moves to a target chat scene area preset in a first target meeting place, obtaining chat room information of the target chat scene area; and starting a voice call function for the chat object corresponding to the virtual character, and adding the chat object into the chat room corresponding to the target chat scene area according to the chat room information, so that the chat object in the chat room can chat.
Further, the apparatus further includes a meeting place switching module, configured to: and when the virtual character reaches a second target meeting place in the virtual scene, displaying a picture of a corresponding target data stream through a virtual display model in the second target meeting place, wherein the target data stream is determined according to a second channel corresponding to the second target meeting place.
Further, the apparatus further includes a second control module, configured to: responding to interesting interactive operation aiming at the first virtual character, and controlling the first virtual character to execute interesting action corresponding to the interesting interactive operation; wherein the interesting action is for attracting the user of the first terminal to pay attention to the graphical user interface.
Further, the apparatus further includes a speaking module configured to: responding to the triggering operation aiming at the speaking application control in the graphical user interface, generating a speaking application of the virtual role, and sending the speaking application to the third terminal; responding to the message of agreeing to speak fed back by the third terminal, and displaying a speaking control in the graphical user interface; responding to the triggering operation aiming at the speaking control, acquiring the speaking information of the virtual roles, and sending the speaking information to the terminals corresponding to all the virtual roles in the virtual scene.
Further, the apparatus further includes an update module, configured to: and responding to the sharing switching operation aiming at the first virtual display model, acquiring a third data stream corresponding to the sharing switching operation, and updating the display picture of the first virtual display model into a picture corresponding to the third data stream.
The image display device provided by the embodiment of the invention has the same technical characteristics as the image display method provided by the embodiment, so that the same technical problems can be solved, and the same technical effects can be achieved.
The embodiment also provides an electronic device, which comprises a processor and a memory, wherein the memory stores machine executable instructions capable of being executed by the processor, and the processor executes the machine executable instructions to realize the screen display method. The electronic device may be a server or a terminal device.
Referring to fig. 6, the electronic device includes a processor 100 and a memory 101, the memory 101 stores machine executable instructions capable of being executed by the processor 100, and the processor 100 executes the machine executable instructions to implement the screen display method.
Further, the electronic device shown in fig. 6 further includes a bus 102 and a communication interface 103, and the processor 100, the communication interface 103, and the memory 101 are connected through the bus 102.
The Memory 101 may include a high-speed Random Access Memory (RAM) and may also include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. The communication connection between the network element of the system and at least one other network element is realized through at least one communication interface 103 (which may be wired or wireless), and the internet, a wide area network, a local network, a metropolitan area network, and the like can be used. The bus 102 may be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 6, but that does not indicate only one bus or one type of bus.
The processor 100 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 100. The processor 100 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or a register. The storage medium is located in the memory 101, and the processor 100 reads the information in the memory 101 and completes the steps of the method of the foregoing embodiments in combination with its hardware.
The present embodiments also provide a machine-readable storage medium storing machine-executable instructions which, when invoked and executed by a processor, cause the processor to implement the above-described method of screen display.
The computer program product of the picture display method and apparatus provided in the embodiments of the present invention includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the methods described in the foregoing method embodiments, and for specific implementations reference may be made to the method embodiments, which are not repeated here.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present invention can be understood in specific cases for those skilled in the art.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. indicate orientations or positional relationships based on those shown in the drawings, and are used only for convenience and simplicity of description; they do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation, and therefore should not be construed as limiting the present invention. Furthermore, the terms "first", "second", and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the foregoing embodiments are merely illustrative of the technical solutions of the present invention, and not restrictive; although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features equivalently replaced, by any person skilled in the art within the technical scope of the present disclosure; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present invention and shall be covered by them. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (14)

1. A method for displaying a picture, wherein a graphical user interface is provided through a first terminal, the graphical user interface displaying a virtual scene, the method comprising:
controlling the virtual character to move in the virtual scene in response to a movement operation for the virtual character in the virtual scene; wherein the virtual character is a character controlled by the first terminal;
providing a first target meeting place through the virtual scene, wherein the first target meeting place comprises more than two virtual display models;
when the virtual character is located in the first target meeting place, displaying a first picture corresponding to a first data stream through a first virtual display model among the virtual display models, and displaying a second picture corresponding to a second data stream through a second virtual display model among the virtual display models, wherein the first data stream and the second data stream are determined according to a first channel corresponding to the first target meeting place.
2. The method of claim 1, wherein the first data stream or the second data stream comprises any of:
screen data shared by the second terminal;
anchor picture data corresponding to the first channel;
picture data of the first target meeting place captured from any angle.
3. The method according to claim 2, wherein a picture corresponding to the screen data shared by the second terminal is displayed through a main virtual display model in the virtual display models.
4. The method of claim 1, wherein the virtual display models in the first target meeting place display different picture contents.
5. The method of claim 1, wherein the two or more virtual display models are arranged in an arc in the first target venue.
6. The method of claim 1, wherein a main virtual display model among the virtual display models is the virtual display model located at the center of all virtual display models in the first target meeting place.
7. The method of claim 1, further comprising:
configuring at least one chat scene area in the first target meeting place, and configuring corresponding chat room information for the chat scene area;
controlling the virtual character to move in the first target meeting place in response to the movement operation for the virtual character;
when the virtual character moves into a target chat scene area preset in the first target meeting place, obtaining the chat room information of the target chat scene area;
and enabling a voice call function for the chat object corresponding to the virtual character, and adding the chat object to the chat room corresponding to the target chat scene area according to the chat room information, so that the chat objects in the chat room can chat with one another.
8. The method of claim 1, further comprising:
when the virtual character reaches a second target meeting place in the virtual scene, displaying, through a virtual display model in the second target meeting place, a picture corresponding to a target data stream, wherein the target data stream is determined according to a second channel corresponding to the second target meeting place.
9. The method of claim 1, further comprising:
in response to an interesting interaction operation for the virtual character, controlling the virtual character to perform an interesting action corresponding to the interesting interaction operation; wherein the interesting action is used to attract the user of the first terminal to pay attention to the graphical user interface.
10. The method of claim 1, further comprising:
in response to a trigger operation on a speech application control in the graphical user interface, generating a speech application for the virtual character and sending the speech application to a third terminal;
in response to a speaking-approval message fed back by the third terminal, displaying a speaking control in the graphical user interface;
in response to a trigger operation on the speaking control, acquiring speaking information of the virtual character and sending the speaking information to the terminals corresponding to all virtual characters in the virtual scene.
11. The method of claim 1, further comprising:
in response to a content switching operation for the first virtual display model, acquiring a third data stream corresponding to the content switching operation, and updating the picture displayed by the first virtual display model to a picture corresponding to the third data stream.
12. An apparatus for displaying a picture, wherein a graphical user interface is provided through a first terminal, the graphical user interface displaying a virtual scene, the apparatus comprising:
the control module is used for responding to the movement operation of the virtual character in the virtual scene and controlling the virtual character to move in the virtual scene; wherein the virtual character is a character controlled by the first terminal;
the meeting place module is used for providing a first target meeting place through the virtual scene, and the first target meeting place comprises more than two virtual display models;
and the display module is used for displaying a first picture corresponding to a first data stream through a first virtual display model in the virtual display models and displaying a second picture corresponding to a second data stream through a second virtual display model in the virtual display models when the virtual character is positioned in the first target meeting place, wherein the first data stream and the second data stream are determined according to a first channel corresponding to the first target meeting place.
13. An electronic device comprising a processor and a memory, the memory storing machine-executable instructions executable by the processor, the processor executing the machine-executable instructions to implement the picture display method according to any one of claims 1-11.
14. A machine-readable storage medium storing machine-executable instructions which, when invoked and executed by a processor, cause the processor to implement the picture display method according to any one of claims 1-11.
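The core of claim 1 — the meeting place's channel determines which data streams are shown, and each stream is bound to one virtual display model when the character enters — can be sketched in a few lines of Python. This is an illustrative sketch only, not the patent's implementation; all names (`DisplayModel`, `MeetingPlace`, `streams_for_channel`, the stream identifiers) are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class DisplayModel:
    """A virtual display model placed in the meeting place scene."""
    name: str
    current_stream: Optional[str] = None

    def show(self, stream_id: str) -> None:
        # A real client would decode and render the stream on the model's
        # surface; here we only record which stream is bound to the model.
        self.current_stream = stream_id


@dataclass
class MeetingPlace:
    """A target meeting place associated with one channel."""
    channel_id: str
    displays: List[DisplayModel] = field(default_factory=list)


def streams_for_channel(channel_id: str) -> List[str]:
    # Hypothetical lookup: the channel determines the available data streams
    # (e.g. a shared screen, a host picture, a venue capture).
    return [f"{channel_id}/screen-share", f"{channel_id}/host-view"]


def on_character_enters(place: MeetingPlace) -> None:
    # When the virtual character is located in the meeting place, bind one
    # channel-determined stream to each virtual display model (claim 1).
    for display, stream in zip(place.displays, streams_for_channel(place.channel_id)):
        display.show(stream)


venue = MeetingPlace("channel-1", [DisplayModel("main"), DisplayModel("side")])
on_character_enters(venue)
print(venue.displays[0].current_stream)  # channel-1/screen-share
print(venue.displays[1].current_stream)  # channel-1/host-view
```

Under this reading, switching content (claim 11) or entering a second meeting place (claim 8) would simply rebind the display models to a different stream list.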
CN202111350242.XA 2021-11-15 2021-11-15 Picture display method and device and electronic equipment Pending CN113996062A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111350242.XA CN113996062A (en) 2021-11-15 2021-11-15 Picture display method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111350242.XA CN113996062A (en) 2021-11-15 2021-11-15 Picture display method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN113996062A true CN113996062A (en) 2022-02-01

Family

ID=79929083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111350242.XA Pending CN113996062A (en) 2021-11-15 2021-11-15 Picture display method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113996062A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116112699A (en) * 2022-12-13 2023-05-12 北京奇艺世纪科技有限公司 Live broadcast method and device, electronic equipment and readable storage medium


Similar Documents

Publication Publication Date Title
CN109011574B (en) Game interface display method, system, terminal and device based on live broadcast
Prins et al. TogetherVR: A framework for photorealistic shared media experiences in 360-degree VR
CN112235530B (en) Method and device for realizing teleconference, electronic device and storage medium
CN110349456B (en) Intelligent control system, remote control terminal and classroom terminal of interactive classroom
CN104468623A (en) Information display method based on online live broadcast, related device and related system
CN109195003B (en) Interaction method, system, terminal and device for playing game based on live broadcast
JP2004503888A (en) Interactive virtual reality performance theater entertainment system
US11184362B1 (en) Securing private audio in a virtual conference, and applications thereof
WO2010041954A1 (en) Method, device and computer program for processing images during video conferencing
CN113518232B (en) Video display method, device, equipment and storage medium
US20170171509A1 (en) Method and electronic apparatus for realizing two-person simultaneous live video
CN113301363B (en) Live broadcast information processing method and device and electronic equipment
US11743430B2 (en) Providing awareness of who can hear audio in a virtual conference, and applications thereof
JP2019204244A (en) System for animated cartoon distribution, method, and program
CN112770135A (en) Live broadcast-based content explanation method and device, electronic equipment and storage medium
CN111641849B (en) Universal image receiver
Kachach et al. The owl: Immersive telepresence communication for hybrid conferences
CN109040851B (en) Delay processing method, system, server and computer readable storage medium for playing game based on live broadcast
CN113996062A (en) Picture display method and device and electronic equipment
Ursu et al. Orchestration: Tv-like mixing grammars applied to video-communication for social groups
Wong et al. Shared-space: Spatial audio and video layouts for videoconferencing in a virtual room
US20240029339A1 (en) Multi-screen presentation in a virtual videoconferencing environment
WO2022253856A2 (en) Virtual interaction system
KR20220126660A (en) Method and System for Providing Low-latency Network for Metaverse Education Platform with AR Face-Tracking
CN114760520A (en) Live small and medium video shooting interaction method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination