CN117729347A - Live broadcast method, live broadcast interaction method, device, electronic equipment and medium


Info

Publication number
CN117729347A
Authority
CN
China
Prior art keywords
live
target
live broadcast
virtual
target area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311773982.3A
Other languages
Chinese (zh)
Inventor
许竹君
杨学强
杨倩倩
李培林
冯美丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202311773982.3A
Publication of CN117729347A
Legal status: Pending

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 - Reducing energy consumption in communication networks
    • Y02D30/70 - Reducing energy consumption in communication networks in wireless communication networks

Abstract

Embodiments of the disclosure provide a live broadcast method, a live broadcast interaction method, a device, an electronic device and a medium. The live broadcast method is applied to a first user side and includes: displaying a first virtual space, where the first virtual space includes a first virtual object corresponding to a first user; in response to an area selection operation for the first virtual space, determining a target area in the first virtual space corresponding to the area selection operation; and in response to a live broadcast operation for the target area, live broadcasting the spatial picture within the target area. By determining a target area in the first virtual space, the method enables the spatial picture within that area to be live broadcast, which enriches the presentation of spatial pictures.

Description

Live broadcast method, live broadcast interaction method, device, electronic equipment and medium
Technical Field
Embodiments of the disclosure relate to the field of computer technology, and in particular to a live broadcast method, a live broadcast interaction method, a device, an electronic device and a medium.
Background
Network live streaming is becoming increasingly popular, and users can live broadcast the pictures or objects they want to show; for example, pictures in a virtual space can be live broadcast over the network for other users to watch.
However, existing live broadcast methods mostly broadcast the spatial picture full-screen, so the live picture is monotonous.
Disclosure of Invention
Embodiments of the disclosure provide a live broadcast method, a live broadcast interaction method, a device, an electronic device and a medium, so as to enrich the presentation of spatial pictures.
In a first aspect, an embodiment of the present disclosure provides a live broadcast method, applied to a first user side, where the method includes:
displaying a first virtual space, wherein the first virtual space comprises a first virtual object corresponding to a first user;
in response to an area selection operation for the first virtual space, determining a target area in the first virtual space corresponding to the area selection operation;
and in response to a live broadcast operation for the target area, live broadcasting the spatial picture within the target area.
In a second aspect, an embodiment of the present disclosure further provides a live interaction method, applied to a second user side, where the method includes:
displaying a live broadcast picture of a first user, where the live broadcast picture is a spatial picture within a target area, and the target area is located in a first virtual space;
and in response to a position switching operation, controlling a second virtual object corresponding to the second user to enter the first virtual space.
In a third aspect, an embodiment of the present disclosure further provides a live broadcast apparatus configured on a first user side, where the apparatus includes:
the display module is used for displaying a first virtual space, wherein the first virtual space comprises a first virtual object corresponding to a first user;
a determining module, configured to determine, in response to a region selection operation for the first virtual space, a target region in the first virtual space corresponding to the region selection operation;
and the live broadcasting module is used for responding to the live broadcasting operation aiming at the target area and broadcasting the space picture in the target area.
In a fourth aspect, an embodiment of the present disclosure further provides a live interaction device configured on a second user side, where the device includes:
the display module is used for displaying a live broadcast picture of a first user, wherein the live broadcast picture is a space picture in a target area, and the target area is positioned in a first virtual space;
and the control module is used for responding to the position switching operation and controlling a second virtual object corresponding to the second user to enter the first virtual space.
In a fifth aspect, embodiments of the present disclosure further provide an electronic device, including:
one or more processors;
a memory for storing one or more programs,
where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the live broadcast method or the live broadcast interaction method described in the embodiments of the present disclosure.
In a sixth aspect, the embodiments of the present disclosure further provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a live broadcast method or a live broadcast interaction method according to the embodiments of the present disclosure.
Embodiments of the disclosure provide a live broadcast method, a live broadcast interaction method, a device, an electronic device and a medium. The live broadcast method is applied to a first user side and includes: displaying a first virtual space, where the first virtual space includes a first virtual object corresponding to a first user; in response to an area selection operation for the first virtual space, determining a target area in the first virtual space corresponding to the area selection operation; and in response to a live broadcast operation for the target area, live broadcasting the spatial picture within the target area. With this technical solution, determining a target area in the first virtual space enables the spatial picture within that area to be live broadcast, which enriches the presentation of spatial pictures.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a schematic flow chart of a live broadcast method according to an embodiment of the disclosure;
fig. 2 is a schematic flow chart of another live broadcast method according to an embodiment of the disclosure;
fig. 3 is a schematic flow chart of a live interaction method according to an embodiment of the disclosure;
fig. 4 is a schematic flow chart of another live interaction method according to an embodiment of the disclosure;
fig. 5 is a schematic flow chart of a live broadcast method and a live broadcast interaction method according to an embodiment of the disclosure;
fig. 6 is a schematic structural diagram of a live broadcast device according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a live interaction device according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the present disclosure are for illustration only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a", "an" and "a plurality of" in this disclosure are illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
It will be appreciated that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner and in accordance with the relevant laws and regulations, of the type, scope of use and usage scenarios of the personal information involved in the present disclosure, and the user's authorization should be obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly indicate that the requested operation will require obtaining and using the user's personal information. The user can thus autonomously decide, based on the prompt information, whether to provide personal information to the software or hardware, such as an electronic device, application, server or storage medium, that performs the operations of the technical solution of the present disclosure.
As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user in the form of a pop-up window, in which the prompt information may be presented as text. In addition, the pop-up window may carry a selection control allowing the user to choose "agree" or "disagree" to providing personal information to the electronic device.
It will be appreciated that the above-described notification and user authorization process is merely illustrative and not limiting of the implementations of the present disclosure, and that other ways of satisfying relevant legal regulations may be applied to the implementations of the present disclosure.
It will be appreciated that the data involved in the present technical solution (including but not limited to the data itself and the acquisition or use of the data) should comply with the requirements of the applicable laws and regulations.
Fig. 1 is a flow chart of a live broadcast method provided by an embodiment of the present disclosure. The method is applicable to live broadcasting a picture in a virtual space and may be performed by a live broadcast device, which may be implemented in software and/or hardware and is generally integrated in an electronic device. In this embodiment, the electronic device includes, but is not limited to, a computer, a mobile phone, a tablet computer and the like.
Conventional live broadcasting is usually full-screen and covers a single scene: the audience can only passively watch the anchor's screen to learn about the content of the virtual space and the anchor's performance, and cannot select or control the view according to their own interests and needs.
On this basis, embodiments of the disclosure provide a live broadcast method that offers region-selection capability for live broadcasting in a virtual space. For example, the 3D open world of an application can be divided into regions in advance, and the anchor can freely select a region to live broadcast, so that players can still explore the application freely while watching the live broadcast, and the anchor is provided with a personalized live broadcast scene.
As shown in fig. 1, a live broadcast method provided by an embodiment of the present disclosure includes the following steps:
s110, displaying a first virtual space, wherein the first virtual space comprises a first virtual object corresponding to a first user.
In this embodiment, an application may provide a plurality of virtual spaces, where a virtual space may be understood as a virtual world or room. The scene and gameplay of each virtual space may be the same or different; for example, an application may provide a plurality of virtual spaces, and a user may choose to live broadcast a certain area of a virtual space while exploring the application. The first virtual space may refer to the virtual space in which the first virtual object is currently located, where the first virtual object is a virtual object of the first user, for example a virtual object in a game.
In this step, the first virtual space may be displayed, and its specific display style may be determined by the configuration of the application. The timing of displaying the first virtual space is not limited: for example, the first user may choose to enter a virtual space of the application, which displays the first virtual space, or the display may be triggered by another preset operation; this embodiment does not limit this.
S120, in response to a region selection operation for the first virtual space, determining a target region in the first virtual space corresponding to the region selection operation.
The region selection operation is used to select a target region of the first virtual space, and its specific form is not limited: it may be an operation of clicking a control, an operation of framing a certain region, or another form, and accordingly it may act on different positions of the first virtual space depending on its form.
Specifically, when the first user performs a region selection operation on the first virtual space, this embodiment may determine, in response to that operation, the target region in the first virtual space corresponding to it, which provides the basis for the subsequent live broadcast of the spatial picture. This embodiment does not limit the specific manner of determining the target region; for example, it may differ for different region selection operations.
In one embodiment, determining, in response to a region selection operation for the first virtual space, a target region in the first virtual space corresponding to the region selection operation includes:
in response to a first region selection operation, taking the region currently located within a selection frame in the first virtual space as the target region, where the selection frame corresponds to the first region selection operation; or
in response to a second region selection operation for a target virtual object in the first virtual space, taking the region where the target virtual object is located in the first virtual space as the target region; or
in response to a third region selection operation acting within a region list, taking the region corresponding to the third region selection operation as the target region, where the region list includes at least one candidate region.
The first region selection operation may be an operation of framing a region, for example performing region selection through a selection frame. The second region selection operation may be an operation of selecting a target virtual object, for example clicking the target virtual object, where the target virtual object may be any virtual object in the first virtual space; its type is not limited and may be, for example, the sea. The third region selection operation refers to an operation acting within a region list; the region list may list candidate regions in the first virtual space or in other virtual spaces and includes at least one candidate region. The third region selection operation may be an operation of clicking and confirming a candidate region in the region list. A candidate region may be a region pre-configured by the client or by relevant personnel for the user to choose when live broadcasting.
In one embodiment, the first user may trigger an operation of selecting a certain region by a frame, that is, perform a first region selection operation, and in this embodiment, the region currently located in the selection frame in the first virtual space may be used as the target region in response to the first region selection operation.
In one embodiment, the first user may perform a second region selection operation on the target virtual object in the first virtual space, for example, clicking a certain target virtual object, and in response to this second region selection operation, the embodiment may take, as the target region, the region where the target virtual object is located in the first virtual space.
In one embodiment, the first user may perform a third region selection operation in the region list of the first virtual space, for example, perform an operation of clicking on a candidate region, and in response to this third region selection operation, the embodiment may use a region corresponding to the third region selection operation as the target region.
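As a minimal illustration of the three selection paths above, the sketch below dispatches on which operation the first user performed. Every name in it (Region, VirtualSpace, resolve_target_region, and so on) is hypothetical; the disclosure describes the operations, not an API.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# All names below are hypothetical; the disclosure does not specify an API.

@dataclass
class Region:
    name: str
    bounds: Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

@dataclass
class VirtualObject:
    object_id: str
    region: Region  # region of the first virtual space the object currently occupies

@dataclass
class VirtualSpace:
    candidate_regions: List[Region]  # pre-configured regions shown in the region list

def resolve_target_region(space: VirtualSpace,
                          selection_frame: Optional[Tuple[float, float, float, float]] = None,
                          target_object: Optional[VirtualObject] = None,
                          list_index: Optional[int] = None) -> Region:
    """Map one of the three region-selection operations onto a target region."""
    if selection_frame is not None:
        # First operation: the area currently inside the drag-selection frame.
        return Region(name="frame-selection", bounds=selection_frame)
    if target_object is not None:
        # Second operation: the region where the clicked target virtual object is located.
        return target_object.region
    if list_index is not None:
        # Third operation: a candidate region chosen from the region list.
        return space.candidate_regions[list_index]
    raise ValueError("no region-selection operation was supplied")
```

A client would call resolve_target_region with exactly one of the three optional arguments filled in, depending on which operation was detected.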
S130, in response to a live broadcast operation for the target area, live broadcasting the spatial picture within the target area.
After the target area is determined through the above steps, the first user can perform a live broadcast operation on the target area, for example by clicking a live broadcast control. In response to the live broadcast operation for the target area, this embodiment can then live broadcast the spatial picture within the target area; the specific content of the live broadcast spatial picture is not described further here.
It can be understood that after live broadcasting is started, when the first virtual object is located in the target area, live broadcasting can be continuously performed on the space picture in the target area until a preset live broadcasting ending condition is met. Further, the target region may remain unchanged as the first virtual object moves in the first virtual space, exits the first virtual space, or exits the application. In other words, whether the first virtual object is located in the target area or not, and whether the first user interface displays the spatial image in the target area or not, the spatial image in the target area can be continuously live-broadcast until the preset live-broadcast ending condition is met.
The live broadcast method provided by the embodiments of the disclosure is applied to a first user side and includes: displaying a first virtual space, where the first virtual space includes a first virtual object corresponding to a first user; in response to an area selection operation for the first virtual space, determining a target area in the first virtual space corresponding to the area selection operation; and in response to a live broadcast operation for the target area, live broadcasting the spatial picture within the target area. With this method, determining a target area in the first virtual space enables the spatial picture within that area to be live broadcast, which enriches the presentation of spatial pictures.
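Assuming a renderer and a streaming client along the lines sketched below (render_region, LiveStream.push_frame and the other names are illustrative, not from the disclosure), the live broadcast of step S130 can be pictured as a loop that keeps capturing the target area until the preset live broadcast ending condition is met, regardless of where the first virtual object has moved.

```python
import time

class LiveStream:
    """Stand-in for a streaming client; push_frame is an assumed, simplified API."""
    def push_frame(self, frame) -> None:
        pass  # in a real client this would encode the frame and send it to viewers

def render_region(space, region, view_angle):
    """Hypothetical renderer: produce one frame showing only `region` from `view_angle`."""
    return {"space": space, "region": region, "view_angle": view_angle, "ts": time.time()}

def broadcast_target_area(space, region, view_angle, stream, should_end, fps=30):
    # Keep live broadcasting the spatial picture inside the target area until the
    # preset live broadcast ending condition (should_end) is met; the loop does not
    # depend on the first virtual object's position, matching the description above.
    while not should_end():
        stream.push_frame(render_region(space, region, view_angle))
        time.sleep(1.0 / fps)
```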
In one embodiment, after the determining the target area corresponding to the area selection operation in the first virtual space, the method further includes:
adjusting a live view angle of the target area in response to a view angle adjustment operation for the target area;
live broadcasting the spatial picture in the target area includes:
live broadcasting the spatial picture of the target area using the adjusted live view angle; or
updating the live view angle of the spatial picture.
The view angle adjustment operation is used to adjust the live view angle of the target area, and its specific form is not limited: it may be, for example, an operation of clicking to confirm a certain view angle, or an operation in which the first user manually adjusts the view angle. The timing of the view angle adjustment operation is likewise not limited.
In one embodiment, after the target area is determined, the live view angle of the target area may be further determined so that the spatial picture can be live broadcast more accurately. For example, after the target area is determined, the first user may be asked, in a conversational manner, which view of the target area they want to broadcast, and the live view angle may be adjusted according to the first user's answer. Alternatively, when the first user clicks to confirm a certain view angle, the first user can be considered to have performed a view angle adjustment operation for the target area, and the live view angle of the target area is adjusted accordingly. In addition, the first user may also perform the view angle adjustment operation by manually adjusting the live view angle.
Further, in this embodiment, the spatial picture may be live broadcast according to when the adjustment is made. For example, if the view angle adjustment operation is performed after the target area is determined but before the live broadcast starts, the spatial picture in the target area may be live broadcast using the adjusted live view angle; if the view angle adjustment operation is performed after the live broadcast has started, this embodiment may update the live view angle of the spatial picture accordingly.
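One simple way to model the two timings of the view angle adjustment (before the broadcast starts versus while it is running) is a mutable camera state that the broadcast loop reads on every frame; the class below is an assumption for illustration, not the disclosed implementation.

```python
class LiveCamera:
    """Holds the current live view angle; a broadcast loop samples it each frame."""

    def __init__(self, view_angle: float = 0.0):
        self.view_angle = view_angle

    def adjust(self, new_angle: float) -> None:
        # Called whenever a view angle adjustment operation is received.
        # If this happens before the broadcast starts, the first frame already uses
        # the adjusted angle; if it happens during the broadcast, subsequent frames
        # simply pick up the updated value.
        self.view_angle = new_angle
```

In terms of the loop sketched earlier, the renderer would read camera.view_angle on each frame instead of a fixed view_angle argument.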
In one embodiment, after the determining the target area corresponding to the area selection operation in the first virtual space, the method further includes:
responding to an event selection operation, and taking a target event corresponding to the event selection operation in the target area as a live event;
the live broadcasting of the spatial image in the target area comprises the following steps:
and live broadcasting is carried out on the event picture of the live broadcasting event in the target area.
A target event can be understood as an event occurring in the target area; the number of target events is not limited and may be one or more. For example, if food is being served in one part of the target area, a movie is playing in another part and a basketball game is underway in a third, the target event may be the event of food being served.
In one embodiment, the entire spatial picture within the target region may be live broadcast directly.
In one embodiment, the event picture of one or more target events in the target area may be selected for live broadcast. For example, after the target area is determined, the first user may perform an event selection operation for a target event in the target area; in response to this event selection operation, this embodiment may take the target event corresponding to the operation in the target area as the live event, and then live broadcast the event picture of the live event in the target area.
In one embodiment, after the live broadcasting of the spatial frame in the target area, the method further includes:
ending the live broadcast of the spatial picture when a preset live broadcast ending condition is met, where the preset live broadcast ending condition includes: receiving an end-live-broadcast operation for the spatial picture, and/or the live broadcast event in the target area ending.
The preset live broadcast ending condition may refer to a preset ending condition for ending live broadcast of the spatial frame, for example, the preset live broadcast ending condition may include receiving an ending live broadcast operation for the spatial frame, and/or ending a live broadcast event in the target area.
In one embodiment, after live broadcasting of the spatial picture in the target area starts, it can be checked whether the current situation meets the preset live broadcast ending condition, and when it does, the live broadcast of the spatial picture is ended.
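The preset live broadcast ending condition reads as a disjunction of the two triggers named above; a small sketch with hypothetical flags:

```python
from dataclasses import dataclass

@dataclass
class LiveSession:
    end_requested: bool = False        # the first user performed an end-live-broadcast operation
    live_event_finished: bool = False  # the selected live event in the target area has ended

def preset_ending_condition_met(session: LiveSession) -> bool:
    # Either trigger alone is sufficient ("and/or" in the description above).
    return session.end_requested or session.live_event_finished
```

A check like this could serve as the should_end callback in the broadcast loop sketched earlier.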
In one embodiment, after the determining the target area corresponding to the area selection operation in the first virtual space, the method further includes:
and generating media content based on the space picture in the target area, and publishing the media content.
In one embodiment, corresponding media content may be generated based on the spatial picture in the target area and then published. For example, after the spatial picture in the target area has been live broadcast, the corresponding media content may be generated and published; the media content may equally be generated and published before or during the live broadcast of the spatial picture.
Fig. 2 is a flow chart of another live broadcast method provided by an embodiment of the present disclosure; this embodiment is an optimization based on the foregoing embodiments. In this embodiment, the processing performed before live broadcasting the spatial picture in the target area in response to the live broadcast operation for the target area is further specified as: in response to a live broadcast mode setting operation, determining a target live broadcast mode to be used for the live broadcast.
Meanwhile, live broadcasting the spatial picture in the target area is further specified as: live broadcasting the spatial picture in the target area in the target live broadcast mode, where the target live broadcast mode includes a device live broadcast mode and/or an information live broadcast mode; in the device live broadcast mode the spatial picture is displayed through a target virtual display device, and in the information live broadcast mode the spatial picture is displayed in the form of live interaction information published by the first user.
For details not yet described in detail in this embodiment, reference is made to the above-mentioned embodiments.
As shown in fig. 2, the method includes:
s210, displaying a first virtual space, wherein the first virtual space comprises a first virtual object corresponding to a first user.
S220, responding to the region selection operation for the first virtual space, and determining a target region corresponding to the region selection operation in the first virtual space.
S230, responding to the live broadcast mode setting operation, and determining a target live broadcast mode adopted by the live broadcast.
S240, in response to the live broadcast operation for the target area, live broadcasting the spatial picture in the target area in the target live broadcast mode, where the target live broadcast mode includes a device live broadcast mode and/or an information live broadcast mode; in the device live broadcast mode the spatial picture is displayed through a target virtual display device, and in the information live broadcast mode the spatial picture is displayed in the form of live interaction information published by the first user.
The live broadcast mode setting operation is used to set the target live broadcast mode to be used for the live broadcast. The device live broadcast mode means that the spatial picture is displayed through a target virtual display device, which may be a virtual display device set in a virtual space, such as a virtual television or virtual monitor; its position is not limited, and it may be located in the first virtual space and/or in a virtual space other than the first virtual space. The information live broadcast mode means that the spatial picture is displayed in the form of live interaction information published by the first user. Live interaction information may be information exchanged between users within the application, such as a feed post: in this embodiment, the user can publish it at a preset position in the application, other users can browse it, and by clicking the live interaction information they can enter the live broadcast of the spatial picture.
Specifically, the first user can determine the target live broadcast mode by performing the live broadcast mode setting operation, and the determined target live broadcast mode is then used to live broadcast the spatial picture in the target area. The specific live broadcast process is not limited in this embodiment; for example, different target live broadcast modes may correspond to different live broadcast processes.
In one embodiment, the live broadcasting the spatial frames in the target area by adopting the target live broadcasting mode includes at least one of the following steps:
controlling a target virtual display device to display a space picture in the target area;
generating live broadcast interaction information of the first user, and issuing the live broadcast interaction information, wherein the live broadcast interaction information displays a space picture in the target area.
In one embodiment, the process of live broadcasting in the target live broadcasting mode may be, for example: and controlling the target virtual display equipment to display the space picture in the target area so as to realize live broadcasting of the space picture.
In one embodiment, the process of live broadcasting in the target live broadcasting mode may be, for example: generating live broadcast interaction information of the first user and distributing the live broadcast interaction information.
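The two target live broadcast modes, which may also be combined, can be sketched as follows; VirtualDisplayDevice.show and publish_live_interaction_info are assumed names standing in for whatever the client actually exposes.

```python
from enum import Flag, auto

class LiveMode(Flag):
    DEVICE = auto()  # device live broadcast mode: show the picture on a target virtual display device
    INFO = auto()    # information live broadcast mode: embed the picture in published live interaction information

class VirtualDisplayDevice:
    def show(self, stream) -> None:
        pass  # assumed API: render the live stream on the in-world display device

def publish_live_interaction_info(first_user_id: str, stream) -> None:
    pass  # assumed API: publish interaction information that displays the live spatial picture

def start_live_broadcast(mode: LiveMode, stream, device: VirtualDisplayDevice, first_user_id: str) -> None:
    # The modes are not mutually exclusive ("device live broadcast mode and/or information live broadcast mode").
    if LiveMode.DEVICE in mode:
        device.show(stream)
    if LiveMode.INFO in mode:
        publish_live_interaction_info(first_user_id, stream)
```

Calling start_live_broadcast(LiveMode.DEVICE | LiveMode.INFO, ...) would then cover the case where both modes are set.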
The live broadcast method provided by the embodiments of the disclosure is applied to a first user side and includes: displaying a first virtual space, where the first virtual space includes a first virtual object corresponding to a first user; in response to an area selection operation for the first virtual space, determining a target area in the first virtual space corresponding to the area selection operation; in response to a live broadcast mode setting operation, determining a target live broadcast mode to be used for the live broadcast; and in response to a live broadcast operation for the target area, live broadcasting the spatial picture in the target area in the target live broadcast mode, where the target live broadcast mode includes a device live broadcast mode and/or an information live broadcast mode; in the device live broadcast mode the spatial picture is displayed through a target virtual display device, and in the information live broadcast mode the spatial picture is displayed in the form of live interaction information published by the first user. With this method, determining the target live broadcast mode to be used provides a practical live broadcast mode for the subsequent live broadcast.
In one embodiment, the target live broadcasting mode includes a device live broadcasting mode, and before the live broadcasting the spatial frames in the target area by adopting the target live broadcasting mode, the method further includes:
taking at least part of candidate virtual display devices as target virtual display devices; or alternatively
Displaying a list of devices; in response to a device selection operation acting within the device list, a candidate virtual display device corresponding to the device selection operation is taken as a target virtual display device, and the device list is a list of candidate virtual display devices.
The device list may be a list of one or more candidate virtual display devices, which may be located in the same or different virtual spaces, for example in the first virtual space or in virtual spaces other than the first virtual space.
In one embodiment, the target virtual display device needs to be determined before the spatial picture in the target area is live broadcast in the target live broadcast mode. For example, at least some of the candidate virtual display devices may be used directly as target virtual display devices, or the target virtual display device may be chosen by the first user from the device list: the first user can trigger display of the device list through a preset operation and perform a device selection operation within the device list, and in response to that operation this embodiment takes the candidate virtual display device corresponding to the device selection operation as the target virtual display device.
In one embodiment, the candidate virtual display device corresponding to the device selection operation includes a candidate virtual display device located in a second virtual space, and before the candidate virtual display device corresponding to the device selection operation is taken as the target virtual display device, the method further includes:
sending a live broadcast request to the second virtual space, where the live broadcast request requests that the live broadcast picture be displayed through the candidate virtual display device in the second virtual space;
and taking the candidate virtual display device corresponding to the device selection operation as the target virtual display device includes:
in a case where positive feedback information for the live broadcast request is received, taking the candidate virtual display device corresponding to the positive feedback information as the target virtual display device.
The terms first virtual space and second virtual space are used only to distinguish different virtual spaces. The live broadcast request is used to request that the live broadcast picture be displayed through the candidate virtual display device in the second virtual space; the positive feedback information can be understood as information fed back from the second virtual space indicating consent to display the live broadcast picture through that candidate virtual display device.
In one embodiment, when the candidate virtual display device corresponding to the device selection operation is located in the second virtual space, that is, when the virtual display device the first user wants to select is in the second virtual space, the selection of the target virtual display device can only be completed after positive feedback from the second virtual space is obtained. For example, this embodiment may send a live broadcast request to the second virtual space; when positive feedback information for the request is returned, indicating that the second virtual space agrees to display the live broadcast picture through the candidate virtual display device, the candidate virtual display device corresponding to the positive feedback information can be taken as the target virtual display device.
The positive feedback information may be generated in response to a positive feedback operation by a user associated with the second virtual space. The user associated with the second virtual space may be, for example, a user with authority to process the received live broadcast request, such as the creator or an administrator of the second virtual space, or a user whose virtual object's distance to the virtual display device referenced by the live broadcast request satisfies a preset condition. The preset condition may be set as needed; for example, it may be that the virtual object is located in the second virtual space and its distance to the virtual display device referenced by the live broadcast request is within a preset distance range, or that it is the nearest such virtual object, and so on.
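The request-and-consent step for a display device in a second virtual space can be summarised by the sketch below; LiveRequest, Feedback and select_cross_space_device are hypothetical names used only to make the flow concrete.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LiveRequest:
    from_user: str   # the first user requesting to show the live picture
    device_id: str   # candidate virtual display device located in the second virtual space

@dataclass
class Feedback:
    device_id: str
    positive: bool   # True if an authorised user of the second virtual space agreed

def select_cross_space_device(request: LiveRequest, feedback: Optional[Feedback]) -> Optional[str]:
    """Return the target device id only if positive feedback for this request arrived."""
    if feedback is not None and feedback.positive and feedback.device_id == request.device_id:
        return feedback.device_id
    return None  # no target device yet: the second virtual space has not agreed
```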
Fig. 3 is a flow chart of a live interaction method provided by an embodiment of the present disclosure. The method is applicable to interaction during a live broadcast and may be performed by a live interaction device, which may be implemented in software and/or hardware and is generally integrated in an electronic device. In this embodiment, the electronic device includes, but is not limited to, a computer, a mobile phone, a tablet computer and the like.
Conventional live interaction mainly relies on on-screen comments (bullet screens), likes, gifts and the like to communicate with the anchor and other viewers, but these are hard to tie directly to the picture content and offer limited interactivity. On this basis, embodiments of the disclosure provide a live interaction method that offers prop display and try-on capability: props and skins appearing in the live broadcast are extracted through background configuration and video recognition and displayed below the live video. While watching the live broadcast, a viewer can click a skin shown below to try it on, giving the audience a more interactive viewing experience.
As shown in fig. 3, a live interaction method provided by an embodiment of the present disclosure includes the following steps:
s310, displaying a live broadcast picture of a first user, wherein the live broadcast picture is a space picture in a target area, and the target area is positioned in a first virtual space.
S320, in response to a position switching operation, controlling a second virtual object corresponding to a second user to enter the first virtual space.
The second virtual object may be a virtual object of a second user.
The live interaction method provided in this embodiment may be applied to a second user side, where the second user may be a user other than the first user, such as another player in the application. For example, while the first user live broadcasts the spatial picture of the target area in the first virtual space, this embodiment may display the first user's live broadcast picture in different ways. When the second user becomes interested or wants to visit the live broadcast scene, a position switching operation can be performed, for example by clicking to enter the first virtual space; in response to the position switching operation, this embodiment controls the second virtual object corresponding to the second user to enter the first virtual space. The specific way of entering the first virtual space is not limited; for example, the second virtual object may be controlled to enter any area of the first virtual space.
In one embodiment, the controlling the second virtual object corresponding to the second user to enter the first virtual space includes:
Controlling a second virtual object corresponding to a second user to enter a preset area of the first virtual space; or alternatively
And controlling a second virtual object corresponding to a second user to enter the target area of the first virtual space.
In an embodiment, the second virtual object may be controlled to enter a preset area of the first virtual space, for example, the preset area may be a fixed area preset by a related person, or alternatively, the second virtual object may be controlled to enter a target area of the first virtual space.
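A minimal sketch of the position switching step, with both destination variants described above; all types and the enter method are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class FirstVirtualSpace:
    space_id: str
    preset_area: str = "preset-area"   # fixed area configured in advance
    target_area: str = "target-area"   # area currently being live broadcast

@dataclass
class SecondVirtualObject:
    location: Tuple[str, str] = ("", "")

    def enter(self, space: FirstVirtualSpace, area: str) -> None:
        # Stand-in for the engine call that moves the object into the space.
        self.location = (space.space_id, area)

def handle_position_switch(viewer_object: SecondVirtualObject,
                           space: FirstVirtualSpace,
                           into_target_area: bool = False) -> None:
    """Move the second virtual object into the first virtual space."""
    destination = space.target_area if into_target_area else space.preset_area
    viewer_object.enter(space, destination)
```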
The live interaction method provided by the embodiments of the disclosure is applied to a second user side and includes: displaying a live broadcast picture of a first user, where the live broadcast picture is the spatial picture within a target area and the target area is located in a first virtual space; and in response to a position switching operation, controlling a second virtual object corresponding to the second user to enter the first virtual space. With this method, the second virtual object can be quickly brought into the first virtual space in response to the position switching operation, which in turn provides a basis for the second user to watch the live spatial picture or join the live virtual scene.
In one embodiment, the displaying the live view of the first user includes:
Displaying a live broadcast picture of the first user through target virtual display equipment in the visual field range of the second virtual object; or alternatively
And displaying live broadcast interaction information issued by the first user, wherein live broadcast pictures of the first user are displayed in the live broadcast interaction information.
In one embodiment, the live view of the first user may be displayed by, for example, displaying the live view of the first user through the target virtual display device in the field of view of the second virtual object, that is, the second user may view the live view of the first user through the target virtual display device in the field of view of the second virtual object.
In one embodiment, the live broadcast picture of the first user may be displayed, for example, by displaying live broadcast interaction information issued by the first user, where the live broadcast picture of the first user is displayed, that is, the second user may view the live broadcast picture of the first user through the live broadcast interaction information issued by the first user.
Fig. 4 is a flow chart of another live interaction method provided by an embodiment of the present disclosure; this embodiment is an optimization based on the foregoing embodiments. In this embodiment, the processing performed after the live broadcast picture of the first user is displayed is further specified as: in response to an element display operation for the live broadcast picture, displaying an element list in the live broadcast picture, where the element list is a list of object elements to be tried on; and in response to a trial operation for a target object element in the element list, updating object information of the second virtual object according to the target object element, where the object information includes appearance information and/or clothing information.
For details not yet described in detail in this embodiment, reference is made to the above-mentioned embodiments.
As shown in fig. 4, the method includes:
s410, displaying a live broadcast picture of the first user, wherein the live broadcast picture is a space picture in a target area, and the target area is positioned in a first virtual space.
S420, responding to the position switching operation, and controlling a second virtual object corresponding to a second user to enter the first virtual space.
S430, in response to an element display operation for the live broadcast picture, displaying an element list in the live broadcast picture, where the element list is a list of object elements to be tried on.
The list of elements may be a list of one or more object elements to be tried, and the type of the object element is not limited, for example, the object element may be skin, prop or other elements that can be tried. The element presentation operation may be used to present an element list.
Specifically, after the second virtual object enters the first virtual space, the second user may try out the object element appearing in the live broadcast picture, for example, an element showing operation may be performed on the live broadcast picture, so that the embodiment may respond to the element showing operation to show an element list in the live broadcast picture, so that the second user may select a target object element to be tried out.
S440, in response to a trial operation for a target object element in the element list, updating object information of the second virtual object according to the target object element, wherein the object information comprises appearance information and/or clothing information.
The target object element may refer to a certain object element in the element list; the object information may include shape information and/or apparel information.
After the element list in the live broadcast picture is displayed, this embodiment may, in response to a trial operation for a target object element in the element list, update the object information of the second virtual object according to that target object element. Which object information is updated depends on the selected target object element: for example, when the target object element is a skin, the appearance information of the second virtual object may be updated according to the selected skin; when the target object element is a piece of clothing, the clothing information of the second virtual object may be updated according to the selected clothing; and so on.
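The try-on step boils down to updating the appearance or clothing part of the second virtual object's information according to the selected element; a hypothetical sketch (the element kinds and field names are assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class ObjectElement:
    element_id: str
    kind: str  # e.g. "skin" or "clothing"; other try-on element types are possible

@dataclass
class ObjectInfo:
    appearance: dict = field(default_factory=dict)  # appearance information
    clothing: dict = field(default_factory=dict)    # clothing information

def try_on(info: ObjectInfo, element: ObjectElement) -> None:
    """Update the second virtual object's information according to the target object element."""
    if element.kind == "skin":
        info.appearance["skin"] = element.element_id
    elif element.kind == "clothing":
        info.clothing["outfit"] = element.element_id
```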
The live broadcast interaction method is applied to a second user side, and is used for displaying live broadcast pictures of a first user, wherein the live broadcast pictures are space pictures in a target area, and the target area is located in a first virtual space; responding to the position switching operation, and controlling a second virtual object corresponding to a second user to enter the first virtual space; responding to an element display operation aiming at the live broadcast picture, displaying an element list in the live broadcast picture, wherein the element list is a list of object elements to be tried out; and in response to a trial operation for a target object element in the element list, updating object information of the second virtual object according to the target object element, wherein the object information comprises appearance information and/or clothing information. By using the method, the object information of the second virtual object can be updated according to the target object element by responding to the trial operation for the target object element in the element list, so that the interactivity between the user and the picture content is increased.
Fig. 5 is a flow chart of a live broadcast method and a live interaction method provided by embodiments of the present disclosure. As shown in fig. 5, a player may enter a world/room (i.e., a first virtual space) and select part of the room, such as the sea or the area near a bar, from a region list pre-divided by the background (i.e., in response to a region selection operation for the first virtual space, determining the target region in the first virtual space corresponding to the region selection operation). On one hand, the player can live broadcast the spatial picture through a live broadcast operation; for example, the player may be asked what they want to show, such as the sea or a party, and based on the player's answer the live view angle and camera can be adjusted (i.e., the live view angle of the target area is adjusted in response to the view angle adjustment operation for the target area). The live picture can then be displayed in the upper part of the screen, and the skins or props appearing in the live broadcast can be displayed below it. The player is also supported in manually adjusting the live view angle and camera. While watching the live broadcast, a viewer can click a skin or prop displayed below it to try it on (i.e., in response to a trial operation for a target object element in the element list, the object information of the second virtual object is updated according to the target object element). If the viewer has an application account, the item shown below can be tried on directly; if the viewer does not have an application account, an entry can be provided for the viewer to register and log in to an account, or the viewer can try the item on first and be shown the registration entry when saving the try-on, so that the viewer registers and logs in to an account.
On the other hand, the pictures over a period of time can be compiled into content for the user to publish (i.e., media content is generated based on the spatial picture in the target area and published).
As described above, embodiments of the disclosure allow the virtual world to be divided into regions, so that the anchor can freely select a region to live broadcast and flexibly adjust the live view angle; players can thus explore the virtual space freely while watching the live broadcast, which increases the richness and appeal of the virtual space. In addition, while watching, viewers can click props shown below the live broadcast to try them on, experiencing the props and skins of the virtual space intuitively, which improves the interactivity between the audience and the content of the virtual space.
Fig. 6 is a schematic structural diagram of a live broadcast device according to an embodiment of the present disclosure, where the device may be suitable for live broadcasting of pictures in a virtual space, and the device may be implemented by software and/or hardware and is generally integrated on an electronic device.
As shown in fig. 6, the apparatus includes:
the display module 510 is configured to display a first virtual space, where the first virtual space includes a first virtual object corresponding to a first user;
A determining module 520, configured to determine, in response to a region selection operation for the first virtual space, a target region in the first virtual space corresponding to the region selection operation;
and the live broadcasting module 530 is configured to live-broadcast the spatial frames in the target area in response to a live broadcasting operation for the target area.
The live broadcast device provided by the embodiment of the disclosure is configured at a first user side, and a first virtual space is displayed through a display module, wherein the first virtual space comprises a first virtual object corresponding to a first user; determining, by a determination module, a target region in the first virtual space corresponding to a region selection operation for the first virtual space in response to the region selection operation; and responding to the live broadcast operation aiming at the target area through a live broadcast module, and carrying out live broadcast on the space images in the target area. By means of the device, the target area in the first virtual space is determined, live broadcasting of the space picture in the target area can be achieved, and display of the space picture is enriched.
Optionally, the determining module 520 includes:
responding to a first region selection operation, taking a region currently positioned in a selection frame in the first virtual space as a target region, wherein the selection frame corresponds to the first region selection operation; or alternatively
Responding to a second region selection operation for a target virtual object in the first virtual space, and taking a region where the target virtual object is located in the first virtual space as a target region; or alternatively
In response to a third region selection operation acting within a region list, a region corresponding to the third region selection operation is taken as a target region, wherein the region list comprises at least one candidate region.
Optionally, the live broadcast device provided in the embodiment of the present disclosure further includes:
an adjustment module, configured to, after the determining of the target area in the first virtual space corresponding to the area selection operation, adjust a live view angle of the target area in response to a view angle adjustment operation for the target area;
the live broadcast module is specifically used for:
live broadcasting is carried out on the space picture of the target area by adopting the adjusted live broadcasting visual angle; or alternatively
And updating the live view angle of the space picture.
Optionally, the live broadcast device provided in the embodiment of the present disclosure further includes:
the setting module is used for responding to the live broadcast mode setting operation before responding to the live broadcast operation aiming at the target area and carrying out live broadcast on the space picture in the target area, so as to determine a target live broadcast mode adopted by the live broadcast;
The live broadcast module includes:
the live broadcasting unit is configured to live-broadcast the spatial picture in the target area in the target live broadcast mode, where the target live broadcast mode includes a device live broadcast mode and/or an information live broadcast mode; in the device live broadcast mode the spatial picture is displayed through a target virtual display device, and in the information live broadcast mode the spatial picture is displayed in the form of live interaction information published by the first user.
Optionally, the live broadcasting unit is specifically configured to perform at least one of the following (a sketch of the two display paths follows the list):
controlling a target virtual display device to display the spatial picture in the target area;
generating live interaction information of the first user and publishing the live interaction information, where the live interaction information displays the spatial picture in the target area.
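A minimal sketch of how the two display paths can coexist is given below. The interfaces (VirtualDisplayDevice, InteractionFeed, broadcast) are hypothetical illustrations, not the implementation described by the disclosure.

from enum import Flag, auto
from typing import List

class LiveMode(Flag):
    DEVICE = auto()       # device live broadcast mode
    INFORMATION = auto()  # information live broadcast mode

class VirtualDisplayDevice:
    def __init__(self, name: str) -> None:
        self.name = name

    def show(self, picture: str) -> None:
        print(f"[{self.name}] displaying spatial picture: {picture}")

class InteractionFeed:
    # hypothetical feed of live interaction information published by the first user
    def publish(self, user: str, picture: str) -> None:
        print(f"{user} published live interaction information carrying picture {picture}")

def broadcast(picture: str, mode: LiveMode, devices: List[VirtualDisplayDevice],
              feed: InteractionFeed, first_user: str) -> None:
    if LiveMode.DEVICE in mode:
        for device in devices:             # show the picture on every target virtual display device
            device.show(picture)
    if LiveMode.INFORMATION in mode:
        feed.publish(first_user, picture)  # carry the picture inside published interaction information

broadcast("frame_0001", LiveMode.DEVICE | LiveMode.INFORMATION,
          [VirtualDisplayDevice("hall_screen")], InteractionFeed(), first_user="user_a")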
Optionally, the target virtual display device is located within the first virtual space and/or a virtual space other than the first virtual space.
Optionally, the target live broadcast mode includes a device live broadcast mode, and the live broadcast module further includes:
a first determining unit, configured to take at least part of the candidate virtual display devices as target virtual display devices before the spatial picture in the target area is live-broadcast in the target live broadcast mode; or
a second determining unit, configured to display a device list before the spatial picture in the target area is live-broadcast in the target live broadcast mode, and, in response to a device selection operation acting within the device list, take the candidate virtual display device corresponding to the device selection operation as the target virtual display device, where the device list is a list of candidate virtual display devices.
Optionally, the candidate virtual display device corresponding to the device selection operation includes a candidate virtual display device located in a second virtual space, and the second determining unit is further specifically configured to:
send, before the candidate virtual display device corresponding to the device selection operation is taken as the target virtual display device, a live broadcast request to the second virtual space, where the live broadcast request is used for requesting to display a live picture through the candidate virtual display device in the second virtual space;
the taking of the candidate virtual display device corresponding to the device selection operation as the target virtual display device includes:
in the case that forward feedback information for the live broadcast request is received, taking the candidate virtual display device corresponding to the forward feedback information as the target virtual display device (this request/feedback exchange is sketched below).
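The request/feedback exchange with the second virtual space can be pictured as follows. This is a minimal sketch assuming a simple in-process exchange; the names (LiveRequest, Feedback, SecondSpaceGateway, select_target_device) are invented for illustration.

from dataclasses import dataclass
from typing import Optional, Set

@dataclass
class LiveRequest:
    requester: str    # the first user asking to use a device in the second space
    device_id: str    # candidate virtual display device located in the second virtual space

@dataclass
class Feedback:
    device_id: str
    positive: bool    # forward (positive) feedback means the request was accepted

class SecondSpaceGateway:
    # hypothetical stand-in for the second virtual space's decision logic
    def __init__(self, allowed_devices: Set[str]) -> None:
        self.allowed_devices = allowed_devices

    def handle(self, request: LiveRequest) -> Feedback:
        return Feedback(request.device_id, positive=request.device_id in self.allowed_devices)

def select_target_device(request: LiveRequest, gateway: SecondSpaceGateway) -> Optional[str]:
    feedback = gateway.handle(request)  # send the live broadcast request to the second space
    if feedback.positive:
        return feedback.device_id       # only forward feedback makes it the target device
    return None                         # otherwise no device in that space is selected

gateway = SecondSpaceGateway(allowed_devices={"plaza_screen"})
print(select_target_device(LiveRequest("user_a", "plaza_screen"), gateway))    # plaza_screen
print(select_target_device(LiveRequest("user_a", "private_screen"), gateway))  # None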
Optionally, the live broadcast device provided in the embodiment of the present disclosure further includes:
the selection module is configured to, after the target area corresponding to the region selection operation is determined in the first virtual space, take, in response to an event selection operation, a target event in the target area corresponding to the event selection operation as a live event;
the live broadcast module is specifically used for:
and live broadcasting is carried out on the event picture of the live broadcasting event in the target area.
Optionally, the live broadcast device provided in the embodiment of the present disclosure further includes:
the ending module is configured to, after the live broadcasting of the spatial picture in the target area, end the live broadcast of the spatial picture when a preset live broadcast ending condition is met, where the preset live broadcast ending condition includes: an end-live operation for the spatial picture being received, and/or the live event in the target area having ended (a minimal form of this check is sketched below).
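The preset ending condition is a simple disjunction, as the hypothetical check below illustrates (the function and flag names are assumptions):

def should_end_live(end_operation_received: bool, live_event_finished: bool) -> bool:
    # end the broadcast when an end-live operation for the spatial picture is received
    # and/or the live event in the target area has ended
    return end_operation_received or live_event_finished

# the event finishing is enough to end the broadcast even without an explicit operation
print(should_end_live(end_operation_received=False, live_event_finished=True))  # True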
Optionally, the live broadcast device provided in the embodiment of the present disclosure further includes:
a publishing module, configured to, after the target area corresponding to the region selection operation is determined in the first virtual space, generate media content based on the spatial picture in the target area and publish the media content (a rough sketch follows).
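As a rough sketch, generating and publishing media content from the spatial picture might look like the following; the MediaContent container and the two functions are hypothetical and only illustrate the idea of packaging captured pictures and publishing them.

from dataclasses import dataclass
from typing import List

@dataclass
class MediaContent:
    # hypothetical media item assembled from spatial pictures of the target area
    title: str
    frames: List[str]

def generate_media_content(frames: List[str], title: str) -> MediaContent:
    return MediaContent(title=title, frames=list(frames))

def publish_media(content: MediaContent) -> None:
    print(f"published '{content.title}' with {len(content.frames)} frames")

publish_media(generate_media_content(["frame_0001", "frame_0002"], title="target area highlights"))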
The live broadcast device can execute the live broadcast method provided by any embodiment of the disclosure, and has the corresponding functional modules and beneficial effects of the execution method.
Fig. 7 is a schematic structural diagram of a live interaction device provided in an embodiment of the present disclosure, where the device may be suitable for interaction in live broadcast, and the device may be implemented by software and/or hardware and is generally integrated on an electronic device.
As shown in fig. 7, the apparatus includes:
the display module 610 is configured to display a live broadcast picture of a first user, where the live broadcast picture is a spatial picture in a target area, and the target area is located in a first virtual space;
and the control module 620 is configured to control a second virtual object corresponding to a second user to enter the first virtual space in response to the position switching operation.
The live interaction device is configured on a second user side. A live picture of a first user is displayed through the display module, where the live picture is the spatial picture in a target area and the target area is located in a first virtual space; and the control module controls, in response to a position switching operation, a second virtual object corresponding to a second user to enter the first virtual space. With this device, the second virtual object can be controlled to enter the first virtual space in response to the position switching operation, thereby further providing a basis for the second user to watch the spatial picture.
Optionally, the control module is specifically configured to:
controlling a second virtual object corresponding to a second user to enter a preset area of the first virtual space; or
controlling a second virtual object corresponding to a second user to enter the target area of the first virtual space.
Optionally, the display module is specifically configured to:
displaying the live picture of the first user through a target virtual display device within the visual field of the second virtual object; or
displaying live interaction information published by the first user, where the live picture of the first user is displayed in the live interaction information.
Optionally, the live interaction device provided in the embodiment of the present disclosure further includes:
a list module, configured to, after the live picture of the first user is displayed, display, in response to an element display operation for the live picture, an element list in the live picture, where the element list is a list of object elements available for trial;
a trial module, configured to, after the live picture of the first user is displayed, update, in response to a trial operation for a target object element in the element list, object information of the second virtual object according to the target object element, where the object information includes appearance information and/or clothing information (a sketch of this trial flow follows).
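On the second user side, the trial flow reduces to updating the second virtual object's appearance and/or clothing information from the chosen element, roughly as sketched below (all names are hypothetical):

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ObjectElement:
    # hypothetical object element offered for trial in the element list
    kind: str   # "appearance" or "clothing"
    name: str

@dataclass
class VirtualObject:
    appearance: Dict[str, str] = field(default_factory=dict)
    clothing: Dict[str, str] = field(default_factory=dict)

    def try_on(self, element: ObjectElement) -> None:
        # update the object information of the second virtual object from the target element
        if element.kind == "appearance":
            self.appearance[element.name] = "applied"
        elif element.kind == "clothing":
            self.clothing[element.name] = "applied"

element_list = [ObjectElement("clothing", "red_jacket"), ObjectElement("appearance", "short_hair")]
second_object = VirtualObject()
second_object.try_on(element_list[0])  # trial operation for a target element in the element list
print(second_object)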
The live broadcast interaction device can execute the live broadcast interaction method provided by any embodiment of the disclosure, and has the corresponding functional modules and beneficial effects of the execution method.
Referring now to fig. 8, a schematic diagram of an electronic device 400 suitable for use in implementing embodiments of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 8 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 8, the electronic device 400 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 401, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage means 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the electronic device 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other by a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
In general, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, magnetic tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the electronic device 400 to communicate with other devices wirelessly or by wire to exchange data. While fig. 8 shows an electronic device 400 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via communications device 409, or from storage 408, or from ROM 402. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 401.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: displaying a first virtual space, wherein the first virtual space comprises a first virtual object corresponding to a first user; determining a target area corresponding to the area selection operation in the first virtual space in response to the area selection operation for the first virtual space; and responding to the live broadcast operation aiming at the target area, and carrying out live broadcast on the space images in the target area.
The computer readable medium carries one or more programs which, when executed by the electronic device, further cause the electronic device to:
displaying a live broadcast picture of a first user, wherein the live broadcast picture is a space picture in a target area, and the target area is positioned in a first virtual space;
and responding to the position switching operation, and controlling a second virtual object corresponding to the second user to enter the first virtual space.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, or combinations thereof, including object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by software or by hardware. The name of a unit does not, in some cases, constitute a limitation of the unit itself.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, example 1 provides a live broadcast method, applied to a first user side, including:
displaying a first virtual space, wherein the first virtual space comprises a first virtual object corresponding to a first user;
determining a target area corresponding to the area selection operation in the first virtual space in response to the area selection operation for the first virtual space;
and in response to a live broadcast operation for the target area, live-broadcasting the spatial picture in the target area (a sketch of this three-step flow is given below).
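Tying the three steps of example 1 together, a first-user-side client could be organized roughly as in the sketch below. The LiveClient class and its method names are assumptions made only for illustration.

from typing import Optional

class LiveClient:
    # hypothetical first-user-side client for the live broadcast method of example 1

    def __init__(self) -> None:
        self.target_region: Optional[str] = None
        self.streaming = False

    def display_first_virtual_space(self) -> None:
        # step 1: show the first virtual space containing the first user's virtual object
        print("rendering first virtual space with the first virtual object")

    def on_region_selection(self, region_id: str) -> None:
        # step 2: resolve the region selection operation to a target region
        self.target_region = region_id
        print(f"target region set to {region_id}")

    def on_live_operation(self) -> None:
        # step 3: live-broadcast the spatial picture inside the target region
        if self.target_region is None:
            raise RuntimeError("a target region must be selected before going live")
        self.streaming = True
        print(f"broadcasting spatial picture of {self.target_region}")

client = LiveClient()
client.display_first_virtual_space()
client.on_region_selection("stage_area")
client.on_live_operation()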
According to one or more embodiments of the present disclosure, example 2 is the method according to example 1, the determining, in response to a region selection operation for the first virtual space, a target region in the first virtual space corresponding to the region selection operation, including:
in response to a first region selection operation, taking the region currently located within a selection frame in the first virtual space as the target region, wherein the selection frame corresponds to the first region selection operation; or
in response to a second region selection operation for a target virtual object in the first virtual space, taking the region where the target virtual object is located in the first virtual space as the target region; or
in response to a third region selection operation acting within a region list, taking the region corresponding to the third region selection operation as the target region, wherein the region list includes at least one candidate region.
According to one or more embodiments of the present disclosure, example 3 is the method according to example 1, further including, after the determining of the target region corresponding to the region selection operation in the first virtual space:
adjusting a live view angle of the target area in response to a view angle adjustment operation for the target area;
the live broadcasting of the spatial image in the target area comprises the following steps:
live-broadcasting the spatial picture of the target area with the adjusted live view angle; or
updating the live view angle of the spatial picture.
According to one or more embodiments of the present disclosure, example 4 is the method according to example 1, further including, before the live broadcasting of the spatial picture in the target area in response to the live broadcast operation for the target area:
in response to a live broadcast mode setting operation, determining a target live broadcast mode to be used for the live broadcast;
the live broadcasting of the spatial picture in the target area includes:
live-broadcasting the spatial picture in the target area in the target live broadcast mode, where the target live broadcast mode includes a device live broadcast mode and/or an information live broadcast mode; in the device live broadcast mode the spatial picture is displayed through a target virtual display device, and in the information live broadcast mode the spatial picture is displayed in the form of live interaction information published by the first user.
According to one or more embodiments of the present disclosure, example 5 is the method according to example 4, wherein the live broadcasting the spatial frame in the target area by using the target live broadcasting mode includes at least one of:
controlling a target virtual display device to display a space picture in the target area;
generating live broadcast interaction information of the first user, and issuing the live broadcast interaction information, wherein the live broadcast interaction information displays a space picture in the target area.
According to one or more embodiments of the present disclosure, example 6 is the method of example 4, the target virtual display device being located within the first virtual space and/or a virtual space other than the first virtual space.
According to one or more embodiments of the present disclosure, example 7 is the method according to example 4, wherein the target live broadcast mode includes a device live broadcast mode, and before the live broadcasting the spatial frame in the target area in the target live broadcast mode, further including:
taking at least part of the candidate virtual display devices as target virtual display devices; or
displaying a device list, and, in response to a device selection operation acting within the device list, taking the candidate virtual display device corresponding to the device selection operation as the target virtual display device, where the device list is a list of candidate virtual display devices.
According to one or more embodiments of the present disclosure, example 8 is the method of example 7, wherein the candidate virtual display device corresponding to the device selection operation includes a candidate virtual display device located in a second virtual space, and further including, before the candidate virtual display device corresponding to the device selection operation is the target virtual display device:
sending a live broadcast request to the second virtual space, where the live broadcast request is used for requesting to display a live picture through the candidate virtual display device in the second virtual space;
the taking of the candidate virtual display device corresponding to the device selection operation as the target virtual display device includes:
in the case that forward feedback information for the live broadcast request is received, taking the candidate virtual display device corresponding to the forward feedback information as the target virtual display device.
According to one or more embodiments of the present disclosure, example 9 further includes, after the determining the target region corresponding to the region selection operation in the first virtual space, according to the method of example 1:
responding to an event selection operation, and taking a target event corresponding to the event selection operation in the target area as a live event;
the live broadcasting of the spatial image in the target area comprises the following steps:
and live broadcasting is carried out on the event picture of the live broadcasting event in the target area.
According to one or more embodiments of the present disclosure, example 10 is the method according to any one of examples 1 to 9, further including, after the live broadcasting of the spatial picture in the target area:
ending the live broadcast of the spatial picture when a preset live broadcast ending condition is met, where the preset live broadcast ending condition includes: an end-live operation for the spatial picture being received, and/or the live event in the target area having ended.
According to one or more embodiments of the present disclosure, example 11 further includes, after the determining the target region in the first virtual space corresponding to the region selection operation, according to the method of any one of examples 1 to 9:
generating media content based on the spatial picture in the target area, and publishing the media content.
Example 12 provides a live interaction method, according to one or more embodiments of the present disclosure, applied to a second user side, including:
displaying a live broadcast picture of a first user, wherein the live broadcast picture is a space picture in a target area, and the target area is positioned in a first virtual space;
and in response to a position switching operation, controlling a second virtual object corresponding to the second user to enter the first virtual space (a sketch of this flow is given below).
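Similarly, the two steps of example 12 on the second user side can be sketched as follows; the ViewerClient class, its method names, and the space identifiers are hypothetical.

from typing import Optional

class ViewerClient:
    # hypothetical second-user-side client for the live interaction method of example 12

    def __init__(self, current_space: str) -> None:
        self.current_space = current_space

    def display_live_picture(self, first_user: str, target_region: str) -> None:
        # step 1: show the first user's live picture, i.e. the spatial picture of the target region
        print(f"showing {first_user}'s live picture of {target_region}")

    def on_position_switch(self, first_space: str, target_region: Optional[str] = None) -> None:
        # step 2: move the second virtual object into the first virtual space,
        # either into a preset area or directly into the target region
        destination = target_region if target_region is not None else f"{first_space}/preset_area"
        self.current_space = destination
        print(f"second virtual object moved to {destination}")

viewer = ViewerClient(current_space="home_space")
viewer.display_live_picture(first_user="user_a", target_region="stage_area")
viewer.on_position_switch(first_space="first_space", target_region="first_space/stage_area")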
According to one or more embodiments of the present disclosure, example 13 is the method of example 12, the controlling a second virtual object corresponding to a second user to enter the first virtual space, comprising:
controlling a second virtual object corresponding to a second user to enter a preset area of the first virtual space; or
controlling a second virtual object corresponding to a second user to enter the target area of the first virtual space.
According to one or more embodiments of the present disclosure, example 14 is the method of example 12, the displaying a live view of the first user, comprising:
displaying the live picture of the first user through a target virtual display device within the visual field of the second virtual object; or
displaying live interaction information published by the first user, where the live picture of the first user is displayed in the live interaction information.
According to one or more embodiments of the present disclosure, example 15 is the method of example 12, further comprising, after the displaying the live view of the first user:
responding to an element display operation aiming at the live broadcast picture, displaying an element list in the live broadcast picture, wherein the element list is a list of object elements to be tried out;
and in response to a trial operation for a target object element in the element list, updating object information of the second virtual object according to the target object element, wherein the object information comprises appearance information and/or clothing information.
In accordance with one or more embodiments of the present disclosure, example 16 provides a live device configured on a first user side, comprising:
the display module is used for displaying a first virtual space, wherein the first virtual space comprises a first virtual object corresponding to a first user;
a determining module, configured to determine, in response to a region selection operation for the first virtual space, a target region in the first virtual space corresponding to the region selection operation;
And the live broadcasting module is used for responding to the live broadcasting operation aiming at the target area and broadcasting the space picture in the target area.
Example 17 provides a live interaction device configured on a second user side, according to one or more embodiments of the present disclosure, including:
the display module is used for displaying a live broadcast picture of a first user, wherein the live broadcast picture is a space picture in a target area, and the target area is positioned in a first virtual space;
and the control module is used for responding to the position switching operation and controlling a second virtual object corresponding to the second user to enter the first virtual space.
Example 18 provides an electronic device, according to one or more embodiments of the present disclosure, comprising:
one or more processors;
a memory for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the live broadcast method of any of examples 1-11 or the live interaction method of any of examples 12-15.
According to one or more embodiments of the present disclosure, example 19 provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the live method of any of examples 1-11 or the live interaction method of any of examples 12-15.
The foregoing description is merely of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure involved herein is not limited to technical solutions formed by the specific combinations of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by substituting the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.

Claims (19)

1. A live broadcast method, applied to a first user side, the method comprising:
displaying a first virtual space, wherein the first virtual space comprises a first virtual object corresponding to a first user;
determining a target area corresponding to the area selection operation in the first virtual space in response to the area selection operation for the first virtual space;
and in response to a live broadcast operation for the target area, live-broadcasting the spatial picture in the target area.
2. The method of claim 1, wherein the determining, in response to a region selection operation for the first virtual space, a target region in the first virtual space corresponding to the region selection operation comprises:
in response to a first region selection operation, taking the region currently located within a selection frame in the first virtual space as the target region, wherein the selection frame corresponds to the first region selection operation; or
in response to a second region selection operation for a target virtual object in the first virtual space, taking the region where the target virtual object is located in the first virtual space as the target region; or
in response to a third region selection operation acting within a region list, taking the region corresponding to the third region selection operation as the target region, wherein the region list comprises at least one candidate region.
3. The method of claim 1, further comprising, after the determining the target region in the first virtual space corresponding to the region selection operation:
adjusting a live view angle of the target area in response to a view angle adjustment operation for the target area;
the live broadcasting of the spatial image in the target area comprises the following steps:
live-broadcasting the spatial picture of the target area with the adjusted live view angle; or
updating the live view angle of the spatial picture.
4. The method of claim 1, further comprising, prior to said live broadcasting spatial pictures within said target region in response to a live operation for said target region:
in response to a live broadcast mode setting operation, determining a target live broadcast mode to be used for the live broadcast;
the live broadcasting of the spatial image in the target area comprises the following steps:
live-broadcasting the spatial picture in the target area in the target live broadcast mode, wherein the target live broadcast mode comprises a device live broadcast mode and/or an information live broadcast mode; in the device live broadcast mode the spatial picture is displayed through a target virtual display device, and in the information live broadcast mode the spatial picture is displayed in the form of live interaction information published by the first user.
5. The method of claim 4, wherein the live broadcasting of the spatial picture in the target area in the target live broadcast mode includes at least one of:
controlling a target virtual display device to display a space picture in the target area;
generating live broadcast interaction information of the first user, and issuing the live broadcast interaction information, wherein the live broadcast interaction information displays a space picture in the target area.
6. The method of claim 4, wherein the target virtual display device is located within the first virtual space and/or a virtual space other than the first virtual space.
7. The method of claim 4, wherein the target live mode comprises a device live mode, and further comprising, prior to said live broadcasting spatial pictures in the target area using the target live mode:
taking at least part of the candidate virtual display devices as target virtual display devices; or
displaying a device list, and, in response to a device selection operation acting within the device list, taking the candidate virtual display device corresponding to the device selection operation as the target virtual display device, wherein the device list is a list of candidate virtual display devices.
8. The method of claim 7, wherein the candidate virtual display device corresponding to the device selection operation comprises a candidate virtual display device located in a second virtual space, and further comprising, prior to the taking the candidate virtual display device corresponding to the device selection operation as a target virtual display device:
sending a live broadcast request to the second virtual space, wherein the live broadcast request is used for requesting to display a live picture through the candidate virtual display device in the second virtual space;
the taking of the candidate virtual display device corresponding to the device selection operation as the target virtual display device comprises:
in the case that forward feedback information for the live broadcast request is received, taking the candidate virtual display device corresponding to the forward feedback information as the target virtual display device.
9. The method of claim 1, further comprising, after the determining the target region in the first virtual space corresponding to the region selection operation:
responding to an event selection operation, and taking a target event corresponding to the event selection operation in the target area as a live event;
the live broadcasting of the spatial image in the target area comprises the following steps:
and live broadcasting is carried out on the event picture of the live broadcasting event in the target area.
10. The method according to any one of claims 1-9, further comprising, after said live broadcasting of spatial pictures within said target area:
ending the live broadcast of the spatial picture when a preset live broadcast ending condition is met, wherein the preset live broadcast ending condition comprises: an end-live operation for the spatial picture being received, and/or the live event in the target area having ended.
11. The method according to any one of claims 1-9, further comprising, after said determining a target region in said first virtual space corresponding to said region selection operation:
And generating media content based on the space picture in the target area, and publishing the media content.
12. A live interaction method, characterized in that it is applied to a second user side, the method comprising:
displaying a live broadcast picture of a first user, wherein the live broadcast picture is a space picture in a target area, and the target area is positioned in a first virtual space;
and responding to the position switching operation, and controlling a second virtual object corresponding to the second user to enter the first virtual space.
13. The method of claim 12, wherein controlling the second virtual object corresponding to the second user to enter the first virtual space comprises:
controlling a second virtual object corresponding to a second user to enter a preset area of the first virtual space; or
controlling a second virtual object corresponding to a second user to enter the target area of the first virtual space.
14. The method of claim 12, wherein displaying the live view of the first user comprises:
displaying the live picture of the first user through a target virtual display device within the visual field of the second virtual object; or
displaying live interaction information published by the first user, wherein the live picture of the first user is displayed in the live interaction information.
15. The method of claim 12, further comprising, after said displaying the first user's live view:
responding to an element display operation aiming at the live broadcast picture, displaying an element list in the live broadcast picture, wherein the element list is a list of object elements to be tried out;
and in response to a trial operation for a target object element in the element list, updating object information of the second virtual object according to the target object element, wherein the object information comprises appearance information and/or clothing information.
16. A live broadcast device, configured on a first user side, the device comprising:
the display module is used for displaying a first virtual space, wherein the first virtual space comprises a first virtual object corresponding to a first user;
a determining module, configured to determine, in response to a region selection operation for the first virtual space, a target region in the first virtual space corresponding to the region selection operation;
and the live broadcasting module is used for responding to the live broadcasting operation aiming at the target area and broadcasting the space picture in the target area.
17. A live interaction device, configured on a second user side, the device comprising:
The display module is used for displaying a live broadcast picture of a first user, wherein the live broadcast picture is a space picture in a target area, and the target area is positioned in a first virtual space;
and the control module is used for responding to the position switching operation and controlling a second virtual object corresponding to the second user to enter the first virtual space.
18. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the live broadcast method of any one of claims 1-11 or the live interaction method of any one of claims 12-15.
19. A computer readable storage medium storing computer instructions for causing a processor to implement the live broadcast method of any one of claims 1-11 or the live interaction method of any one of claims 12-15 when executed.
CN202311773982.3A 2023-12-21 2023-12-21 Live broadcast method, live broadcast interaction method, device, electronic equipment and medium Pending CN117729347A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311773982.3A CN117729347A (en) 2023-12-21 2023-12-21 Live broadcast method, live broadcast interaction method, device, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311773982.3A CN117729347A (en) 2023-12-21 2023-12-21 Live broadcast method, live broadcast interaction method, device, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN117729347A true CN117729347A (en) 2024-03-19

Family

ID=90201365

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311773982.3A Pending CN117729347A (en) 2023-12-21 2023-12-21 Live broadcast method, live broadcast interaction method, device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN117729347A (en)

Similar Documents

Publication Publication Date Title
JP7443621B2 (en) Video interaction methods, devices, electronic devices and storage media
CN113225483B (en) Image fusion method and device, electronic equipment and storage medium
CN114727146B (en) Information processing method, device, equipment and storage medium
JP2023528958A (en) Video Composite Method, Apparatus, Electronics and Computer Readable Medium
CN115278275B (en) Information display method, apparatus, device, storage medium, and program product
WO2021018186A1 (en) Video update push method and terminal
US20240028189A1 (en) Interaction method and apparatus, electronic device and computer readable medium
CN114679628B (en) Bullet screen adding method and device, electronic equipment and storage medium
CN114390308A (en) Interface display method, device, equipment, medium and product in live broadcast process
CN114390360B (en) Live voting method and device, electronic equipment and storage medium
CN115617436A (en) Content display method, device, equipment and storage medium
CN109635131B (en) Multimedia content list display method, pushing method, device and storage medium
CN111246245B (en) Method and device for pushing video aggregation page, server and terminal equipment
CN111147885B (en) Live broadcast room interaction method and device, readable medium and electronic equipment
CN115396716B (en) Live video processing method, device, equipment and medium
CN113965768B (en) Live broadcasting room information display method and device, electronic equipment and server
CN115529485A (en) Live video processing method, device, equipment and medium
CN111246242A (en) Searching method and device based on played video, application server and terminal equipment
CN115174946A (en) Display method, device, equipment, storage medium and program product of live broadcast page
CN115086745A (en) Live video processing method, device, equipment and medium
CN110798743A (en) Video playing method and device and computer readable storage medium
CN117729347A (en) Live broadcast method, live broadcast interaction method, device, electronic equipment and medium
CN114489891A (en) Control method, system, device, readable medium and equipment of cloud application program
CN113794836B (en) Bullet time video generation method, device, system, equipment and medium
CN111294656B (en) Method and device for adjusting video playing and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination