WO2024108431A1 - Live interaction method, apparatus, device, storage medium and program product - Google Patents


Info

Publication number
WO2024108431A1
WO2024108431A1 (PCT/CN2022/133768, CN2022133768W)
Authority
WO
WIPO (PCT)
Prior art keywords
user
interactive
interactive shooting
shooting
image
Prior art date
Application number
PCT/CN2022/133768
Other languages
English (en)
French (fr)
Inventor
赵紫辰
饶红玉
颜远青
Original Assignee
广州酷狗计算机科技有限公司
广州繁星互娱信息科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州酷狗计算机科技有限公司 and 广州繁星互娱信息科技有限公司
Priority to PCT/CN2022/133768 (WO2024108431A1)
Priority to CN202280004685.XA (CN116076075A)
Publication of WO2024108431A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content

Definitions

  • the embodiments of the present application relate to the field of Internet technology, and in particular to a live interactive method, device, equipment, storage medium and program product.
  • users can interact online through various applications. For example, users can chat with each other through social applications, and can also interact online via video or voice through live broadcast applications.
  • the embodiments of the present application provide a live interactive method, device, equipment, storage medium and program product.
  • the technical solution is as follows:
  • a live broadcast interaction method comprising:
  • when the interactive shooting instruction of the second user is responded to by the first user, displaying the live broadcast screen of the first user during the interactive shooting process; wherein the interactive shooting instruction of the second user is used to request the first user to shoot an image;
  • the interactive shooting image obtained by the second user is displayed, where the interactive shooting image is obtained during the interactive shooting process.
  • a live broadcast interaction method comprising:
  • interactive shooting information generated based on the interactive shooting instruction of the second user is displayed; wherein the interactive shooting instruction of the second user is used to request the first user to shoot an image;
  • the interactive shooting image sent to the second user is displayed, where the interactive shooting image is obtained during the interactive shooting process.
  • a live interactive device comprising:
  • An interface display module used to display a live broadcast interface of a first user, wherein the live broadcast interface is used to display the live broadcast content of the first user;
  • a screen display module configured to display a live screen of the first user during the interactive shooting process when the interactive shooting instruction of the second user is responded to by the first user; wherein the interactive shooting instruction of the second user is used to request the first user to shoot an image;
  • the image display module is used to display the interactive shooting image obtained by the second user, wherein the interactive shooting image is obtained during the interactive shooting process.
  • a live interactive device comprising:
  • An information display module is used to display interactive shooting information generated based on an interactive shooting instruction of a second user during a live broadcast by the first user; wherein the interactive shooting instruction of the second user is used to request the first user to shoot an image;
  • a screen display module configured to display a live screen of the first user during the interactive shooting process in response to a response instruction to the interactive shooting instruction of the second user;
  • the image display module is used to display the interactive shooting image sent to the second user, wherein the interactive shooting image is obtained during the interactive shooting process.
  • a terminal device which includes a processor and a memory, wherein the memory stores a computer program, and the computer program is loaded and executed by the processor to implement the above-mentioned live broadcast interaction method on the viewer client side, or to implement the above-mentioned live broadcast interaction method on the anchor client side.
  • a computer-readable storage medium in which a computer program is stored.
  • the computer program is loaded and executed by a processor to implement the above-mentioned live broadcast interaction method on the viewer client side, or to implement the above-mentioned live broadcast interaction method on the anchor client side.
  • a computer program product comprising a computer program, the computer program being stored in a computer-readable storage medium.
  • a processor of a terminal device reads the computer program from the computer-readable storage medium, and the processor executes the computer program, so that the terminal device executes the live broadcast interaction method on the viewer client side, or implements the live broadcast interaction method on the host client side.
  • the present application provides a new live interactive method, in which an audience user initiates an interactive shooting command, and when the interactive shooting command is answered by the host user, the live broadcast screen of the host user during the interactive shooting process is displayed, and the interactive shooting image obtained by the audience user is displayed.
  • the audience user can initiate an interactive shooting command, and the host user shoots and gives the audience user an interactive shooting image. Because interactive shooting commands differ, the resulting interactive shooting images also differ; the interactive shooting images are therefore unknown and random in advance, which enriches the ways of live interaction and increases its fun.
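The flow summarized above (the audience initiates a command, the host answers it, shoots, and the image is delivered) can be sketched as a small state machine; the state and transition names below are illustrative, not taken from the application:

```python
from enum import Enum, auto

class ShootState(Enum):
    IDLE = auto()       # normal live broadcast
    REQUESTED = auto()  # audience user sent an interactive shooting instruction
    SHOOTING = auto()   # host accepted; live screen shows the shooting process
    DELIVERED = auto()  # interactive shooting image shown to the audience user

# Allowed transitions for the flow described above.
TRANSITIONS = {
    ShootState.IDLE: {ShootState.REQUESTED},
    ShootState.REQUESTED: {ShootState.SHOOTING, ShootState.IDLE},  # accept or refuse
    ShootState.SHOOTING: {ShootState.DELIVERED},
    ShootState.DELIVERED: {ShootState.IDLE},
}

def step(state: ShootState, nxt: ShootState) -> ShootState:
    """Advance the flow, rejecting transitions the method does not describe."""
    if nxt not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {nxt}")
    return nxt
```

The refusal path is modeled as `REQUESTED -> IDLE`, matching the case where the host does not answer the instruction.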
  • FIG1 is a schematic diagram of an implementation environment of a solution provided by an embodiment of the present application.
  • FIG2 is a flow chart of a live interactive method provided by an embodiment of the present application.
  • FIG3 is a flow chart of a live interactive method provided by another embodiment of the present application.
  • FIG4 is a schematic diagram of a viewer user interface provided by an embodiment of the present application.
  • FIG5 is a schematic diagram of a viewer user interface provided by another embodiment of the present application.
  • FIG6 is a flow chart of a live interactive method provided by another embodiment of the present application.
  • FIG7 is a schematic diagram of a viewer user interface provided by another embodiment of the present application.
  • FIG8 is a schematic diagram of a viewer user interface provided by another embodiment of the present application.
  • FIG9 is a schematic diagram of a viewer user interface provided by another embodiment of the present application.
  • FIG10 is a schematic diagram of a viewer user interface provided by another embodiment of the present application.
  • FIG11 is a flow chart of a live interactive method provided by another embodiment of the present application.
  • FIG12 is a schematic diagram of a viewer user interface provided by another embodiment of the present application.
  • FIG13 is a flow chart of a live interactive method provided by another embodiment of the present application.
  • FIG14 is a flow chart of a live interactive method provided by another embodiment of the present application.
  • FIG15 is a schematic diagram of an anchor user interface provided by an embodiment of the present application.
  • FIG16 is a schematic diagram of an anchor user interface provided by another embodiment of the present application.
  • FIG17 is a schematic diagram of an anchor user interface provided by another embodiment of the present application.
  • FIG18 is a schematic diagram of an anchor user interface provided by another embodiment of the present application.
  • FIG19 is a schematic diagram of an anchor user interface provided by another embodiment of the present application.
  • FIG20 is a schematic diagram of an anchor user interface provided by another embodiment of the present application.
  • FIG21 is a schematic diagram of an anchor user interface provided by another embodiment of the present application.
  • FIG22 is a block diagram of a live interactive method provided by an embodiment of the present application.
  • FIG23 is a block diagram of a live interactive device provided by an embodiment of the present application.
  • FIG24 is a block diagram of a live interactive device provided by another embodiment of the present application.
  • FIG25 is a block diagram of a live interactive device provided by another embodiment of the present application.
  • FIG26 is a block diagram of a live interactive device provided by another embodiment of the present application.
  • FIG27 is a structural block diagram of a terminal device provided by an embodiment of the present application.
  • FIG1 shows a schematic diagram of a solution implementation environment provided by an embodiment of the present application.
  • the solution implementation environment may include: an audience terminal device 11, a server 12, and an anchor terminal device 13.
  • the audience terminal device 11 and the anchor terminal device 13 can be electronic devices such as mobile phones, tablet computers, PCs (Personal Computers), wearable devices, VR (Virtual Reality) devices, AR (Augmented Reality) devices, vehicle-mounted devices, etc., and this application does not limit this.
  • the audience terminal device 11 and the anchor terminal device 13 can be installed with a client running a target application.
  • the target application can be a live video application, a music playback application, a social application, an interactive entertainment application, etc., and this application does not limit this.
  • the audience terminal device 11 is used to initiate interactive tasks
  • the anchor terminal device 13 is used to receive interactive tasks.
  • the server 12 can be a single server, or a server cluster composed of multiple servers, or a cloud computing service center.
  • the server 12 can be the background server of the above-mentioned target application, used to provide background services for the client of the target application.
  • the above-mentioned terminal devices can communicate with the server 12 through the network.
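As a rough sketch of the roles above, the server 12 can be modeled as a relay that delivers messages from an audience terminal device to the anchor terminal device of a live room; the class, method, and field names here are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class RelayServer:
    """Stand-in for server 12: forwards audience messages to anchor clients."""
    anchors: dict = field(default_factory=dict)  # room_id -> anchor client inbox

    def register_anchor(self, room_id: str, inbox: list) -> None:
        """Called when an anchor terminal device goes live in a room."""
        self.anchors[room_id] = inbox

    def forward(self, room_id: str, message: dict) -> bool:
        """Deliver an audience message to the anchor terminal of the room."""
        inbox = self.anchors.get(room_id)
        if inbox is None:
            return False  # no anchor registered for this room
        inbox.append(message)
        return True
```

In practice the transport would be a network protocol; plain lists stand in for client message queues here.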
  • the client logged in on the audience terminal device 11 can be called the audience client, and the user corresponding to the audience client is the second user or the third user, wherein the second user is the audience user who initiates the interactive shooting instruction, and the third user is an audience user who does not initiate the interactive shooting instruction;
  • the client logged in on the anchor terminal device 13 can be called the anchor client, and the user corresponding to the anchor client is the first user (or anchor user).
  • the client of the target application (such as a live video application) installed and running in the anchor terminal device can be called the anchor client, and the anchor client has the function of live video broadcasting;
  • the client of the target application installed and running in the audience terminal device can be called the audience client, and the audience client has the function of watching live video broadcasting.
  • the anchor client and the audience client can be two different versions of the client of the target application, and these two different versions of the client are respectively oriented to the anchor and the audience, that is, the version oriented to the anchor has the function of realizing the above-mentioned anchor client, and the version oriented to the audience has the function of realizing the above-mentioned audience client; or, it can also be the same version of the client of the target application, and the client of this version has both the function of realizing the above-mentioned anchor client and the function of the audience client.
  • the audience client can not only watch live video, but also broadcast live video.
  • the anchor client can not only broadcast live video, but also watch the live video of other anchors. This application does not limit this.
  • the technical solutions provided by the embodiments of the present application can also be applied to other scenarios, such as social applications, instant messaging applications, office applications and the like, in which the terminal devices used by the two parties to a video session can execute the live broadcast interaction method provided by the embodiments of the present application.
  • the technical solutions provided by the embodiments of the present application can also be used in video-related scenarios such as video conferencing and multi-person online video. Therefore, the present application does not limit the application scenarios of the method.
  • the following embodiments mainly take live broadcast applications as an example for exemplary and explanatory introduction and description.
  • FIG 2 shows a flow chart of a live interactive method provided by an embodiment of the present application.
  • the execution subject of each step of the method can be the audience terminal device 11 in the solution implementation environment shown in FIG1; for example, the execution subject of each step can be the audience client.
  • for ease of description, the execution subject of each step is introduced below as the "audience client".
  • the method can include at least one of the following steps (210-230):
  • Step 210 display the live broadcast interface of the first user, where the live broadcast interface is used to display the live broadcast content of the first user.
  • Live broadcast interface includes at least one of a live broadcast screen and live broadcast controls.
  • the live broadcast screen is a video screen captured by the camera of the terminal device (which can be considered as the anchor terminal device) of the person initiating the live broadcast.
  • the live broadcast control is a control on a layer above the live broadcast screen.
  • the live broadcast control is used by the user to operate the live broadcast interface.
  • the live broadcast controls include but are not limited to return controls, gift controls, follow-up anchor controls, etc.
  • in response to the audience user's command to fold the controls, all live broadcast controls can be folded in the live broadcast interface, and only the live broadcast screen is displayed.
  • the live broadcast interface may also only display the live broadcast controls.
  • the live broadcast interface is used to display the live broadcast content of the first user, and the live broadcast content includes but is not limited to the first user himself, what the first user is doing, the environment in which the first user is located, and the game interface that the first user is operating.
  • the live broadcast content is the first user himself, and the camera of the anchor terminal device is used to collect the video screen of the first user.
  • the live broadcast content is what the first user is doing, for example, the first user uses the camera of the anchor terminal device to collect the video screen of the first user cooking.
  • the live broadcast content is the environment in which the first user is located, and the camera of the anchor terminal is used to collect the video screen of the indoor environment or outdoor environment in which the first user is located.
  • the live broadcast content is the game interface that the first user is operating, and the game interface of the anchor terminal device is displayed as the live broadcast content on the live broadcast interface through screen recording or other methods.
  • Step 220 when the interactive shooting instruction of the second user is responded to by the first user, display the live screen of the first user during the interactive shooting process; wherein the interactive shooting instruction of the second user is used to request the first user to shoot an image.
  • Interactive shooting instruction: an instruction initiated by the second user for interactive shooting with the first user.
  • the interactive shooting instruction is generated in response to the second user's instruction to use the interactive shooting props.
  • the interactive shooting props include but are not limited to virtual gifts, virtual coupons, virtual cards, etc., and the present application does not limit the specific types of interactive shooting props and the value of the corresponding virtual resources.
  • after the interactive shooting props are used, the interactive shooting instruction is generated.
  • the interactive shooting instruction is generated in response to the second user's trigger instruction for the interactive shooting control.
  • in some embodiments, an interactive shooting control is displayed on the live broadcast interface, and the interactive shooting control is a control for the second user to initiate an interactive shooting instruction.
  • in other embodiments, the interactive shooting control is not displayed on the live broadcast interface; instead, in response to the second user's trigger instruction on an initiate-interactive-task control on the live broadcast interface, an interactive operation interface is displayed, and the interactive shooting control is displayed on the interactive operation interface.
  • the interactive shooting instruction is generated when the first user and the second user communicate, triggered by keywords in the communication.
  • the first user and the second user communicate by connecting to a microphone.
  • the audience terminal device recognizes the keyword in the voice and initiates an interactive shooting instruction.
  • the keyword can be customized by the second user or the first user, or it can be pre-set by the program or updated by the server.
  • the second user communicates with the first user by sending text on the public screen of the live broadcast interface.
  • when a keyword is recognized in the text sent by the second user, it is considered that the second user has initiated an interactive shooting instruction.
  • the embodiment of the present application does not limit the specific content of the keyword.
  • for example, "make a face", "make a smiley face" and other such contents can serve as keywords.
  • a keyword recognition model is set in the live broadcast program corresponding to the terminal device, and the keyword recognition model is used to recognize the keywords in the above-mentioned voice or text.
  • a keyword recognition model is set on the server; the audience terminal device sends the text or the spoken voice information from the second user to the server, the keyword recognition model on the server performs recognition, and the recognition result is fed back to the audience terminal device.
  • the recognition result is displayed in the form of text on the live broadcast interface - "Keywords detected, interactive shooting instructions sent to the anchor".
  • the keyword recognition model is a pre-trained neural network model or other algorithm model that can be used to detect keywords.
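The application describes a pre-trained keyword recognition model for this step; as a minimal stand-in, a simple substring matcher over a hypothetical keyword list illustrates how a recognized keyword turns a message into an interactive shooting instruction:

```python
from typing import Optional

# Hypothetical keyword set; per the description, keywords may be customized
# by users, preset by the program, or updated by the server.
KEYWORDS = ("make a face", "make a smiley face", "take a photo")

def detect_keyword(text: str) -> Optional[str]:
    """Return the first keyword found in a public-screen message, else None.
    A rule-based stand-in for the trained keyword recognition model."""
    lowered = text.lower()
    for kw in KEYWORDS:
        if kw in lowered:
            return kw
    return None

def maybe_make_instruction(text: str, viewer_id: str) -> Optional[dict]:
    """Build an interactive shooting instruction when a keyword is detected."""
    kw = detect_keyword(text)
    if kw is None:
        return None
    return {"type": "interactive_shooting", "from": viewer_id, "keyword": kw}
```

A real deployment would run recognition on the terminal or the server as described above; the message format here is purely illustrative.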
  • the interactive shooting instruction is initiated by the second user; that is, the audience terminal device initiates the interactive shooting instruction, which can be sent to the server and then forwarded by the server to the anchor terminal device.
  • alternatively, the audience terminal device initiates the interactive shooting instruction and sends it directly to the anchor terminal device. This application does not limit the sending process of the interactive shooting instruction.
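The two delivery paths above (via the server, or directly to the anchor terminal) can be sketched as follows; plain lists stand in for the server's relay queue and the anchor terminal's inbox, and the function name is illustrative:

```python
def send_instruction(instruction: dict, server=None, anchor_inbox=None) -> str:
    """Send an interactive shooting instruction either via the server or
    directly to the anchor terminal, mirroring the two options described."""
    if server is not None:
        server.append(instruction)        # server later relays to the anchor
        return "via_server"
    if anchor_inbox is not None:
        anchor_inbox.append(instruction)  # direct terminal-to-terminal path
        return "direct"
    raise ValueError("no delivery path available")
```

Which path is used is a deployment choice; the application explicitly does not limit it.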
  • the interactive shooting instruction is used to request the first user to shoot an image.
  • the image shot based on the interactive shooting instruction is targeted at the first user. That is, the interactive shooting instruction is used to request the first user to shoot an image about the first user.
  • the second user wants the first user to take a photo of the first user, and attaches relevant requirement information. For example, the second user wants the first user to shoot an image of the first user smiling, or an image of the first user looking sad.
  • the image shot based on the interactive shooting instruction is not targeted at the first user. That is, the interactive shooting instruction is used for the first user to shoot images of other objects other than the first user.
  • for example, the second user wants the first user to shoot images about cooking;
  • or the second user wants the first user to shoot images about insects.
  • the shot image can be targeted at objects existing in nature, including but not limited to insects, plants, animals, and the like.
  • the present application does not limit the type of the image, which may be a photo or a video.
  • the image taken is a photo
  • the image taken is a video
  • the keywords in the above embodiments may also be "Please ask the anchor to shoot a 5-second video about sadness"; when such a keyword is recognized, an interactive shooting instruction is initiated.
  • the interactive shooting instruction of the second user is used to request the first user to shoot an image and send the captured image to the second user.
  • the live broadcast screen during the interactive shooting process includes the live broadcast screen from the start of the interactive shooting to the end of the interactive shooting.
  • the interactive shooting instruction is used to request the first user to take an image of the first user, and the requirement corresponding to the interactive shooting instruction is "the anchor puts on a happy expression". Then, during the interactive shooting process, the live broadcast screen shows the anchor user from preparing to shoot to putting on a happy expression; after the interactive shooting ends, the normal live broadcast screen is displayed.
  • the anchor terminal device receives the interactive shooting instruction, which can be displayed on the live broadcast interface of the first user, and the first user can choose to accept the task or not.
  • in response to the first user's acceptance instruction for the interactive shooting instruction, it is considered that the second user's interactive shooting instruction is answered by the first user, wherein the acceptance instruction can be generated in response to a trigger operation or the like.
  • when the first user chooses to accept the task, it is considered that the second user's interactive shooting instruction is answered by the first user; the first user can be given a corresponding preparation time, and after the preparation time elapses, image shooting is started. In other embodiments, when the first user chooses to refuse the task, it is considered that the second user's interactive shooting instruction is not answered by the first user, and image shooting is not started. In some embodiments, when the first user does not answer the second user's interactive shooting instruction, the first user is given a corresponding virtual penalty, for example, a reduction of the first user's virtual charm value, virtual heat value, etc.
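A sketch of the accept/refuse handling above; the preparation time, penalty amounts, and stat field names (`charm`, `heat`) are illustrative assumptions, not values from the application:

```python
def handle_response(anchor_stats: dict, accepted: bool, prep_seconds: int = 5) -> dict:
    """Handle the anchor's answer to an interactive shooting instruction.
    `anchor_stats` holds the anchor's virtual stats (names are hypothetical)."""
    if accepted:
        # Shooting starts after the anchor's preparation time elapses.
        return {"action": "start_shooting", "after_seconds": prep_seconds}
    # An unanswered instruction incurs a virtual penalty (illustrative amounts).
    anchor_stats["charm"] = max(0, anchor_stats.get("charm", 0) - 10)
    anchor_stats["heat"] = max(0, anchor_stats.get("heat", 0) - 5)
    return {"action": "no_shooting"}
```

The `max(0, ...)` clamp assumes virtual stats cannot go negative, which the application does not specify either way.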
  • the methods for acquiring the captured image in the embodiments of the present application include but are not limited to the following two.
  • the image is captured by the camera of the anchor terminal device of the first user.
  • the captured image is obtained by capturing a picture of a specific area on the live screen that the first user is broadcasting.
  • in some embodiments, a shooting position is determined, where the shooting position can be the position at which the anchor terminal device captures the image, or the position of the picture captured from the first user's live screen as the captured image.
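The second acquisition method, capturing a specific area of the live screen, amounts to cropping a rectangular region out of a frame. A minimal sketch with a frame represented as a list of pixel rows (the function name and parameters are illustrative):

```python
def crop_region(frame, top: int, left: int, height: int, width: int):
    """Capture a specific area of a live frame: `frame` is a list of rows of
    pixels; the returned sub-frame is the region starting at (top, left)."""
    return [row[left:left + width] for row in frame[top:top + height]]
```

A production client would crop a decoded video frame (e.g. a pixel buffer) the same way, just with a real image type.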
  • Step 230 displaying the interactive shooting image obtained by the second user, where the interactive shooting image is obtained during the interactive shooting process.
  • the interactive shooting image includes the above-mentioned image obtained by shooting.
  • the interactive shooting image may also include but is not limited to the name (or nickname, logo) of the anchor, the name (or nickname, logo) of the audience user who initiated the interactive shooting instruction, the time of image shooting, the location of image shooting, etc.
  • the interactive shooting image corresponds to multiple styles.
  • the above-mentioned captured image is displayed in the middle area of the interactive shooting image; for example, the captured image is of the anchor user.
  • the requirement information corresponding to the interactive shooting instruction is displayed above the interactive shooting area, for example, the requirement information is "the anchor puts on a happy expression".
  • the interactive information of the image is displayed below the interactive shooting area, for example, the interactive information is "B user gives A user", where A user is the audience user and B user is the anchor user.
  • the anchor user can also add text information or content information to the interactive shooting image by himself, for example, the anchor user adds the text information "I wish you happiness every day" to the interactive shooting image.
  • the text information here can be added by the anchor user by himself, or it can be set by the program, but it needs to be manually added to the interactive shooting image by the anchor user.
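The card layout described above (requirement information on top, captured image in the middle, interaction information below, plus optional anchor-added text) might be assembled like this; all field names and the "gives" wording follow the example in the text but are otherwise illustrative:

```python
def compose_card(photo_id: str, anchor: str, viewer: str, requirement: str,
                 extra_text: str = "") -> dict:
    """Assemble the interactive shooting image layout described above."""
    card = {
        "top": requirement,                   # e.g. "the anchor puts on a happy expression"
        "middle": photo_id,                   # the captured image
        "bottom": f"{anchor} gives {viewer}", # interaction information
    }
    if extra_text:                            # optional anchor-added text
        card["extra"] = extra_text
    return card
```

Rendering the card into actual pixels (fonts, styles, the multiple styles mentioned above) is left out; this only captures the layout structure.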
  • the interactive shooting image is only displayed on the live broadcast interface of the viewer user who initiated the interactive shooting command, while it is not displayed on the live broadcast interfaces of other viewer users who did not initiate the command. In this way, the privacy of the viewer user who initiated the interactive shooting command is protected, and the interests of viewer users are better safeguarded.
  • the interactive shooting image is displayed on the live broadcast interface of all viewer users, and the user experience of the viewer user who initiated the interactive shooting command is improved by public display, and the live broadcast interaction method can also be enriched.
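The two visibility policies above can be expressed as a small recipient filter; this is a sketch, not the application's actual distribution logic:

```python
def recipients(all_viewers, initiator, public: bool):
    """Decide who sees the interactive shooting image: everyone when it is
    displayed publicly, otherwise only the viewer who initiated the command."""
    if public:
        return list(all_viewers)
    return [v for v in all_viewers if v == initiator]
```

The private path implements the privacy-preserving embodiment; the public path implements the publicly displayed embodiment.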
  • the present application provides a new live interactive method, in which an audience user initiates an interactive shooting command, and when the interactive shooting command is answered by the host user, the live broadcast screen of the host user during the interactive shooting process is displayed, and the interactive shooting image obtained by the audience user is displayed.
  • the audience user can initiate an interactive shooting command, and the host user shoots and gives the audience user an interactive shooting image. Because interactive shooting commands differ, the resulting interactive shooting images also differ; the interactive shooting images are therefore unknown and random in advance, which enriches the ways of live interaction and increases its fun.
  • FIG 3 shows a flow chart of a live interactive method provided by another embodiment of the present application.
  • the execution subject of each step of the method can be the audience terminal device 11 in the solution implementation environment shown in FIG1; for example, the execution subject of each step can be the audience client.
  • for ease of description, the execution subject of each step is introduced below as the "audience client".
  • the method can include at least one of the following steps (210-230):
  • Step 210 display the live broadcast interface of the first user, where the live broadcast interface is used to display the live broadcast content of the first user.
  • Step 222 display a first prompt message, where the first prompt message is used to guide the first user to take a photo.
  • First prompt information: prompt information for guiding the anchor user to shoot.
  • the form of the prompt information includes but is not limited to text, voice, pattern, etc.
  • the first prompt information is displayed on a mask above the live screen; optionally, the prompt information on the mask cannot be operated or controlled by the audience user or the anchor user.
  • the mask corresponds to first opacity information, and the first opacity information represents the opacity of the mask.
  • when shooting starts, a mask is set above the layer where the live screen is located, and the first prompt information is displayed on the mask.
  • the mask is set below the layer where the control is located. When the audience user operates the control, it does not affect the first prompt information in the mask.
  • by setting a mask and displaying the first prompt information on it, the technical solution provided in the embodiment of the present application can prevent the audience user from accidentally touching the first prompt information; the prompt information on the mask is combined with the live screen and displayed in the live interface, so that the displayed live effect better matches the "shooting" scene, improving the experience of both the audience user and the anchor user.
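The layering described above (live screen at the bottom, the prompt mask with its first opacity information in the middle, operable controls on top) can be sketched as an ordered layer stack; the layer names and dictionary keys are illustrative:

```python
def build_layers(mask_opacity: float):
    """Return the render order (bottom-to-top) described in the text.
    `mask_opacity` is the mask's 'first opacity information' in [0, 1]."""
    assert 0.0 <= mask_opacity <= 1.0, "opacity must be in [0, 1]"
    return [
        {"name": "live_screen", "opacity": 1.0},
        # The mask is not interactive: viewers cannot touch the prompt.
        {"name": "prompt_mask", "opacity": mask_opacity, "interactive": False},
        # Controls sit above the mask, so operating them does not affect it.
        {"name": "controls", "opacity": 1.0, "interactive": True},
    ]
```

A compositor would draw these in list order, which is exactly the arrangement that keeps the prompt visible but untouchable.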
  • the first prompt information includes: first object information, the first object information is used to indicate the object of the image taken by the first user.
  • the first object information is a host
  • the host user needs to take the image of the host, that is, the host can just take a selfie.
  • the host user needs to take the image of the pet, that is, the host needs to take a photo of the pet.
  • the first prompt information includes: first requirement information, and the first requirement information is used to indicate the requirements that the image taken by the first user needs to meet.
  • the first requirement information includes but is not limited to theme information, style information, posture information, expression information, etc.
  • the first requirement information includes theme information, and optionally, the theme information includes but is not limited to daily themes, ancient style themes, comic themes, etc.
  • style information includes but is not limited to hip-hop style, jazz style, student style, etc.
  • posture information includes but is not limited to raising hands, looking up, kissing, etc.
  • expression information includes but is not limited to happy expressions, sad expressions, tearful expressions, regretful expressions, etc.
  • the first object is an anchor
  • the first requirement information requires the anchor's dressing style to be a student style. After giving the anchor a period of preparation time, the anchor needs to show a student style of dressing, and shoot to obtain an interactive shooting image.
  • the first object is an anchor
  • the first requirement information requires the anchor's expression to be sad. After giving the anchor a period of preparation time, the anchor needs to show a sad expression, and shoot to obtain an interactive shooting image.
  • the first object is the host's pet
  • the first requirement information is to require the pet to raise its hand. After giving the host and the pet a period of preparation time, the pet needs to raise its hand and be photographed to obtain an interactive shooting image.
  • the first prompt information includes: first location information, and the first location information is used to indicate the area where the image taken by the first user is located.
  • the shape of the area where the image taken by the first user is located includes but is not limited to a circle, a rectangle, and a sector. The present application does not limit the shape of the area where the image taken by the first user is located. The shape can be selected and determined by the first user or determined by the server.
  • the location information is displayed in the form of a frame, that is, the edge of the area where the image taken by the first user is located is highlighted in the form of a frame, and optionally, the frame is a circle, a rectangle or a sector.
  • the anchor can be intuitively informed of the area where the image taken is located.
  • the area where the image taken by the first user is located can also be distinguished by clarity or transparency, that is, the grayscale or transparency set for that area differs from that of the other areas on the mask.
  • the area where the image taken by the first user is located has higher clarity (or lower opacity), while the other areas of the mask have lower clarity, so as to highlight the shooting area.
  • the first requirement information is "happy".
  • the first prompt information includes: first time information, the first time information is used to indicate the preparation time before the first user starts to shoot the image or the duration of shooting the image.
  • the first time information is the preparation time, which is set by the program, and can also be manually adjusted by the audience user or the anchor user, and can also be extended or shortened based on the virtual props given to the anchor by other audience users.
  • the preparation time is displayed as a countdown. In some embodiments, the preparation time is 1 minute but the anchor only needs 20 seconds; after 20 seconds the anchor can manually choose to start shooting, the preparation time is cut short, and shooting starts directly. In other embodiments, the preparation time given to the anchor is not enough for the anchor to prepare.
  • the audience user can extend the anchor's preparation time by giving virtual props to the anchor user.
  • the anchor user can also give virtual props to shorten the anchor's preparation time.
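The preparation-time behavior above — a countdown that virtual props can extend or shorten, and that the anchor can cut short by starting early — can be sketched like this. The class and method names are illustrative assumptions; the patent does not specify an API.

```python
class PrepCountdown:
    """Preparation timer adjustable by virtual props (names are illustrative)."""

    def __init__(self, seconds):
        self.remaining = seconds
        self.started_shooting = False

    def gift_extend(self, seconds):
        # An audience user's virtual prop extends the anchor's preparation time.
        self.remaining += seconds

    def gift_shorten(self, seconds):
        # A virtual prop can likewise shorten the preparation time.
        self.remaining = max(0, self.remaining - seconds)

    def start_now(self):
        # The anchor is ready early and manually chooses to start shooting.
        self.remaining = 0
        self.started_shooting = True

timer = PrepCountdown(60)   # 1-minute preparation countdown
timer.gift_extend(30)       # a viewer's prop adds 30 s
timer.gift_shorten(70)      # remaining: 60 + 30 - 70 = 20
assert timer.remaining == 20
timer.start_now()           # anchor is ready after 20 s and skips the rest
assert timer.remaining == 0 and timer.started_shooting
```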
  • the first time information also includes the duration of the image shooting, that is, when the image is a video, the duration of the image shooting is displayed to give the anchor a prompt so that the anchor knows how long it will take to shoot. As shown in sub-picture b of Figure 4, a shooting countdown is displayed on the live broadcast interface.
  • the first prompt information includes: first quantity information, and the first quantity information is used to indicate the number of images taken by the first user.
  • multiple interactive shooting images of the first user are displayed.
  • multiple interactive shooting images of the first user are displayed, and the interactive instruction includes instructions for using multiple virtual shooting props, or includes shooting instructions for multiple interactive shooting images.
  • the shooting instructions for multiple interactive shooting images can be generated by extracting keywords from messages sent by the audience user. For example, if the audience user writes "Please take three pictures, host" on the public screen, an instruction for multiple interactive shooting images is generated based on that keyword, yielding the interactive shooting instruction.
  • multiple interactive shooting pictures can be obtained during an interactive shooting process of the anchor user, which can also be called “continuous shooting”.
  • multiple interactive shooting videos can be obtained during an interactive shooting process of the anchor user.
  • Step 224 displaying the live screen of the first user shooting according to the first prompt information during the interactive shooting process.
  • a mask layer and a live broadcast screen are displayed.
  • the mask layer displays first prompt information
  • the anchor user shows different postures, expressions or actions according to the first prompt information on the mask layer.
  • the live broadcast screen at this time is the live broadcast screen shot by the anchor user according to the first prompt information.
  • the first requirement information and the first preparation time information are displayed in the mask, wherein the first requirement information is displayed above the shooting area corresponding to the first position information, and the first preparation time information is displayed in a countdown manner.
  • the anchor user can adjust his or her own actions and expressions in real time according to the displayed first requirement information so that the displayed picture is a picture that meets the first requirement information.
  • when the countdown drops to 0, the captured image is used as the interactive shooting image.
  • not only the live screen when the first user shoots according to the first prompt information during the interactive shooting process is displayed, but also the live screen before shooting according to the first prompt information is displayed. For example, when preparing to shoot the countdown, the live screen is displayed.
  • Step 230 displaying the interactive shooting image obtained by the second user, where the interactive shooting image is obtained during the interactive shooting process.
  • the technical solution provided by the embodiment of the present application enables the anchor user to adjust the live broadcast screen according to the prompt information by displaying the first prompt information.
  • the live broadcast screen of the first user shooting according to the first prompt information during the interactive shooting process is displayed, that is, the live broadcast is uninterrupted, and the live broadcast screen when preparing to shoot is also displayed, so the live broadcast interaction is more transparent, shortening the distance between the anchor and the user, and enriching the form of live broadcast interaction.
  • the first prompt information also includes the first requirement information, which makes the live broadcast interesting while increasing the difficulty and the challenge of the live broadcast interaction.
  • FIG 6 shows a flow chart of a live interactive method provided by another embodiment of the present application.
  • the execution subject of each step of the method can be the audience terminal device 11 in the implementation environment of the solution shown in Figure 1, such as the execution subject of each step can be the audience client.
  • the execution subject of each step is introduced as the "audience client".
  • the method can include at least one of the following steps (210-230):
  • Step 210 display the live broadcast interface of the first user, where the live broadcast interface is used to display the live broadcast content of the first user.
  • Step 211 displaying interactive shooting props, where the interactive shooting props are used to trigger generation of interactive shooting instructions.
  • Interactive shooting props include but are not limited to virtual gifts or controls.
  • the interactive shooting props are virtual gifts, and optionally, the virtual gifts are Polaroid cameras.
  • audience users can send multiple interactive shooting props at a time, thereby triggering the generation of interactive shooting instructions and obtaining multiple interactive shooting images.
  • the interactive shooting props are virtual controls, and the virtual controls may or may not be set on the live broadcast interface.
  • the interactive operation initiation interface may be displayed in response to the interactive operation initiation control on the live broadcast interface, and the virtual controls may be displayed on the interactive operation initiation interface.
  • interactive shooting instructions are generated.
  • 60 in sub-image a is a virtual gift “Polaroid”, and sub-image b in FIG7 and sub-images c and d in FIG8 show that relevant operation instructions will be given when the second user uses the Polaroid for the first time.
  • after step 211, the method also includes step 211-1.
  • Step 211 - 1 displaying queue prompt information, where the queue prompt information is used to indicate the queue progress of the interactive shooting instruction of the second user; wherein the queue progress includes at least one of the following: number of waiting persons, estimated waiting time, priority queue prompt, and prioritized queue prompt.
  • the queuing prompt information is displayed at any position of the live broadcast interface.
  • the queuing prompt information is displayed in the central area of the live broadcast interface.
  • the queuing progress includes the number of people waiting, so the audience user can be informed of how many people have currently initiated the interactive shooting instruction.
  • the queuing progress includes the estimated waiting time, so the audience user can be informed of the waiting time. When the waiting time is too long, the audience user can choose not to trigger the generation of the interactive shooting instruction, and trigger it again later when the waiting time is short.
  • the queuing progress includes a priority queuing prompt. When the value of the virtual gift corresponding to the interactive shooting instruction triggered by the second user is higher, the priority queuing prompt can be displayed on the audience terminal device corresponding to the second user. Correspondingly, when the value of the virtual gift corresponding to the interactive shooting instruction triggered by a fourth user is higher than that of the second user, the queuing prompt information is displayed on the second user's audience terminal device, and the priority queuing prompt is displayed on the fourth user's audience terminal device.
  • This application does not limit the specific content of the priority queuing prompt or the prioritized queuing prompt.
  • the priority queue prompt is "The value of the interactive gift you gave is higher than that of other users, and you have been prioritized in the queue."
  • the prioritized queue prompt is "The interactive gift sent by user xx is more expensive and has been prioritized in the queue."
  • the interactive gift refers to the virtual gift corresponding to the interactive shooting prop.
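The priority queuing just described — instructions ordered by the value of the accompanying virtual gift, with ties kept in arrival order — can be sketched with a standard priority queue. The field names are illustrative assumptions, not part of the disclosed system.

```python
import heapq
import itertools

class ShootingQueue:
    """Queue of interactive shooting instructions: higher gift value first,
    ties broken by arrival order. All names are illustrative."""

    def __init__(self):
        self._heap = []
        self._order = itertools.count()  # arrival counter for stable ties

    def enqueue(self, user, gift_value):
        # Negate the value so heapq's min-heap pops the highest value first.
        heapq.heappush(self._heap, (-gift_value, next(self._order), user))

    def next_user(self):
        return heapq.heappop(self._heap)[2]

    def waiting(self):
        return len(self._heap)

q = ShootingQueue()
q.enqueue("second_user", gift_value=10)
q.enqueue("third_user",  gift_value=10)
q.enqueue("fourth_user", gift_value=50)   # higher-value gift jumps the queue

assert q.next_user() == "fourth_user"     # fourth user is prioritized
assert q.next_user() == "second_user"     # equal values keep arrival order
assert q.waiting() == 1
```

Under this ordering, the client showing the "prioritized" prompt to the second user and the "priority" prompt to the fourth user corresponds exactly to the fourth user's entry sorting ahead in the heap.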
  • sub-picture a represents the prompt information of the second user sending a virtual gift
  • 70 in sub-picture b represents the priority queue prompt
  • 71 represents the priority queue prompt
  • 72 represents that the first user can perform an interactive shooting operation and is in an order-taking state.
  • the first user can also refuse the order-taking state in advance, not perform the interactive shooting task, and not respond to the interactive shooting instruction initiated by the second user.
  • the second user cannot initiate an interactive shooting instruction.
  • sub-picture c indicates that after consuming the virtual gift "Polaroid", priority can be queued.
  • 75 in sub-picture d indicates that user xx gave virtual gifts, and the current number of people in the queue is displayed.
  • in the technical solution provided in the embodiment of the present application, by setting the number of virtual gifts consumed for initiating an interactive shooting task, multiple virtual gifts can be consumed at one time, so that the second user does not have to perform an interactive operation for each virtual gift, which helps reduce operation complexity and the processing overhead of terminal devices and servers.
  • the first user can be reminded that when there are a large number of people in the queue, the first user can choose not to trigger the generation of the interactive shooting instruction to reduce the pressure on the server.
  • Step 212 in response to the instruction to use the interactive shooting props, display a shooting requirement setting interface.
  • Shooting requirement setting interface: an interface for setting shooting requirements.
  • the second user can set the first requirement information, so in response to the second user's instruction to use the interactive shooting props, the shooting requirement setting interface is displayed.
  • the use instruction is generated in response to the second user's use operation of the interactive shooting props.
  • the use operation includes but is not limited to clicking, long pressing, sliding, etc., and the embodiment of the present application does not limit the specific types of the use instruction and the use operation.
  • Step 213 In the shooting requirement setting interface, first requirement information set by the second user is displayed, where the first requirement information is used to indicate requirements that the images shot by the first user must meet.
  • the first requirement information is set by the second user.
  • the first requirement information is determined according to the settings of the second user.
  • the first requirement information includes but is not limited to theme information, style information, posture information, expression information, etc.
  • the second user can set the favorite theme information, style information, posture information, expression information, etc. on the shooting requirement setting interface.
  • when the second user sets the first requirement information, he or she may select the desired requirement from several given options, or enter the requirement manually.
  • a certain amount of virtual resources will be consumed.
  • in addition to the first requirement information, other first prompt information set by the second user is displayed in the shooting requirement setting interface; that is, the first object information, first location information, first time information, first quantity information, etc. can all be set by the second user on the shooting requirement setting interface.
  • Step 214 In response to the instruction to use the interactive shooting props, the interactive shooting instruction of the second user is sent to the client of the first user.
  • the use instruction is generated in response to the second user's use operation of the interactive shooting props.
  • the use operation includes but is not limited to clicking, long pressing, sliding, etc., and the embodiment of the present application does not limit the specific types of the use instruction and the use operation.
  • Step 220 when the interactive shooting instruction of the second user is responded to by the first user, display the live screen of the first user during the interactive shooting process; wherein the interactive shooting instruction of the second user is used to request the first user to shoot an image.
  • Step 230 displaying the interactive shooting image obtained by the second user, where the interactive shooting image is obtained during the interactive shooting process.
  • the second user can set the first requirement information on the shooting requirement setting interface by himself, which can improve the interactivity between the audience user and the anchor user.
  • the second audience user who sends virtual gifts is given special permissions to distinguish from other audience users, so as to enhance the experience of the audience user who sends virtual gifts.
  • the first requirement information is set by the second user himself, which can enrich the content of the first requirement information and further enrich the content and form of the interactive shooting images.
  • FIG 11 shows a flowchart of a live interactive method provided by another embodiment of the present application.
  • the execution subject of each step of the method can be the audience terminal device 11 in the implementation environment of the solution shown in Figure 1, such as the execution subject of each step can be the audience client.
  • the method can include at least one of the following steps (210-250):
  • Step 210 display the live broadcast interface of the first user, where the live broadcast interface is used to display the live broadcast content of the first user.
  • Step 220 when the interactive shooting instruction of the second user is responded to by the first user, display the live screen of the first user during the interactive shooting process; wherein the interactive shooting instruction of the second user is used to request the first user to shoot an image.
  • step 250 is also included.
  • Step 250 when the interactive shooting instruction of the second user is responded to by the first user, the video screen of the second user is collected through the camera, and the video screen of the second user is sent to the server; wherein the live screen of the first user during the interactive shooting process includes: the video screen of the first user and the video screen of the second user.
  • the second user can choose whether to shoot together with the first user. If the second user does not choose to co-shoot, the live broadcast screen of the first user during the interactive shooting process includes only the video screen of the first user. If the second user chooses to co-shoot, the live broadcast screen of the first user during the interactive shooting process includes both the video screen of the first user and the video screen of the second user. That is, when the second user chooses to shoot together, the video screen of the second user is displayed on the live broadcast interface of the audience device corresponding to the second user. Optionally, the video screen of the second user may or may not be displayed on other audience terminal devices.
  • the second user can set whether to display his video screen on the live broadcast interface of other audience terminal devices or anchor terminal devices, or the server may pre-set whether to display the video screen of the second user on the live broadcast interface of other audience terminal devices or anchor terminal devices.
  • the video screen of the second user and the video screen of the first user are displayed on the live broadcast interface
  • the video screen of the first user appears in the first area of the live broadcast interface
  • the live broadcast screen of the second user appears in the second area of the live broadcast interface.
  • the first area and the second area do not overlap, and the position of the area can be determined by the second user.
  • the background processing can be performed on the video screen of the first user and the video screen of the second user, so that the video screen of the first user and the video screen of the second user displayed on the live broadcast interface have the same background, or in other words, on the displayed live broadcast interface, the first user and the second user appear in the same background.
  • through background processing, the first user and the second user appear in the same scene, so that the final interactive shooting image better matches the essence of co-shooting and better reflects its meaning.
  • the interactive shooting image is more special for the second user, further enhancing the second user's experience.
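The background processing described above — separating each user from their own background and pasting both onto one shared background so they appear in the same scene — can be sketched on tiny label grids. This is a toy illustration of the compositing idea only; real clients would use image segmentation, and every name here is an assumption.

```python
# Frames are small grids of pixel labels. "Segmentation" keeps only the
# person ("P") and drops that user's original background; compositing then
# pastes both segmented users onto one shared background.

def segment(frame, person="P"):
    """Keep person pixels, mark everything else as transparent (None)."""
    return [[px if px == person else None for px in row] for row in frame]

def composite(background, *foregrounds):
    """Paste each (frame, (row, col)) foreground onto a copy of background."""
    out = [row[:] for row in background]
    for fg, (r0, c0) in foregrounds:
        for r, row in enumerate(fg):
            for c, px in enumerate(row):
                if px is not None:       # skip transparent pixels
                    out[r0 + r][c0 + c] = px
    return out

shared_bg    = [["B"] * 4 for _ in range(2)]  # the common virtual background
host_frame   = [["x", "P"]]                   # host against their background
viewer_frame = [["P", "y"]]                   # viewer against a different one

merged = composite(shared_bg,
                   (segment(host_frame), (0, 0)),
                   (segment(viewer_frame), (1, 2)))
assert merged == [["B", "P", "B", "B"],
                  ["B", "B", "P", "B"]]
```

Both users now stand in front of the same background "B", which is what makes the resulting interactive shooting image read as a genuine co-shot photo.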
  • the audience terminal device corresponding to the second user collects the video image of the second user through a camera.
  • the audience terminal device corresponding to the second user starts to collect the video image of the second user, and optionally, the video image of the second user is displayed on the live broadcast interface of the anchor terminal device, or the video image of the second user may not be displayed.
  • the video screen of the first user and the video screen of the second user are displayed.
  • step 230 also includes step 260 (not shown in the figure).
  • Step 260 During the interactive shooting process, a second prompt message is displayed, where the second prompt message is used to guide the second user to shoot with the first user.
  • the second prompt information is prompt information for guiding the anchor user and the second user to shoot.
  • the prompt information includes, but is not limited to, text, voice, pattern, etc.
  • the second prompt information is on a mask above the live screen. For the explanation of the mask, please refer to the above embodiment and will not be repeated here.
  • the second prompt information includes: second object information, the second object information is used to indicate the object in the video screen of the anchor user targeted by the image taken by the first user.
  • the second object information is the anchor, that is, the anchor can take a selfie at this time.
  • the second object is a pet
  • the object targeted by the image that the anchor user needs to take is the pet, that is, the anchor needs to take a photo of the pet at this time.
  • the second prompt information includes: second requirement information, the second requirement information is the requirement information for the video screen of the first user in the image taken by the first user.
  • the second prompt information includes third requirement information, and the third requirement information is the requirement information for the second user.
  • the second requirement information is displayed in the live broadcast interface of the first user, and the third requirement information is displayed in the live broadcast interface of the second user.
  • the second requirement information and the third requirement information include but are not limited to theme information, style information, posture information, expression information, etc. See the explanation of the first requirement information in the above embodiment, which will not be repeated here.
  • the second requirement information is "the host makes a heart with his right hand"
  • the third requirement information is "the audience makes a heart with his left hand".
  • the second prompt information includes: second position information, the second position information is used to indicate the area where the video screen of the first user is located in the image taken by the first user.
  • the shape of the area where the video screen of the first user is located in the image taken by the first user includes but is not limited to a circle, a rectangle, and a sector. The present application does not limit the shape of the area where the image taken by the first user is located. The shape can be selected and determined by the first user or determined by the server.
  • the second prompt information includes: third position information, the third position information is used to refer to the area where the video screen of the second user is located in the image taken by the first user.
  • the shape of the area where the video screen of the second user is located in the image taken by the first user includes but is not limited to a circle, a rectangle, and a sector.
  • the present application does not limit the shape of the area where the image taken by the first user is located.
  • the shape can be selected and determined by the first user or determined by the server.
  • the second position information and the third position information are both displayed in the form of a frame. Please refer to the above explanation of the first position information, which will not be repeated here.
  • the second prompt information includes: second time information, the second time information is used to indicate the preparation time before the first user starts to shoot the image or the duration of shooting the image. Please refer to the explanation of the first time information above, which will not be repeated here.
  • the second prompt information includes: second quantity information, the second quantity information is used to indicate the number of images taken by the first user. Please refer to the explanation of the first quantity information above, which will not be repeated here.
  • Step 230 displaying the interactive shooting image obtained by the second user, where the interactive shooting image is obtained during the interactive shooting process.
  • Step 240 in response to a viewing instruction for the interactive shooting image, displaying an interactive shooting album, where the interactive shooting album is used to store the interactive shooting image obtained by the second user.
  • Viewing instruction: an instruction generated in response to the second user's viewing operation on the interactive shooting image.
  • the viewing operation includes but is not limited to clicking, long pressing, sliding, etc.
  • the embodiment of the present application does not limit the specific types of the viewing instruction and the viewing operation.
  • Interactive shooting album: an album that saves the interactive shooting images obtained by the second user.
  • in response to the second user's viewing operation on the interactive shooting images, the interactive shooting album is displayed on the audience terminal device corresponding to the second user.
  • the interactive shooting album saves interactive shooting images of different anchors.
  • the interactive shooting album saves interactive shooting images with different requirement information.
  • when the first requirement is an expression, the interactive shooting images are classified according to the different expressions. For example, if the expressions include happy, sad, crying, etc., the interactive shooting images are classified and saved under these expression types.
  • the interactive images that meet the requirements of the filtering operation are filtered out from multiple interactive images.
  • the filtering operation is an operation on the filtering control, including but not limited to clicking, long pressing, sliding, etc.
  • the specific type of filtering is not limited in the embodiment of the present application.
  • the filtering controls can correspond to different anchor names and different requirement information.
  • in response to the filtering operation on the "Anchor A" filtering control, the interactive images matching anchor A are filtered out from the multiple interactive images.
  • in response to the filtering operation on the "Expression is happy" filtering control, the interactive images meeting "Expression is happy" are filtered out from the multiple interactive images.
  • an interactive photo album is established, and the second user saves the interactive photo images.
  • the second user wants to view the images, they can filter them in a targeted manner, which simplifies the user's operation.
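The album filtering above — selecting saved interactive shooting images by anchor name or by the requirement information they were taken under — amounts to a simple tag filter. The data fields below are illustrative assumptions about what the album would store per image.

```python
from dataclasses import dataclass

@dataclass
class InteractiveImage:
    anchor: str        # which anchor took this interactive shooting image
    requirement: str   # e.g. the expression the first requirement asked for

def filter_album(album, anchor=None, requirement=None):
    """Return images matching the selected filtering controls (None = any)."""
    return [img for img in album
            if (anchor is None or img.anchor == anchor)
            and (requirement is None or img.requirement == requirement)]

album = [
    InteractiveImage("Anchor A", "happy"),
    InteractiveImage("Anchor A", "sad"),
    InteractiveImage("Anchor B", "happy"),
]

assert len(filter_album(album, anchor="Anchor A")) == 2
assert len(filter_album(album, requirement="happy")) == 2
assert filter_album(album, anchor="Anchor A", requirement="happy") == [album[0]]
```

Each filtering control on the interface maps to one keyword argument here, so combining controls narrows the result just as the "Anchor A" and "Expression is happy" examples describe.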
  • sub-picture a shows the interactive photo album, and there are different interactive photos corresponding to different requirements.
  • as shown in sub-picture b of FIG 12, when filtering the interactive photos, the first requirement information "happy" is used as the filtering condition, and the interactive photos corresponding to "happy" are displayed.
  • the technical solution provided in the embodiment of the present application allows the first user and the second user to take photos together to satisfy the second user and the first user's idea of taking photos together, and by adjusting the virtual background, the interactive shooting images taken are more realistic and have more collection and preservation value. Therefore, it can better enhance the second user's live broadcast experience and enrich the live broadcast interaction method.
  • corresponding prompt information is displayed to obtain interactive shooting images that are more in line with expectations, thereby improving the efficiency of obtaining interactive shooting images.
  • FIG 13 shows a flow chart of a live interactive method provided by an embodiment of the present application.
  • the execution subject of each step of the method may be the anchor terminal device 13 in the implementation environment of the solution shown in Figure 1, such as the execution subject of each step may be the anchor client.
  • the method may include at least one of the following steps (310-330):
  • Step 310 during the live broadcast of the first user, interactive shooting information generated based on the interactive shooting instruction of the second user is displayed; wherein the interactive shooting instruction of the second user is used to request the first user to shoot an image.
  • the interactive shooting information includes identification information of the second user, interactive shooting props information, value information corresponding to the interactive shooting props, and the like.
  • the interactive shooting instruction of the second user is used to request the first user to shoot an image and send the shot image to the second user.
  • Step 320 in response to the response instruction to the interactive shooting instruction of the second user, display the live screen of the first user during the interactive shooting process.
  • Response instruction: an instruction generated based on the response operation of the first user.
  • the response operation includes but is not limited to clicking, long pressing, sliding, etc.
  • the embodiment of the present application does not limit the specific types of response instructions and response operations.
  • Step 330 displaying the interactive shooting image sent to the second user, where the interactive shooting image is obtained during the interactive shooting process.
  • the anchor terminal device obtains the interactive shooting image and sends the interactive shooting image to the audience terminal device. In some embodiments, the anchor terminal device sends the interactive shooting image to the audience terminal device through the server.
  • the technical solution provided by the embodiments of the present application provides a new live interactive method: an audience user initiates an interactive shooting instruction; when the interactive shooting instruction is answered by the anchor user, the live screen of the anchor user during the interactive shooting process is displayed, and the interactive shooting image obtained by the audience user is displayed.
  • the audience user can initiate an interactive shooting command, and the host user shoots and gives the audience user an interactive shooting image. Due to the different interactive shooting commands, the interactive shooting images obtained are also different. Therefore, the interactive shooting images are unknown and random, which enriches the way of live interaction and increases the fun of live interaction.
  • FIG. 14 shows a flowchart of a live interactive method provided by another embodiment of the present application.
  • the execution subject of each step of the method may be the anchor terminal device 13 in the implementation environment of the solution shown in FIG. 1; for example, the execution subject of each step may be the anchor client.
  • the method may include at least one of the following steps (310-350):
  • Step 310 during the live broadcast of the first user, interactive shooting information generated based on the interactive shooting instruction of the second user is displayed; wherein the interactive shooting instruction of the second user is used to request the first user to shoot an image.
  • step 310 also includes step 301 (not shown in the figure).
  • Step 301 display multiple interactive shooting information to be executed, wherein the multiple interactive shooting information to be executed are displayed according to priority, and the priority is related to at least one of the following: the generation time of the interactive shooting instruction corresponding to the interactive shooting information, and the expenditure resources of the interactive shooting instruction corresponding to the interactive shooting information.
  • the expenditure resources include but are not limited to at least one of real currency, virtual currency, virtual energy, etc.
  • the earlier the generation time of the interactive shooting instruction corresponding to the interactive shooting information is, the higher the priority of the corresponding interactive shooting information is. In some embodiments, the higher the expenditure resource of the interactive shooting instruction corresponding to the interactive shooting information is, the higher the priority of the corresponding interactive shooting information is.
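The two priority rules above (earlier generation time raises priority; higher expenditure raises priority) could be sketched as follows. This is an illustrative sketch only: the field names and the exact way the two rules are combined (expenditure first, then time) are assumptions, since the embodiment only states that priority is related to these two factors.

```python
from dataclasses import dataclass

@dataclass
class ShootRequest:
    user_id: str
    created_at: float  # generation time of the interactive shooting instruction
    spent: int         # expenditure resources (e.g., virtual currency)

def order_pending(requests):
    """Order pending interactive shooting information for display.

    Higher expenditure first; among equal expenditures, earlier
    generation time first (the combination of the two stated rules
    is an assumption for this sketch).
    """
    return sorted(requests, key=lambda r: (-r.spent, r.created_at))
```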
  • Step 322 In response to a response instruction to the interactive shooting instruction of the second user, display a first prompt message, where the first prompt message is used to guide the first user to shoot.
  • the first requirement information is used to indicate the requirements that the image taken by the first user must meet.
  • 120 represents the first requirement information, that is, “the expression is happy”.
  • 121 represents the first prompt information.
  • 122 represents the displayed interactive shot image. In some embodiments, after the interactive shot image is displayed, it is dismissed by sliding out of the live broadcast interface or by fading.
  • Step 324 displaying the live screen of the first user shooting according to the first prompt information during the interactive shooting process.
  • Step 330 displaying the interactive shooting image sent to the second user, where the interactive shooting image is obtained during the interactive shooting process.
  • steps 326 to 328 are included before step 330.
  • Step 326 During the interactive shooting process, if the first user is within the shooting range, an interactive shooting image is generated based on the image of the first user within the shooting range.
  • the first prompt information includes first position information, which is used to represent the area where the captured image is located, i.e., the shooting range.
  • Step 328 If the first user is not within the shooting range, the setting image related to the first user is determined as the interactive shooting image.
  • the setting image can be set by the server or by the first user.
  • the live broadcast screen at the time when the first user starts broadcasting is used as the setting image.
  • the first user sets the setting image in advance, for example, for different first requirement information, the corresponding setting image is taken in advance.
  • the requirement information is "the expression is happy", "the expression is sad", etc.; then, for different expressions, the first user sets corresponding images in advance and uses these images as the setting images.
  • a setting image corresponding to the first requirement information is found from a plurality of setting images as the interactive shooting image.
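A minimal sketch of the fallback logic of steps 326–328: if the first user is out of the shooting range, a pre-set image matched to the first requirement information is used instead. The dictionary-based lookup and the "default" key (standing in for the start-of-broadcast image) are assumptions for illustration, not details fixed by the embodiment.

```python
def pick_interactive_image(in_range, live_frame, requirement, preset_images):
    """Return the interactive shooting image.

    in_range: whether the first user is within the shooting range
    live_frame: the image of the first user captured within the range
    requirement: first requirement information, e.g. "happy"
    preset_images: dict mapping requirement text to a setting image
    """
    if in_range:
        return live_frame  # step 326: generate from the live capture
    # step 328: fall back to a setting image related to the first user;
    # use the start-of-broadcast image if no per-requirement image exists
    return preset_images.get(requirement, preset_images.get("default"))
```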
  • Step 340 displaying the quality score of the interactive shooting image obtained by scoring the interactive shooting image based on the first requirement information.
  • the interactively shot images are scored by a server or a terminal device, which is not limited in this application.
  • the quality score is displayed on the user terminal device as well as the viewer terminal device.
  • the interactively shot images are scored by a server or a terminal device, which is not limited in this application. In some embodiments, the quality score of an interactively shot image is determined according to the degree of conformity between the interactively shot image and the first requirement information; in some embodiments, this determination is made by a quality scoring model. In some embodiments, the quality scoring model is a neural network model or another model for scoring images, which is not limited in this application. In some embodiments, the quality scoring model is trained in advance: some interactively shot images are used as training samples, and their manually annotated quality scores are used as training labels to train the quality scoring model.
  • the quality assessment model is a classification model.
  • each piece of requirement information corresponds to a highest-scoring reference image, and the quality score of an interactive shot image is determined by comparing the difference between that highest-scoring image and the interactive shot image under the same requirement information.
  • first requirement information is determined from multiple requirement information, and taking the first requirement information corresponding to the first emotional expression as an example, optionally, face detection is performed on the interactive shot image through a quality evaluation model; based on the face detection result, the emotional expression of the first user in the interactive shot image is identified; and based on the degree of match between the identified emotional expression of the first user and the first emotional expression, the quality score of the interactive shot image is determined.
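The scoring flow in the bullet above (detect the face, recognize the emotional expression, then score by its match with the first emotional expression) might be sketched as below. The probability-style model output and the 100-point scale are assumptions for illustration; the application does not fix a concrete model or scale.

```python
def score_image(face_probs, required_expression):
    """Score an interactive shot image against the first requirement information.

    face_probs: dict of expression -> probability, as produced by a
        (hypothetical) expression-recognition model, or None if no
        face was detected in the image
    required_expression: the first emotional expression, e.g. "happiness"
    Returns an integer quality score on a 0-100 scale (assumed scale).
    """
    if face_probs is None:
        return 0  # no face detected: lowest score
    match_degree = face_probs.get(required_expression, 0.0)
    return round(match_degree * 100)  # higher match, higher score
```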
  • Step 350 Display reward information obtained by the first user.
  • the reward information is used to indicate the reward obtained by the first user for shooting the interactive shooting image.
  • the reward is related to the quality score.
  • the reward is positively correlated with the quality score, and the higher the quality score, the greater the reward.
  • in some embodiments, if the interactive shot image is a set image, it does not participate in the calculation of the quality score; in this case, the first user is not given a reward, or the first user's reward is reduced.
  • rewards include but are not limited to virtual currency, virtual props, virtual charm value, virtual competition value, virtual gold coins, etc. This application does not limit the specific form of rewards, nor does it limit the specific form of reward information corresponding to rewards.
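A toy illustration of a reward rule that is positively correlated with the quality score, with the set-image case earning no reward as described above. The linear mapping and the gold-coin unit are assumptions; the application leaves the concrete form of the reward open.

```python
def compute_reward(quality_score, is_setting_image, rate=2):
    """Map a quality score to a reward in virtual gold coins (assumed unit).

    quality_score: 0-100 quality score of the interactive shooting image
    is_setting_image: True if the image was a fallback setting image,
        which does not participate in scoring and earns no reward here
    rate: assumed coins-per-point conversion factor
    """
    if is_setting_image:
        return 0
    return quality_score * rate  # positively correlated: higher score, greater reward
```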
  • 131 in sub-image a indicates a quality score; sub-image a shows the case where the first user is not within the shooting range, so the set image related to the first user (here, the start-of-broadcast image) is determined as the interactive shooting image; 132 also indicates a quality score.
  • sub-image c of FIG. 19 indicates that the anchor can stop accepting gifts, that is, stop performing the interactive shooting task.
  • sub-image d of FIG. 19 indicates that multiple interactive shooting images 133 can be displayed based on multiple virtual gifts.
  • multiple pieces of interactive shooting information 140 to be executed are shown. As shown in FIG. 21, when the anchor is connected with other anchors and one of them (here, the anchor on the left) is performing an interactive shooting task, a prompt message 141 is displayed.
  • in some embodiments, when the anchor's terminal device is a portable handheld terminal device, it cannot be used to perform the interactive shooting task; only when the anchor's terminal device is a non-portable handheld terminal device can it be used to perform the interactive shooting task.
  • the technical solution provided by the embodiment of the present application displays prompt information on the live broadcast interface of the anchor terminal device, so that the anchor user can perform the next operation according to the prompt information.
  • the scoring mechanism can promote the competitive awareness of the anchor, liven up the live broadcast atmosphere, and further enhance the live broadcast experience of the anchor user and the audience user.
  • determining rewards based on quality scores is relatively fair and can also promote the anchor's enthusiasm for participation.
  • FIG. 22 shows a block diagram of a live interactive method provided by an embodiment of the present application.
  • the method may include at least one of the following steps (S1 to S4):
  • Step S1 The second user sends a Polaroid.
  • the second user requests the first user to take a photo in the live broadcast room.
  • Step S2 queue and wait.
  • the queue order is determined according to the time when the second user initiates the interactive shooting instruction.
  • Step S3 determining a random theme and recipe based on the Polaroid photos sent.
  • the first requirement information is determined.
  • the first user communicates with the second user in real time in the live broadcast room, by microphone or text; according to the photo theme required by the system and the number of photos requested by the user, the first user takes one or more photos (interactive shooting images) for the second user.
  • Step S4 scoring the accuracy of the emotional expression required by the subject.
  • the quality score is determined based on the interactively shot images.
  • the AI (Artificial Intelligence) algorithm scores the fit between the interactively shot images and the theme required by the system by modeling emotion classification on existing reference photos. For example, if the system requires the photo theme to be "happy", the AI model identifies whether the anchor's emotion in the photo matches the theme and scores the photo accordingly.
  • the AI algorithm is mainly divided into face detection and expression recognition; the algorithm divides the human face into seven basic expressions, namely anger, disgust, fear, happiness, neutrality, sadness, and surprise.
  • the AI model is trained with a large amount of face data.
  • the face position in the photo is detected and input into the expression recognition model to determine whether the photo meets the theme required by the system, and the photo is scored according to its fit.
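The two-stage pipeline described above (face detection, then seven-class expression recognition) can be outlined as follows; `detect_face` and `classify_expression` are hypothetical stand-ins for the trained models, whose architectures are not specified in the application.

```python
EXPRESSIONS = ["anger", "disgust", "fear", "happiness",
               "neutrality", "sadness", "surprise"]  # the seven basic expressions

def recognize_expression(photo, detect_face, classify_expression):
    """Run the two-stage pipeline on one photo.

    detect_face(photo) -> face crop, or None if no face (hypothetical detector)
    classify_expression(face) -> list of 7 probabilities in EXPRESSIONS order
        (hypothetical recognition model)
    Returns (expression_label, confidence), or (None, 0.0) if no face is found.
    """
    face = detect_face(photo)
    if face is None:
        return None, 0.0
    probs = classify_expression(face)
    best = max(range(len(EXPRESSIONS)), key=lambda i: probs[i])
    return EXPRESSIONS[best], probs[best]
```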
  • the server rewards the host based on the AI scoring results; the host's photos are distributed to the user's Polaroid album, which supports users to save and collect.
  • Figure 23 shows a block diagram of a live interactive device provided by an embodiment of the present application.
  • the device has the function of implementing the method example of the above-mentioned audience client side, and the function can be implemented by hardware, or by hardware executing corresponding software.
  • the device can be the audience terminal device introduced above, or it can be set in the audience terminal device.
  • the device 2300 may include: an interface display module 2310, a screen display module 2320 and an image display module 2330.
  • the interface display module 2310 is used to display the live broadcast interface of the first user, and the live broadcast interface is used to display the live broadcast content of the first user.
  • the screen display module 2320 is used to display the live screen of the first user during the interactive shooting process when the interactive shooting instruction of the second user is responded to by the first user; wherein the interactive shooting instruction of the second user is used to request the first user to shoot an image.
  • the image display module 2330 is used to display the interactive shooting image obtained by the second user, where the interactive shooting image is obtained during the interactive shooting process.
  • the screen display module 2320 is used to display first prompt information, where the first prompt information is used to guide the first user to take a photo.
  • the screen display module 2320 is also used to display the live screen when the first user shoots according to the first prompt information during the interactive shooting process.
  • the first prompt information includes: first requirement information, where the first requirement information is used to indicate requirements that the image taken by the first user must meet.
  • the module further includes a prop display module 2340 and an instruction sending module 2350 .
  • the prop display module 2340 is used to display interactive shooting props, and the interactive shooting props are used to trigger the generation of the interactive shooting instructions.
  • the instruction sending module 2350 is used to send the interactive shooting instruction of the second user to the client of the first user in response to the instruction for using the interactive shooting prop.
  • the module further includes an information display module 2360 .
  • the interface display module 2310 is further configured to display a shooting requirement setting interface in response to a usage instruction for the interactive shooting prop.
  • the information display module 2360 is used to display the first requirement information set by the second user in the shooting requirement setting interface, where the first requirement information is used to indicate the requirements that the image shot by the first user must meet; wherein the second user's interactive shooting instructions include the first requirement information.
  • the information display module 2360 is also used to display queue prompt information, and the queue prompt information is used to indicate the queue progress of the interactive shooting instruction of the second user; wherein the queue progress includes at least one of the following: the number of waiting people, the estimated waiting time, and a priority queue prompt.
  • the module further includes an album display module 2370 .
  • the album display module 2370 is used to display an interactive shooting album in response to a viewing instruction for the interactive shooting image, and the interactive shooting album is used to store the interactive shooting image obtained by the second user.
  • the module further includes a screen sending module 2380 .
  • the screen sending module 2380 is used to collect the video picture of the second user through a camera and send the video picture of the second user to a server when the interactive shooting instruction of the second user is responded to by the first user; wherein the live broadcast picture of the first user during the interactive shooting process includes: the video picture of the first user and the video picture of the second user.
  • the information display module 2360 is further used to display second prompt information during the interactive shooting process, where the second prompt information is used to guide the second user to shoot with the first user.
  • Figure 25 shows a block diagram of a live interactive device provided by another embodiment of the present application.
  • the device has the function of implementing the method example of the above-mentioned anchor client side, and the function can be implemented by hardware, or by hardware executing corresponding software.
  • the device can be the anchor terminal device introduced above, or it can be set in the anchor terminal device.
  • the device 2500 may include: an information display module 2510, a screen display module 2520 and an image display module 2530.
  • the screen display module 2520 is used to display the live screen of the first user during the interactive shooting process in response to the response instruction to the interactive shooting instruction of the second user;
  • the image display module 2530 is used to display the interactive shooting image sent to the second user, where the interactive shooting image is obtained during the interactive shooting process.
  • the screen display module 2520 is used to display first prompt information, and the first prompt information is used to guide the first user to take pictures.
  • the screen display module 2520 is also used to display the live screen when the first user shoots according to the first prompt information during the interactive shooting process.
  • the first prompt information includes: first requirement information, where the first requirement information is used to indicate requirements that the image taken by the first user must meet.
  • the apparatus further includes a score display module 2540 .
  • the scoring display module is used to display the quality score of the interactive shooting image obtained by scoring the interactive shooting image based on the first requirement information.
  • the information display module 2510 is further used to display reward information obtained by the first user, where the reward information is used to indicate the reward obtained by the first user for photographing the interactive shooting image, and the reward is related to the quality score.
  • the image display module 2530 is used to generate the interactive shooting image according to the image of the first user within the shooting range if the first user is within the shooting range during the interactive shooting process.
  • the image display module 2530 is further configured to determine a setting image related to the first user as the interactive shooting image if the first user is not within the shooting range.
  • the information display module 2510 is also used to display multiple interactive shooting information to be executed, wherein the multiple interactive shooting information to be executed are displayed according to priority, and the priority is related to at least one of the following: the generation time of the interactive shooting instruction corresponding to the interactive shooting information, and the expenditure resources of the interactive shooting instruction corresponding to the interactive shooting information.
  • when the device provided in the above embodiments implements its functions, the division into the above functional modules is merely illustrative; in practical applications, the above functions can be assigned to different functional modules as needed, that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above.
  • the device and method embodiments provided in the above embodiment belong to the same concept, and their specific implementation process is detailed in the method embodiment, which will not be repeated here.
  • FIG. 27 shows a block diagram of a terminal device 2700 provided in one embodiment of the present application.
  • the terminal device 2700 may be the anchor terminal device 13 in the implementation environment shown in FIG. 1, used to implement the live interactive method on the anchor terminal device side provided in the above embodiments, or may be the audience terminal device 11 in the implementation environment shown in FIG. 1, used to implement the live interactive method on the audience terminal device side provided in the above embodiments.
  • the terminal device 2700 includes: a processor 2701 and a memory 2702 .
  • the processor 2701 may include one or more processing cores, such as a 4-core processor, an 8-core processor, etc.
  • the processor 2701 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field Programmable Gate Array), and PLA (Programmable Logic Array).
  • the processor 2701 may also include a main processor and a coprocessor.
  • the main processor is a processor for processing data in the awake state, also known as CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in the standby state.
  • the processor 2701 may be integrated with a GPU (Graphics Processing Unit), and the GPU is responsible for rendering and drawing the content to be displayed on the display screen.
  • the processor 2701 may also include an AI processor, which is used to process computing operations related to machine learning.
  • the memory 2702 may include one or more computer-readable storage media, which may be non-transitory.
  • the memory 2702 may also include a high-speed random access memory and a non-volatile memory, such as one or more disk storage devices and flash memory storage devices.
  • the non-transitory computer-readable storage medium in the memory 2702 is used to store a computer program, which is configured to be executed by one or more processors to implement the live broadcast interactive method on the anchor terminal device side or the live broadcast interactive method on the audience terminal device side.
  • the terminal device 2700 may further optionally include: a peripheral device interface 2703 and at least one peripheral device.
  • the processor 2701, the memory 2702 and the peripheral device interface 2703 may be connected via a bus or a signal line.
  • Each peripheral device may be connected to the peripheral device interface 2703 via a bus, a signal line or a circuit board.
  • the peripheral device includes: at least one of a radio frequency circuit 2704, a display screen 2705, an audio circuit 2707 and a power supply 2708.
  • FIG. 27 does not constitute a limitation on the terminal device 2700 , and may include more or fewer components than shown in the figure, or combine certain components, or adopt a different component arrangement.
  • a computer-readable storage medium in which a computer program is stored.
  • when the computer program is executed by a processor, it implements the live broadcast interactive method on the anchor terminal device side or the live broadcast interactive method on the audience terminal device side.
  • the computer readable storage medium may include: ROM (Read-Only Memory), RAM (Random Access Memory), SSD (Solid State Drives) or optical disk, etc.
  • the random access memory may include ReRAM (Resistance Random Access Memory) and DRAM (Dynamic Random Access Memory).
  • a computer program product comprising a computer program, the computer program being stored in a computer-readable storage medium.
  • a processor of a terminal device reads the computer program from the computer-readable storage medium, and the processor executes the computer program, so that the anchor terminal device executes the live broadcast interactive method on the anchor terminal device side, or the audience terminal device executes the live broadcast interactive method on the audience terminal device side.


Abstract

A live interaction method, apparatus, device, storage medium, and program product, relating to the field of Internet technology. The method includes: displaying a live interface of a first user, the live interface being used to present the live content of the first user; when an interactive shooting instruction of a second user is answered by the first user, displaying a live screen of the first user during the interactive shooting process (210); wherein the interactive shooting instruction of the second user is used to request the first user to shoot an image (220); and displaying an interactive shooting image obtained by the second user, the interactive shooting image being obtained during the interactive shooting process (230). In the above method, because interactive shooting instructions differ, the interactive shooting images obtained also differ; the interactive shooting images are therefore unknown and random, which enriches the ways of live interaction and increases the fun of live interaction.

Description

Live Interaction Method, Apparatus, Device, Storage Medium, and Program Product
Technical Field
The embodiments of the present application relate to the field of Internet technology, and in particular to a live interaction method, apparatus, device, storage medium, and program product.
Background
At present, users can interact with each other online through various applications. For example, users can chat with each other through social applications, and can also interact online by video or voice through live streaming applications.
However, the current ways of interaction are still relatively limited.
Summary
The embodiments of the present application provide a live interaction method, apparatus, device, storage medium, and program product. The technical solutions are as follows:
According to one aspect of the embodiments of the present application, a live interaction method is provided, the method including:
displaying a live interface of a first user, the live interface being used to present live content of the first user;
when an interactive shooting instruction of a second user is answered by the first user, displaying a live screen of the first user during an interactive shooting process; wherein the interactive shooting instruction of the second user is used to request the first user to shoot an image; and
displaying an interactive shooting image obtained by the second user, the interactive shooting image being obtained during the interactive shooting process.
According to another aspect of the embodiments of the present application, a live interaction method is provided, the method including:
during a live broadcast of a first user, displaying interactive shooting information generated based on an interactive shooting instruction of a second user; wherein the interactive shooting instruction of the second user is used to request the first user to shoot an image;
in response to a response instruction to the interactive shooting instruction of the second user, displaying a live screen of the first user during an interactive shooting process; and
displaying an interactive shooting image sent to the second user, the interactive shooting image being obtained during the interactive shooting process.
According to one aspect of the embodiments of the present application, a live interaction apparatus is provided, the apparatus including:
an interface display module, configured to display a live interface of a first user, the live interface being used to present live content of the first user;
a screen display module, configured to display a live screen of the first user during an interactive shooting process when an interactive shooting instruction of a second user is answered by the first user; wherein the interactive shooting instruction of the second user is used to request the first user to shoot an image; and
an image display module, configured to display an interactive shooting image obtained by the second user, the interactive shooting image being obtained during the interactive shooting process.
According to another aspect of the embodiments of the present application, a live interaction apparatus is provided, the apparatus including:
an information display module, configured to display, during a live broadcast of a first user, interactive shooting information generated based on an interactive shooting instruction of a second user; wherein the interactive shooting instruction of the second user is used to request the first user to shoot an image;
a screen display module, configured to display, in response to a response instruction to the interactive shooting instruction of the second user, a live screen of the first user during an interactive shooting process; and
an image display module, configured to display an interactive shooting image sent to the second user, the interactive shooting image being obtained during the interactive shooting process.
According to one aspect of the embodiments of the present application, a terminal device is provided, the terminal device including a processor and a memory, the memory storing a computer program, and the computer program being loaded and executed by the processor to implement the above live interaction method on the audience client side, or the above live interaction method on the anchor client side.
According to one aspect of the embodiments of the present application, a computer-readable storage medium is provided, the storage medium storing a computer program, and the computer program being loaded and executed by a processor to implement the above live interaction method on the audience client side, or the above live interaction method on the anchor client side.
According to one aspect of the embodiments of the present application, a computer program product is provided, the computer program product including a computer program stored in a computer-readable storage medium. A processor of a terminal device reads the computer program from the computer-readable storage medium and executes it, so that the terminal device executes the above live interaction method on the audience client side, or the above live interaction method on the anchor client side.
The technical solutions provided in the embodiments of the present application may include the following beneficial effects:
The present application provides a new way of live interaction: an audience user initiates an interactive shooting instruction; when the interactive shooting instruction is answered by the anchor user, the live screen of the anchor user during the interactive shooting process is displayed, and the interactive shooting image obtained by the audience user is displayed. Through the technical solutions provided in the embodiments of the present application, the audience user can initiate an interactive shooting instruction, and the anchor user shoots and gives the audience user an interactive shooting image. Because interactive shooting instructions differ, the interactive shooting images obtained also differ; the interactive shooting images are therefore unknown and random, which enriches the ways of live interaction and increases the fun of live interaction.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a solution implementation environment provided by one embodiment of the present application;
FIG. 2 is a flowchart of a live interaction method provided by one embodiment of the present application;
FIG. 3 is a flowchart of a live interaction method provided by another embodiment of the present application;
FIG. 4 is a schematic diagram of an audience user interface provided by one embodiment of the present application;
FIG. 5 is a schematic diagram of an audience user interface provided by another embodiment of the present application;
FIG. 6 is a flowchart of a live interaction method provided by another embodiment of the present application;
FIG. 7 is a schematic diagram of an audience user interface provided by another embodiment of the present application;
FIG. 8 is a schematic diagram of an audience user interface provided by another embodiment of the present application;
FIG. 9 is a schematic diagram of an audience user interface provided by another embodiment of the present application;
FIG. 10 is a schematic diagram of an audience user interface provided by another embodiment of the present application;
FIG. 11 is a flowchart of a live interaction method provided by another embodiment of the present application;
FIG. 12 is a schematic diagram of an audience user interface provided by another embodiment of the present application;
FIG. 13 is a flowchart of a live interaction method provided by another embodiment of the present application;
FIG. 14 is a flowchart of a live interaction method provided by another embodiment of the present application;
FIG. 15 is a schematic diagram of an anchor user interface provided by one embodiment of the present application;
FIG. 16 is a schematic diagram of an anchor user interface provided by another embodiment of the present application;
FIG. 17 is a schematic diagram of an anchor user interface provided by another embodiment of the present application;
FIG. 18 is a schematic diagram of an anchor user interface provided by another embodiment of the present application;
FIG. 19 is a schematic diagram of an anchor user interface provided by another embodiment of the present application;
FIG. 20 is a schematic diagram of an anchor user interface provided by another embodiment of the present application;
FIG. 21 is a schematic diagram of an anchor user interface provided by another embodiment of the present application;
FIG. 22 is a block diagram of a live interaction method provided by one embodiment of the present application;
FIG. 23 is a block diagram of a live interaction apparatus provided by one embodiment of the present application;
FIG. 24 is a block diagram of a live interaction apparatus provided by another embodiment of the present application;
FIG. 25 is a block diagram of a live interaction apparatus provided by another embodiment of the present application;
FIG. 26 is a block diagram of a live interaction apparatus provided by another embodiment of the present application;
FIG. 27 is a structural block diagram of a terminal device provided by one embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
Please refer to FIG. 1, which shows a schematic diagram of a solution implementation environment provided by one embodiment of the present application. The solution implementation environment may include: an audience terminal device 11, a server 12, and an anchor terminal device 13.
The audience terminal device 11 and the anchor terminal device 13 may be electronic devices such as mobile phones, tablet computers, PCs (Personal Computers), wearable devices, VR (Virtual Reality) devices, AR (Augmented Reality) devices, and in-vehicle devices, which are not limited in the present application. A client running a target application may be installed in the audience terminal device 11 and the anchor terminal device 13. For example, the target application may be a live video application, a music playback application, a social application, an interactive entertainment application, or the like, which is not limited in the present application. The audience terminal device 11 is used to initiate an interactive task, and the anchor terminal device 13 is used to receive the interactive task. The server 12 may be a single server, a server cluster composed of multiple servers, or a cloud computing service center. The server 12 may be the backend server of the above target application, used to provide backend services for the clients of the target application.
The above terminal devices (including the audience terminal device 11 and the anchor terminal device 13) may communicate with the server 12 over a network.
In the embodiments of the present application, the client logged in on the audience terminal device 11 may be called the audience client; the user corresponding to the audience client is the second user or a third user, where the second user is an audience user who initiates an interactive shooting instruction and the third user is an audience user who does not initiate an interactive shooting instruction. The client logged in on the anchor terminal device 13 may be called the anchor client; the user corresponding to the anchor client is the first user (also called the anchor user). In some embodiments, the client of the target application (such as a live video application) installed and running in the anchor terminal device may be called the anchor client, which has the function of live video streaming; the client of the target application installed and running in the audience terminal device may be called the audience client, which has the function of watching live video. The anchor client and the audience client may be two different versions of the client of the target application, facing the anchor and the audience respectively, that is, the anchor-facing version has the functions of the above anchor client and the audience-facing version has the functions of the above audience client; alternatively, they may be the same version of the client of the target application, which has both the functions of the above anchor client and those of the above audience client. For example, the audience client can not only watch live video but also conduct live video streaming. For another example, the anchor client can not only conduct live video streaming but also watch the live video of other anchors. The present application does not limit this.
Besides live streaming scenarios, the technical solutions provided in the embodiments of the present application can also be applied to other scenarios, such as social applications, instant messaging applications, and office applications; the terminal devices used by both parties in a video session can all execute the live interaction method provided in the embodiments of the present application. In some embodiments, the technical solutions provided in the embodiments of the present application can likewise be used in video-related scenarios such as video conferencing and multi-person online video; therefore, the present application does not limit the application scenarios of the method. The following embodiments mainly take a live streaming application as an example for exemplary and explanatory introduction.
Please refer to FIG. 2, which shows a flowchart of a live interaction method provided by one embodiment of the present application. The execution subject of each step of the method may be the audience terminal device 11 in the solution implementation environment shown in FIG. 1; for example, the execution subject of each step may be the audience client. In the following method embodiments, for ease of description, the execution subject of each step is introduced only as the "audience client". The method may include at least one of the following steps (210-230):
Step 210: display a live interface of a first user, the live interface being used to present live content of the first user.
Live interface: includes at least one of a live screen and live controls. In some embodiments, the live screen is a video picture captured by the camera of the terminal device of the broadcaster (which can be regarded as the anchor terminal device). The live controls are controls on a layer above the live screen, used by the user to operate on the live interface; they include but are not limited to a back control, a gifting control, a follow-anchor control, and the like. In some embodiments, in response to a collapse-controls instruction from an audience user, all live controls can be collapsed in the live interface so that only the live screen is displayed. In other embodiments, when the anchor has not turned on the camera, the live interface may also display only the live controls.
In some embodiments, the live interface is used to present the live content of the first user, and the live content includes but is not limited to the first user themself, what the first user is currently doing, the environment the first user is in, and the game interface the first user is operating. Optionally, if the live content is the first user themself, the camera of the anchor terminal device captures a video picture of the first user. Optionally, the live content is what the first user is doing; for example, the first user uses the camera of the anchor terminal device to capture a video picture of the first user cooking. Optionally, if the live content is the environment the first user is in, the camera of the anchor terminal captures a video picture of the indoor or outdoor environment where the first user is located. Optionally, if the live content is the game interface the first user is operating, the game interface of the anchor terminal device is presented as live content on the live interface through screen recording or other means.
Step 220: when an interactive shooting instruction of a second user is answered by the first user, display a live screen of the first user during the interactive shooting process; wherein the interactive shooting instruction of the second user is used to request the first user to shoot an image.
Interactive shooting instruction: an instruction initiated by the second user for interactive shooting with the first user. In some embodiments, the interactive shooting instruction is generated in response to the second user's use instruction for an interactive shooting prop. In some embodiments, interactive shooting props include but are not limited to virtual gifts, virtual vouchers, virtual cards, and the like; the present application does not limit the specific type of the interactive shooting prop or the value of the corresponding virtual resources. In response to the second user's use instruction for the interactive shooting prop, the interactive shooting instruction is generated. In some embodiments, the interactive shooting instruction is generated in response to the second user's trigger instruction for an interactive shooting control. Optionally, an interactive shooting control is displayed on the live interface, the interactive shooting control being a control for the second user to initiate the interactive shooting instruction. Optionally, no interactive shooting control is displayed on the live interface, but in response to the second user's trigger instruction for an interactive-task initiation control on the live interface, an interactive operation interface is displayed, on which an interactive shooting control is shown. The present application does not limit the form of the interactive shooting control or its position in the live interface. In some embodiments, the interactive shooting instruction is generated when a keyword is triggered during communication between the second user and the first user. Optionally, the first user and the second user communicate through a voice connection; when the second user mentions the keyword "initiate interactive shooting" in the connected voice, the audience terminal device recognizes the keyword in the speech and initiates the interactive shooting instruction. Optionally, the keyword can be customized by the second user or the first user, or can be preset by the program or updated by the server. Optionally, the second user communicates with the first user by sending text on the public screen of the live interface; when a keyword is recognized in the text sent by the second user, the second user is considered to have initiated the interactive shooting instruction. The embodiments of the present application do not limit the specific content of the keyword; besides "initiate interactive shooting" above, content such as "make a funny face" or "make a smiley face" can also serve as keywords. In some embodiments, a keyword recognition model is provided in the live streaming program corresponding to the terminal device, and the keyword recognition model is used to recognize the keywords in the above speech or text. In other embodiments, the keyword recognition model is provided on the server; the audience terminal device sends the text or the spoken voice information of the second user to the server, the keyword recognition model on the server performs recognition, and the recognition result is fed back to the audience terminal device. For example, the recognition result is displayed in text form on the live interface: "keyword detected, sending interactive shooting instruction to the anchor". The keyword recognition model is a pre-trained neural network model or another algorithm model that can be used to detect keywords. In some embodiments, the interactive shooting instruction is initiated by the second user, that is, by the audience terminal device; the interactive shooting instruction can be sent to the server and forwarded by the server to the anchor terminal device. Optionally, the audience terminal device initiates the interactive shooting instruction and sends it directly to the anchor terminal device. The present application does not limit the transmission flow of the interactive shooting instruction.
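The keyword-trigger variant described above (detecting a phrase such as "initiate interactive shooting" in the second user's public-screen text or transcribed voice) could be sketched as follows. The keyword list and the simple substring matching stand in for the keyword recognition model mentioned in the embodiment, whose form is not specified; both are illustrative assumptions.

```python
DEFAULT_KEYWORDS = ["initiate interactive shooting",
                    "make a funny face",
                    "make a smiley face"]

def detect_shoot_keyword(message, keywords=DEFAULT_KEYWORDS):
    """Return the first matched keyword in a chat message or voice
    transcript, or None if no keyword is present. Case-insensitive
    substring matching is an assumption for this sketch."""
    text = message.lower()
    for kw in keywords:
        if kw in text:
            return kw
    return None
```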
互动拍摄指令用于请求第一用户拍摄图像。在一些实施例中,基于互动拍摄指令而拍摄出来的图像针对的对象是第一用户。也即,互动拍摄指令是用于请求第一用户拍摄关于第一用户的图像。可选地,第二用户想要第一用户拍摄关于第一用户的照片,并附带相关要求信息。例如,第二用户想要第一用户拍摄关于第一用户微笑的图像、第二用户想要第一用户拍摄关于第一用户悲伤的图像。在另一些实施例中,基于互动拍摄指令而拍摄出来的图像针对的对象并非第一用户。也即,互动拍摄指令是用于第一用户拍摄关于除第一用户以外的其他对象的图像。可选地,第一用户想要第二用户拍摄关于烹饪的图像、第二用户想要第二用户拍摄关于昆虫的图像等等。在基于互动拍摄指令而拍摄出来的图像针对的对象并非第一用户时,拍摄出来的图像可以针对自然自然界存在物体,包括但不限于昆虫、植物、动物等等。
本申请实施例中,对于图像的类型不作限定,可以是照片,也可以是视频。当拍摄的图像是照片时,则存在特定的拍摄时刻。当拍摄的图像是视频时,则存在开始拍摄视频的时刻以及结束拍摄视频的时刻,或者是拍摄时长。可选地,上文实施例中的关键词还可以是“请主播拍摄一段5秒的关于悲伤的视频”,当检测到文字或者语音中存在关键词时,发起互动拍摄指令。在一些实施例中,第二用户的互动拍摄指令用于请求第一用户拍摄图像,并将拍摄得到的图像发送给第二用户。
在一些实施例中,互动拍摄过程中的直播画面包括开始互动拍摄到结束互动拍摄之间的直播画面。在一些实施例中,开始互动拍摄以及结束互动拍摄均有信息提示或者语音提示。可选地,在互动拍摄过程中,直播界面上会存在虚拟摄像机,通过展示虚拟摄像机的方式,给第二用户更好的代入体验。在一些实施例中,互动拍摄指令是用于请求第一用户拍摄关于第一用户的图像,并且互动拍摄指令对应的要求是“主播摆出开心的表情”。则在互动拍摄过程中,展示主播用户从开始准备拍摄到摆出开心的表情的直播画面,直到互动拍摄结束之后,再展示正常的直播画面。
在一些实施例中,当观众终端设备发起互动拍摄指令之后,主播终端设备接收到互动拍摄指令,该互动拍摄指令可以显示在第一用户的直播界面上,并且第一用户可以选择接受该任务或者不接受该任务。可选地,响应于第一用户对互动拍摄指令的接受指令,认为第二用户的互动拍摄指令被第一用户应答,其中接受指令可以响应于触发操作等生成。可选地,响应于识别到第一用户的语音信息中存在“接受”等关键词,认为第二用户的互动拍摄指令被第一用户应答。当第一用户选择接受该任务时,认为第二用户的互动拍摄指令被第一用户应答,则可以给予第一用户相应的准备时长,在达到准备时长之后,开始拍摄图像。在另一些实施例中,当第一用户选择拒绝接受该任务时,认为第二用户的互动拍摄指令没有被第一用户应答,不开始拍摄图像。在一些实施例中,当第一用户没有应答第二用户的互动拍摄指令时,会给予第一用户相应的虚拟惩罚,例如,降低第一用户的虚拟魅力值、虚拟热度值等等。
本申请实施例中的拍摄得到的图像的获取方式包括但不限于以下两种。在一些实施例中,通过第一用户的主播终端设备的摄像头进行拍摄而得到的图像。在另一些实施例中,通过截取第一用户正在直播的直播画面上的特定区域的画面作为拍摄得到的图像。可选地,在第二用户的互动拍摄指令被第一用户应答的情况下,确定拍摄位置,此处拍摄位置可以是主播终端设备用于拍摄图像的位置,也可以是用于截取第一用户的直播画面上的对应位置的画面作为拍摄得到的图像。
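其中“截取第一用户正在直播的直播画面上的特定区域的画面作为拍摄得到的图像”这一获取方式,可以用如下Python代码作简要示意(画面以二维像素列表表示,坐标与尺寸均为本文假设):

```python
# 示意:从直播画面(以二维像素列表示意)中截取指定矩形区域,
# 作为"拍摄得到的图像"。
def crop_frame(frame, top, left, height, width):
    """截取 frame 中以 (top, left) 为左上角、大小为 height x width 的区域。"""
    return [row[left:left + width] for row in frame[top:top + height]]

frame = [[r * 10 + c for c in range(10)] for r in range(6)]  # 6x10 的示意画面
patch = crop_frame(frame, 1, 2, 2, 3)  # 截取 2x3 的区域
```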
步骤230,显示第二用户获得的互动拍摄图像,互动拍摄图像是在互动拍摄过程中得到的。
互动拍摄图像包括上述拍摄得到的图像。在一些实施例中,互动拍摄图像中还可以包括但不限于主播的名字(或昵称、标识)、发起互动拍摄指令的观众用户的名字(或昵称、标识)、图像拍摄的时间、图像拍摄的地点等等。在一些实施例中,互动拍摄图像对应有多种样式。可选地,在互动拍摄图像的中间区域显示上述拍摄得到的图像,例如拍摄得到的图像是针对主播用户的。在互动拍摄图像的上方显示互动拍摄指令对应的要求信息,例如要求信息是“主播摆出开心的表情”。在互动拍摄图像的下方显示该图像的互动信息,例如互动信息是“B用户赠与A用户”,其中A用户是观众用户,B用户是主播用户。在另一些实施例中,互动拍摄图像中还可以由主播用户自行添加文字信息或者内容信息,例如,主播用户在互动拍摄图像中增加文字信息“祝你天天开心”,此处文字信息可以由主播用户自行添加,也可以由程序设定好,但是需要主播用户手动添加到互动拍摄图像中。
在一些实施例中,仅在发起互动拍摄指令的观众用户的直播界面上显示互动拍摄图像,而其他未发起互动拍摄指令的观众用户的直播界面上并不显示互动拍摄图像。通过此种方式,保护了发起互动拍摄指令的观众用户的隐私,更好地维护观众用户的利益。在另一些实施例中,在所有观众用户的直播界面上显示互动拍摄图像,通过公开显示的方式,提高发起互动拍摄指令的观众用户的用户体验,同时还可以丰富直播互动的方式。
本申请提供了一种新的直播互动方式,由观众用户发起互动拍摄指令,在互动拍摄指令被主播用户应答的情况下,显示主播用户在互动拍摄过程中的直播画面,并显示观众用户获得的互动拍摄图像。通过本申请实施例提供的技术方案,可以由观众用户发起互动拍摄指令,而由主播用户拍摄并给予观众用户互动拍摄图像。由于互动拍摄指令的不同,因此得到的互动拍摄图像也不相同,因此,互动拍摄图像存在着未知性和随机性,从而丰富了直播互动的方式,增加了直播互动的趣味性。
请参考图3,其示出了本申请另一个实施例提供的直播互动方法的流程图,该方法各步骤的执行主体可以是图1所示方案实施环境中的观众终端设备11,如各步骤的执行主体可以是观众客户端。在下文方法实施例中,为了便于描述,仅以各步骤的执行主体为“观众客户端”进行介绍说明。该方法可以包括如下几个步骤(210~230)中的至少一个步骤:
步骤210,显示第一用户的直播界面,直播界面用于展示第一用户的直播内容。
步骤222,显示第一提示信息,第一提示信息用于引导第一用户进行拍摄。
第一提示信息:用于引导主播用户进行拍摄的提示信息。可选地,提示信息的形式包括但不限于文字、语音、图案等等。可选地,第一提示信息显示在直播画面上方的蒙层上,可选地,该蒙层上的提示信息不可以被观众用户或者主播用户操作或者控制。可选地,该蒙层对应有第一不透明度信息,该第一不透明度信息表征该蒙层的不透明度。在一些实施例中,当开始进行拍摄时,在直播画面所在图层的上方设置蒙层,并将第一提示信息在蒙层上进行展示。可选地,该蒙层设置在控件所在图层的下方,当观众用户对控件进行操作时,并不会影响蒙层中的第一提示信息。本申请实施例提供的技术方案,通过设置蒙层,并在蒙层上显示第一提示信息,可以避免观众用户对第一提示信息的误触,并且将蒙层的提示信息与直播画面相结合展示在直播界面中,使得展示出来的直播效果更加符合“拍摄”这一场景,提高观众用户以及主播用户的代入体验感。
在一些实施例中,第一提示信息包括:第一对象信息,第一对象信息用于指示第一用户拍摄的图像所针对的对象。可选地,第一对象信息是主播,则主播用户需要拍摄的图像所针对的对象是主播,也即,此时主播自拍即可。可选地,第一对象信息是宠物,则主播用户需要拍摄的图像所针对的对象是宠物,也即,此时主播需要对宠物进行拍摄。
在一些实施例中,第一提示信息包括:第一要求信息,第一要求信息用于指示第一用户拍摄的图像所需满足的要求。在一些实施例中,第一要求信息包括但不限于主题信息、风格信息、姿态信息、表情信息等等。可选地,第一要求信息包括主题信息,可选地,主题信息包括但不限于日常主题、古风主题、漫画主题等等。可选地,风格信息包括但不限于嘻哈风格、爵士风格、学生风格等等。可选地,姿态信息包括但不限于举手、抬头、亲吻等等。可选地,表情信息包括但不限于开心表情、悲伤表情、流泪表情、遗憾表情等等。在一些实施例中,例如第一对象是主播,第一要求信息是要求主播的穿衣风格是学生风格,则给主播一段时间的准备时间之后,主播需要表现出学生风格的穿衣风格,并且进行拍摄得到互动拍摄图像。在一些实施例中,例如第一对象是主播,第一要求信息是要求主播的表情是悲伤,则给主播一段时间的准备时间之后,主播需要表现出悲伤的表情,并且进行拍摄得到互动拍摄图像。在一些实施例中,例如第一对象是主播的宠物,第一要求信息是要求宠物做出举手的动作,则给主播以及宠物一段时间的准备时间之后,宠物需要表现出举手的动作,并且进行拍摄得到互动拍摄图像。
在一些实施例中,第一提示信息包括:第一位置信息,第一位置信息用于指示第一用户拍摄的图像所在的区域。可选地,第一用户拍摄的图像所在的区域的形状包括但不限于圆形、矩形、扇形,本申请对于第一用户拍摄的图像所在区域的形状不作限定,该形状可以由第一用户选择确定,也可以由服务器自行确定。在一些实施例中,位置信息以边框的形式展示,也即,对于第一用户拍摄的图像所在的区域的边缘以边框的形式凸显出来,可选地,边框是圆形、矩形或者扇形。因此可以直观地告知主播,拍摄的图像所在的区域。可选地,还可以以不同清晰度或者透明度来展示第一用户拍摄的图像所在的区域,也即在蒙层上对于第一用户拍摄的图像所在的区域和其他区域设置的灰度或者透明度不同,可选地,第一用户拍摄的图像所在的区域的清晰度或者透明度较高,而在蒙层上除去第一用户拍摄的图像所在区域的其他区域的清晰度或者透明度较低,以将第一用户拍摄的图像所在的区域突出或者高亮显示。如图4的子图a所示,第一要求信息是“开心”。
在一些实施例中,第一提示信息包括:第一时间信息,第一时间信息用于指示第一用户开始拍摄图像之前的准备时长或拍摄图像的持续时长。在一些实施例中,第一时间信息是准备时长,准备时长由程序设定,也可以由观众用户或者主播用户手动进行调整,还可以基于其他观众用户给主播赠送的虚拟道具而延长或者缩短。可选地,准备时长以倒计时的方式进行展示。在一些实施例中,准备时长是1分钟,不过主播需要的准备时长仅为20秒,则在20秒之后,主播可以手动选择开始拍摄图像,准备时长被缩短,直接开始拍摄。在另一些实施例中,给予主播的准备时长并不够主播准备,此时观众用户可以通过赠送虚拟道具给主播用户的方式延长主播的准备时长;同样地,为了增加互动趣味性,也可以通过给主播用户赠送虚拟道具的方式缩短主播的准备时长。在另一些实施例中,第一时间信息还包括拍摄图像的持续时长,也即,当图像是视频时,通过显示拍摄图像的持续时长来给主播以提示,以便主播得知还需要拍摄的时长。如图4的子图b所示,直播界面上显示有拍摄倒计时。
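上述准备时长可被虚拟道具延长或缩短、也可由主播手动提前结束的逻辑,可以用如下Python代码作简要示意(类名、方法名与时长数值均为本文假设):

```python
# 示意:准备时长倒计时,支持按虚拟道具延长/缩短,主播也可提前开始拍摄。
class PrepTimer:
    def __init__(self, seconds):
        self.remaining = seconds

    def gift_adjust(self, delta):
        """观众赠送道具使准备时长延长(delta>0)或缩短(delta<0),不低于 0。"""
        self.remaining = max(0, self.remaining + delta)

    def skip(self):
        """主播手动选择立即开始拍摄,剩余准备时长归零。"""
        self.remaining = 0

t = PrepTimer(60)
t.gift_adjust(-20)   # 道具缩短 20 秒,剩余 40 秒
t2 = PrepTimer(15)
t2.skip()            # 主播提前开始拍摄
t3 = PrepTimer(5)
t3.gift_adjust(-10)  # 缩短量超过剩余时长时,截断到 0
```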
在一些实施例中,第一提示信息包括:第一数量信息,第一数量信息用于指示第一用户拍摄的图像的数量。在一些实施例中,响应于多条互动拍摄指令,显示第一用户的多张互动拍摄图像。在一些实施例中,响应于一条互动拍摄指令,显示第一用户的多张互动拍摄图像,该一条互动拍摄指令中包括对多个虚拟拍摄道具的使用指令,或者包括多张互动拍摄图像的拍摄指令。此处多张互动拍摄图像的拍摄指令可以通过提取识别观众用户的关键词而生成,例如观众用户在公屏中说“请主播拍摄三张图片”,则基于该关键词,生成包括多张互动拍摄图像的拍摄指令的互动拍摄指令。可选地,主播用户的一次互动拍摄过程中,可以获取多张互动拍摄图像,也可以称作“连拍”。可选地,主播用户的一次互动拍摄过程中,可以获取多段互动拍摄视频。
步骤224,显示第一用户在互动拍摄过程中,根据第一提示信息进行拍摄时的直播画面。
在一些实施例中,在互动拍摄过程中,显示蒙层以及直播画面。蒙层上显示有第一提示信息,主播用户根据蒙层上的第一提示信息,表现出不同的姿态、表情或者动作,此时的直播画面就是主播用户根据第一提示信息进行拍摄的直播画面。
在一些实施例中,蒙层中展示第一要求信息以及第一准备时长信息,其中第一要求信息显示在第一位置信息对应的拍摄区域的上方,第一准备时长信息以倒计时的方式展示。则主播用户可以根据展示的第一要求信息,实时调整自身的动作、表情以使得表现出来的画面是符合第一要求信息的画面。在倒计时降为0的时候,拍摄得到的图像作为互动拍摄图像。在一些实施例中,不仅显示第一用户在互动拍摄过程中,根据第一提示信息进行拍摄时的直播画面,还显示根据第一提示信息进行拍摄前的直播画面。比如在准备拍摄倒计时的时候,显示直播画面。
步骤230,显示第二用户获得的互动拍摄图像,互动拍摄图像是在互动拍摄过程中得到的。
如图5的500所示,其示出了第二用户获得互动拍摄图像。
本申请实施例提供的技术方案,通过显示第一提示信息,使得主播用户可以根据提示信息来调整直播画面。同时显示第一用户在互动拍摄过程中,根据第一提示信息进行拍摄时的直播画面,也就是说直播是不间断的,将准备拍摄时的直播画面也展现出来,因此直播互动更加透明,拉近主播和用户之间的距离,丰富了直播互动的形式。
同时第一提示信息中还包括第一要求信息,使得直播在具有趣味性的同时,也提高了难度,增加了直播互动的挑战性。
请参考图6,其示出了本申请另一个实施例提供的直播互动方法的流程图,该方法各步骤的执行主体可以是图1所示方案实施环境中的观众终端设备11,如各步骤的执行主体可以是观众客户端。在下文方法实施例中,为了便于描述,仅以各步骤的执行主体为“观众客户端”进行介绍说明。该方法可以包括如下几个步骤(210~230)中的至少一个步骤:
步骤210,显示第一用户的直播界面,直播界面用于展示第一用户的直播内容。
步骤211,显示互动拍摄道具,互动拍摄道具用于触发生成互动拍摄指令。
互动拍摄道具包括但不限于虚拟礼物或者控件。在一些实施例中,互动拍摄道具是虚拟礼物,可选地,虚拟礼物是拍立得。在一些实施例中,观众用户可以一次送出多个互动拍摄道具,从而触发生成互动拍摄指令,获得多张互动拍摄图像。在一些实施例中,互动拍摄道具是虚拟控件,虚拟控件可以设置在直播界面上,也可以不设置在直播界面上。可选地,若虚拟控件并未设置在直播界面上,则可以响应于对直播界面上的互动操作发起控件的触发操作,显示互动操作发起界面,在互动操作发起界面上显示有虚拟控件。可选地,响应于对互动拍摄道具对应的虚拟礼物或者虚拟控件的点击、长按等操作,生成互动拍摄指令。
如图7所示的直播界面上,子图a中的60为虚拟礼物“拍立得”,图7的子图b以及图8的子图c、d展示了在第二用户第一次使用拍立得时,会给出相关的操作指引。
在一些实施例中,步骤211之后还包括步骤211-1。
步骤211-1:显示排队提示信息,排队提示信息用于指示第二用户的互动拍摄指令的排队进程;其中,排队进程包括以下至少之一:等待人数、预计等待时长、优先排队提示、被优先排队提示。
在一些实施例中,排队提示信息显示在直播界面的任意位置。可选地,排队提示信息显示在直播界面的中心区域。可选地,排队进程包括等待人数,则可以告知观众用户当前已经发起互动拍摄指令的人数。可选地,排队进程包括预计等待时长,则可以告知观众用户需要等待的时长,在等待时长过长时,观众用户可以不触发生成互动拍摄指令,在等待时长较短时,再触发生成互动拍摄指令。可选地,排队进程包括优先排队提示,则当第二用户触发生成的互动拍摄指令所对应的虚拟礼物的价值较高时,在第二用户对应的观众终端设备上可以显示优先排队提示;相应的,当第四用户触发生成的互动拍摄指令所对应的虚拟礼物的价值高于第二用户触发生成的互动拍摄指令所对应的虚拟礼物的价值时,在第二用户对应的观众终端设备上显示被优先排队提示,而在第四用户对应的观众终端设备上显示优先排队提示。本申请对于优先排队提示以及被优先排队提示的具体内容不作限定,可选地,优先排队提示是“您所赠送的互动礼物的价值高于其他用户,已为您优先排队”,可选地,被优先排队提示是“用户xx送出的互动礼物价格更高,已被优先排队”,其中互动礼物是指互动拍摄道具对应的虚拟礼物。如图9所示的直播界面中,子图a表示第二用户送出虚拟礼物的提示信息,子图b中的70表示优先排队提示,71表示被优先排队提示,72表示第一用户可以执行互动拍摄操作,处于接单状态;相应地,第一用户也可以处于拒绝接单状态,而不执行互动拍摄任务,也无法应答第二用户发起的互动拍摄指令。可选地,在第一用户处于拒绝接单状态时,第二用户无法发起互动拍摄指令。如图10所示,子图c表示消耗了虚拟礼物“拍立得”之后,可以优先排队。子图d中的75表示xx用户赠送了虚拟礼物,并显示当前排队人数。
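上述等待人数与预计等待时长的排队提示,可以用如下Python代码作简要示意(单次拍摄的平均耗时与提示文案均为本文假设):

```python
# 示意:根据第二用户前方的等待人数,估算预计等待时长并生成排队提示信息。
def queue_prompt(position, avg_seconds_per_task=90):
    """position 为第二用户前方的等待人数;avg_seconds_per_task 为
    单次互动拍摄的平均耗时(秒,示意数值)。返回提示文本。"""
    wait = position * avg_seconds_per_task
    return f"前方等待 {position} 人,预计等待约 {wait} 秒"

msg = queue_prompt(3)
```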
本申请实施例提供的技术方案,通过设置发起互动拍摄任务所消耗的虚拟礼物的数量,可以一次消耗多个数量的虚拟礼物,使得第二用户不必对于每一个虚拟礼物都执行一次互动操作,有助于降低操作复杂度,同时减少终端设备和服务器的处理开销。
进一步的,通过显示排队提示信息,可以给第二用户以提醒,当排队人数较多时,第二用户可以选择不触发生成互动拍摄指令,减轻服务器的压力。
步骤212,响应于针对互动拍摄道具的使用指令,显示拍摄要求设置界面。
拍摄要求设置界面:用于设置拍摄要求的界面。在一些实施例中,第二用户可以对于第一要求信息进行设置,因此响应于第二用户针对互动拍摄道具的使用指令,显示拍摄要求设置界面。
可选地,使用指令是响应于第二用户对互动拍摄道具的使用操作而生成的。其中,使用操作包括但不限于点击、长按、滑动等等,本申请实施例对于使用指令和使用操作的具体类型不作限定。
步骤213,在拍摄要求设置界面中,显示由第二用户设置的第一要求信息,第一要求信息用于指示第一用户拍摄的图像所需满足的要求。
关于第一要求信息的相关解释说明参见上述实施例,本申请实施例中对于第一要求信息的确定方式不作限定,可选地,第一要求信息是第二用户设置的。
在一些实施例中,第一要求信息是根据第二用户的设置而确定的。在一些实施例中,第一要求信息包括但不限于主题信息、风格信息、姿态信息、表情信息等等,第二用户可以在拍摄要求设置界面上设置喜欢的主题信息、风格信息、姿态信息、表情信息等等。在一些实施例中,第二用户在设置第一要求信息时,可以是在给定的几个选项中选择想要的要求,也可以是自行输入要求。在一些实施例中,如果是第二用户自行输入要求信息,则需要消耗一定的虚拟资源。
除第一要求信息之外,在拍摄要求设置界面中,还可以显示由第二用户设置的其他第一提示信息,也即第一对象信息、第一位置信息、第一时间信息、第一数量信息等都可以由第二用户在拍摄要求设置界面上进行设置。
步骤214,响应于针对互动拍摄道具的使用指令,向第一用户的客户端发送所述第二用户的互动拍摄指令。
可选地,使用指令是响应于第二用户对互动拍摄道具的使用操作而生成的。其中,使用操作包括但不限于点击、长按、滑动等等,本申请实施例对于使用指令和使用操作的具体类型不作限定。
步骤220,在第二用户的互动拍摄指令被第一用户应答的情况下,显示第一用户在互动拍摄过程中的直播画面;其中,第二用户的互动拍摄指令用于请求第一用户拍摄图像。
步骤230,显示第二用户获得的互动拍摄图像,互动拍摄图像是在互动拍摄过程中得到的。
本申请实施例提供的技术方案中,可以由第二用户自行在拍摄要求设置界面上设置第一要求信息,可以提高观众用户和主播用户的交互性,同时,给予送出虚拟礼物的第二用户以特殊权限,以和其他观众用户作区分,来提升送出虚拟礼物的观众用户的体验感。同时,第一要求信息是由第二用户自行设定的,可以丰富第一要求信息的内容,进一步丰富互动拍摄图像的内容和形式。
请参考图11,其示出了本申请另一个实施例提供的直播互动方法的流程图,该方法各步骤的执行主体可以是图1所示方案实施环境中的观众终端设备11,如各步骤的执行主体可以是观众客户端。在下文方法实施例中,为了便于描述,仅以各步骤的执行主体为“观众客户端”进行介绍说明。该方法可以包括如下几个步骤(210~250)中的至少一个步骤:
步骤210,显示第一用户的直播界面,直播界面用于展示第一用户的直播内容。
步骤220,在第二用户的互动拍摄指令被第一用户应答的情况下,显示第一用户在互动拍摄过程中的直播画面;其中,第二用户的互动拍摄指令用于请求第一用户拍摄图像。
在一些实施例中,执行步骤220之前,还包括步骤250。
步骤250,在第二用户的互动拍摄指令被第一用户应答的情况下,通过摄像头采集第二用户的视频画面,将第二用户的视频画面发送给服务器;其中,第一用户在互动拍摄过程中的直播画面包括:第一用户的视频画面和第二用户的视频画面。
在一些实施例中,当第二用户发起互动拍摄指令之后,第二用户可以选择是否和第一用户合拍,如果第二用户不选择和第一用户合拍,则第一用户在互动拍摄过程中的直播画面仅包括第一用户的视频画面。如果第二用户选择和第一用户合拍,则第一用户在互动拍摄过程中的直播画面包括第一用户的视频画面和第二用户的视频画面。也即,当第二用户选择合拍时,在第二用户对应的观众终端设备的直播界面上显示第二用户的视频画面。可选地,其他观众终端设备上可以显示第二用户的视频画面,也可以不显示第二用户的视频画面,此处可以由第二用户自行设定是否将自己的视频画面展示在其他观众终端设备或主播终端设备的直播界面上,也可以由服务器预先设定好是否将第二用户的视频画面展示在其他观众终端设备或主播终端设备的直播界面上。
在一些实施例中,当直播界面上显示第二用户的视频画面以及第一用户的视频画面时,第一用户的视频画面出现在直播界面的第一区域,而第二用户的视频画面出现在直播界面的第二区域,第一区域和第二区域并不重合,并且区域的位置可以由第二用户自行确定。在一些实施例中,服务器接收到第二用户的终端设备发送的第二用户的视频画面时,可以对第一用户的视频画面以及第二用户的视频画面作背景处理,使得在直播界面上展示出来的第一用户的视频画面以及第二用户的视频画面存在相同的背景,或者说,在展示出来的直播界面上,第一用户和第二用户出现在同一片背景中。通过背景处理的方式,使得第一用户和第二用户仿佛出现在同一场景,使得最终确定的互动拍摄图像更加符合合拍的本质,更能体现合拍的意义,互动拍摄图像对于第二用户来说更加特殊,进一步提升第二用户的体验感。
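上述第一区域与第二区域不重合的布局约束,可以用如下Python代码作简要示意(矩形以 (left, top, width, height) 表示,坐标数值均为本文假设):

```python
# 示意:判断两个画面区域(矩形)是否重合,用于校验合拍时
# 第一用户画面与第二用户画面的布局满足"不重合"约束。
def rects_overlap(a, b):
    """a、b 为 (left, top, width, height);返回两矩形是否有重叠面积。"""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

anchor_region = (0, 0, 160, 90)    # 第一区域(示意坐标)
viewer_region = (160, 0, 160, 90)  # 第二区域,与第一区域相邻但不重合
```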
在一些实施例中,第二用户对应的观众终端设备通过摄像头采集第二用户的视频画面。可选地,第二用户的互动拍摄指令被第一用户应答的情况下,第二用户对应的观众终端设备开始采集第二用户的视频画面,可选地,主播终端设备的直播界面上显示第二用户的视频画面,也可以不显示第二用户的视频画面。
在一些实施例中,在第二用户的互动拍摄指令被第一用户应答且被第二用户应答的情况下,显示第一用户的视频画面以及第二用户的视频画面。
在一些实施例中,步骤230之前还包括步骤260(图中未示出)。
步骤260,在互动拍摄过程中,显示第二提示信息,第二提示信息用于引导第二用户与第一用户进行合拍。
第二提示信息:用于引导主播用户以及第二用户进行拍摄的提示信息。可选地,提示信息的形式包括但不限于文字、语音、图案等等。可选地,第二提示信息显示在直播画面上方的蒙层上,此处关于蒙层的解释参见上述实施例,此处不再赘述。
在一些实施例中,第二提示信息包括:第二对象信息,第二对象信息用于指示第一用户拍摄的图像中,第一用户的视频画面所针对的对象。可选地,第二对象信息是主播,也即,此时主播自拍即可。可选地,第二对象信息是宠物,则主播用户需要拍摄的图像所针对的对象是宠物,也即,此时主播需要对宠物进行拍摄。
在一些实施例中,第二提示信息包括:第二要求信息,第二要求信息是对于第一用户所拍摄的图像中关于第一用户的视频画面的要求信息。在另一些实施例中,第二提示信息中包括第三要求信息,第三要求信息是对于第二用户的要求信息。在一些实施例中,第二要求信息在第一用户的直播界面中显示,而第三要求信息在第二用户的直播界面中显示。在一些实施例中,第二要求信息和第三要求信息包括但不限于主题信息、风格信息、姿态信息、表情信息等等。此处参见上述实施例中关于第一要求信息的解释说明,此处不再赘述。在一些实施例中,第二要求信息是“主播右手比心”,而第三要求信息是“观众左手比心”,则当主播和观众均按照要求进行操作之后,可以由主播用户拍摄得到主播和观众比出完整的心的互动拍摄图像,并将该图像显示在第二用户的直播界面上。
在一些实施例中,第二提示信息包括:第二位置信息,第二位置信息用于指示第一用户拍摄的图像中第一用户的视频画面所在的区域。可选地,第一用户拍摄的图像中第一用户的视频画面所在的区域的形状包括但不限于圆形、矩形、扇形,本申请对于第一用户拍摄的图像所在区域的形状不作限定,该形状可以由第一用户选择确定,也可以由服务器自行确定。在一些实施例中,第二提示信息包括:第三位置信息,第三位置信息用于指示第一用户拍摄的图像中第二用户的视频画面所在的区域。可选地,第一用户拍摄的图像中第二用户的视频画面所在的区域的形状包括但不限于圆形、矩形、扇形,本申请对于第一用户拍摄的图像所在区域的形状不作限定,该形状可以由第一用户选择确定,也可以由服务器自行确定。在一些实施例中,第二位置信息以及第三位置信息均以边框的形式展示,此处参见上述对第一位置信息的解释说明,此处不再赘述。
在一些实施例中,第二提示信息包括:第二时间信息,第二时间信息用于指示第一用户开始拍摄图像之前的准备时长或拍摄图像的持续时长。此处参见上述第一时间信息的解释说明,此处不再赘述。
在一些实施例中,第二提示信息包括:第二数量信息,第二数量信息用于指示第一用户拍摄的图像的数量。此处参见上述第一数量信息的解释说明,此处不再赘述。
步骤230,显示第二用户获得的互动拍摄图像,互动拍摄图像是在互动拍摄过程中得到的。
步骤240,响应于针对互动拍摄图像的查看指令,显示互动拍摄相册,互动拍摄相册用于保存第二用户获得的互动拍摄图像。
查看指令:响应于第二用户对互动拍摄图像的查看操作而生成的。其中,查看操作包括但不限于点击、长按、滑动等等,本申请实施例对于查看指令和查看操作的具体类型不作限定。
互动拍摄相册:保存第二用户获得互动拍摄图像的相册。在一些实施例中,响应于第二用户对于互动拍摄图像的查看操作,在第二用户对应的观众终端设备上显示互动拍摄相册。在一些实施例中,互动拍摄相册中保存有不同的主播的互动拍摄图像,在一些实施例中,互动拍摄相册中保存有不同的要求信息的互动拍摄图像。在一些实施例中,当第一要求是表情时,则根据表情的不同对互动拍摄图像作不同的分类。例如表情包括开心、悲伤、哭泣等等,则对应到这几类表情,对互动拍摄图像进行分类保存。
在一些实施例中,响应于第二用户对于互动拍摄相册的筛选操作,从多个互动拍摄图像中筛选出符合筛选操作要求的互动拍摄图像。可选地,筛选操作是针对筛选控件的操作,包括但不限于点击、长按、滑动等等,本申请实施例对于筛选操作的具体类型不作限定。其中筛选控件的类型可以对应到不同的主播名称、不同的要求信息。可选地,响应于对“主播A”的筛选控件的筛选操作,从多个互动拍摄图像中筛选出主播A对应的互动拍摄图像。可选地,响应于对“表情为开心”的筛选控件的筛选操作,从多个互动拍摄图像中筛选出符合“表情为开心”的互动拍摄图像。
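上述按主播名称或要求信息筛选互动拍摄相册的逻辑,可以用如下Python代码作简要示意(相册条目的字段名与内容均为本文假设):

```python
# 示意:互动拍摄相册按主播名称或要求信息筛选图像条目。
album = [
    {"anchor": "主播A", "requirement": "开心"},
    {"anchor": "主播A", "requirement": "悲伤"},
    {"anchor": "主播B", "requirement": "开心"},
]

def filter_album(album, **conditions):
    """返回同时满足全部筛选条件(字段名=值)的互动拍摄图像条目。"""
    return [item for item in album
            if all(item.get(k) == v for k, v in conditions.items())]

happy_of_a = filter_album(album, anchor="主播A", requirement="开心")
```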
本申请实施例提供的技术方案中,建立互动拍摄相册,用于第二用户保存互动拍摄图像,在第二用户想要查看图像时,可以有针对性地进行筛选,简化了用户的操作。如图12所示,子图a显示了互动拍摄相册,并对应不同的要求有不同的互动拍摄图像,如图12的子图b所示,当筛选互动拍摄图像时,以第一要求信息为“开心”为筛选条件,展示与“开心”对应的互动拍摄图像。
另外,本申请实施例提供的技术方案,可以让第一用户和第二用户进行合拍,以满足第二用户和第一用户合拍的想法,并通过调整虚拟背景的方式,使得拍摄出来的互动拍摄图像更加逼真,也更具收藏和保存价值,因此可以较好地提升第二用户的直播体验感,丰富直播互动方式。
并且,在进行合拍的时候,通过显示相应的提示信息的方式,以获取到较为符合预期的互动拍摄图像,提升获取互动拍摄图像的效率。
请参考图13,其示出了本申请一个实施例提供的直播互动方法的流程图。该方法各步骤的执行主体可以是图1所示方案实施环境中的主播终端设备13,如各步骤的执行主体可以是主播客户端。在下文方法实施例中,为了便于描述,仅以各步骤的执行主体为“主播客户端”进行介绍说明。该方法可以包括如下几个步骤(310~330)中的至少一个步骤:
步骤310,在第一用户进行直播的过程中,显示基于第二用户的互动拍摄指令生成的互动拍摄信息;其中,第二用户的互动拍摄指令用于请求第一用户拍摄图像。
在一些实施例中,互动拍摄信息包括第二用户的标识信息、互动拍摄道具信息、互动拍摄道具对应的价值信息等等。
在一些实施例中,第二用户的互动拍摄指令用于请求第一用户拍摄图像,并将拍摄得到的图像发送给第二用户。
步骤320,响应于针对第二用户的互动拍摄指令的应答指令,显示第一用户在互动拍摄过程中的直播画面。
应答指令:基于第一用户的应答操作而生成的指令。应答操作包括但不限于点击、长按、滑动等等,本申请实施例对于应答指令和应答操作的具体类型不作限定。
步骤330,显示发送给第二用户的互动拍摄图像,互动拍摄图像是在互动拍摄过程中得到的。
在一些实施例中,主播终端设备获取到互动拍摄图像,并将该互动拍摄图像发送给观众终端设备。在一些实施例中,主播终端设备通过服务器将互动拍摄图像发送给观众终端设备。
本申请实施例提供了一种新的直播互动方式,由观众用户发起互动拍摄指令,在互动拍摄指令被主播用户应答的情况下,显示主播用户在互动拍摄过程中的直播画面,并显示观众用户获得的互动拍摄图像。通过本申请实施例提供的技术方案,可以由观众用户发起互动拍摄指令,而由主播用户拍摄并给予观众用户互动拍摄图像。由于互动拍摄指令的不同,因此得到的互动拍摄图像也不相同,因此,互动拍摄图像存在着未知性和随机性,从而丰富了直播互动的方式,增加了直播互动的趣味性。
请参考图14,其示出了本申请另一个实施例提供的直播互动方法的流程图。该方法各步骤的执行主体可以是图1所示方案实施环境中的主播终端设备13,如各步骤的执行主体可以是主播客户端。在下文方法实施例中,为了便于描述,仅以各步骤的执行主体为“主播客户端”进行介绍说明。该方法可以包括如下几个步骤(310~350)中的至少一个步骤:
步骤310,在第一用户进行直播的过程中,显示基于第二用户的互动拍摄指令生成的互动拍摄信息;其中,第二用户的互动拍摄指令用于请求第一用户拍摄图像。
在一些实施例中,步骤310之前还包括步骤301(图中未示出)。
步骤301,显示多条待执行的互动拍摄信息,其中,多条待执行的互动拍摄信息根据优先级进行显示,优先级与以下至少之一有关:互动拍摄信息对应的互动拍摄指令的生成时刻、互动拍摄信息对应的互动拍摄指令的支出资源。
支出资源包括但不限于真实货币、虚拟货币、虚拟能量等中的至少一种。
在一些实施例中,互动拍摄信息对应的互动拍摄指令的生成时刻越早,则对应的互动拍摄信息的优先级越高。在一些实施例中,互动拍摄信息对应的互动拍摄指令的支出资源越高,则对应的互动拍摄信息的优先级越高。
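上述按生成时刻与支出资源确定优先级的排序规则,可以用如下Python代码作简要示意(字段名与数值均为本文假设):

```python
# 示意:多条待执行的互动拍摄信息按优先级排序——支出资源越高越靠前,
# 支出资源相同时,互动拍摄指令生成时刻越早越靠前。
def sort_pending(infos):
    """infos 为待执行的互动拍摄信息列表,返回按优先级从高到低排序的结果。"""
    return sorted(infos, key=lambda x: (-x["spend"], x["created_at"]))

pending = [
    {"id": 1, "spend": 10, "created_at": 100},
    {"id": 2, "spend": 30, "created_at": 120},
    {"id": 3, "spend": 30, "created_at": 110},
]
ordered = sort_pending(pending)
```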
步骤322,响应于针对第二用户的互动拍摄指令的应答指令,显示第一提示信息,第一提示信息用于引导第一用户进行拍摄。
在一些实施例中,第一提示信息包括第一要求信息,第一要求信息用于指示第一用户拍摄的图像所需满足的要求。在一些实施例中,如图15所示,120表示第一要求信息,即“表情为开心”。如图16所示,121表示第一提示信息。如图17所示,122表示显示的互动拍摄图像。在一些实施例中,在显示互动拍摄图像之后,互动拍摄图像以滑动出直播界面或者渐淡的方式取消显示。
此处关于应答指令、第一提示信息,参见上述实施例中的解释说明,在此不作赘述。
步骤324,显示第一用户在互动拍摄过程中,根据第一提示信息进行拍摄时的直播画面。
步骤330,显示发送给第二用户的互动拍摄图像,互动拍摄图像是在互动拍摄过程中得到的。
在一些实施例中,步骤330之前还包括步骤326~步骤328(图中未示出)中的至少一个步骤。
步骤326,在互动拍摄过程中,若第一用户位于拍摄范围之内,则根据拍摄范围内的第一用户的图像,生成互动拍摄图像。
根据上述实施例,可知,第一提示信息中包括第一位置信息,用于表征拍摄的图像所在的区域,也即此处的拍摄范围。
步骤328,若第一用户不位于拍摄范围之内,则将与第一用户相关的设定图像确定为互动拍摄图像。
在一些实施例中,设定图像可以由服务器设定,也可以由第一用户自行设定。可选地,将第一用户开播时刻的直播画面作为设定图像。可选地,第一用户提前设定好设定图像,例如针对不同第一要求信息,提前拍摄好对应的设定图像。可选地,要求信息是“表情是开心”、“表情是悲伤”等等,则针对不同的表情,第一用户提前设定好对应的不同的图像,将这些图像作为设定图像。
在一些实施例中,若第一用户不位于拍摄范围之内,则从多张设定图像中找出与第一要求信息对应的设定图像作为互动拍摄图像。
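上述“在拍摄范围内用实拍图像、不在范围内用与第一要求信息对应的设定图像”的选取逻辑,可以用如下Python代码作简要示意(文件名与数据结构均为本文假设):

```python
# 示意:主播在拍摄范围内则使用实拍图像,否则从按要求信息预设的
# 设定图像中选取对应的一张作为互动拍摄图像。
PRESET_IMAGES = {"开心": "preset_happy.png", "悲伤": "preset_sad.png"}

def pick_interactive_image(in_range, captured, requirement,
                           presets=PRESET_IMAGES,
                           default="preset_opening.png"):
    """in_range 表示第一用户是否位于拍摄范围内;captured 为实拍图像;
    无对应预设时退回开播画面(示意)。"""
    if in_range:
        return captured
    return presets.get(requirement, default)

img = pick_interactive_image(False, "live_shot.png", "开心")
```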
步骤340,显示基于第一要求信息对互动拍摄图像进行打分,得到的互动拍摄图像的质量评分。
在一些实施例中,由服务器或者终端设备对互动拍摄图像进行打分,本申请对此不作限定。
在一些实施例中,在主播终端设备以及观众终端设备上显示质量评分。
在一些实施例中,根据互动拍摄图像与第一要求信息的符合程度,确定互动拍摄图像的质量评分。在一些实施例中,通过质量评分模型,根据互动拍摄图像与第一要求信息的符合程度,确定互动拍摄图像的质量评分。在一些实施例中,质量评分模型是神经网络模型或者其他用于对图像进行评分的模型,本申请不作限定。在一些实施例中,质量评分模型是提前训练好的,将一些互动拍摄图像作为训练样本,人工标注互动拍摄图像的质量评分作为训练标签,训练质量评分模型。
在一些实施例中,质量评分模型是分类模型,可选地,对应于每一种要求信息,都对应有一种最高评分的图像,通过比对该最高评分的图像与该要求信息下的互动拍摄图像的差异度,确定互动拍摄图像的质量评分。
在一些实施例中,从多种要求信息中确定出第一要求信息,以第一要求信息对应第一情绪表情为例,可选地,通过质量评分模型,对互动拍摄图像进行人脸检测;根据人脸检测结果,识别互动拍摄图像中第一用户的情绪表情;根据识别出来的第一用户的情绪表情与第一情绪表情的匹配程度,确定互动拍摄图像的质量评分。
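上述按情绪表情匹配程度打分的思路,可以用如下Python代码作简要示意;真实系统中情绪标签应由人脸检测与表情识别模型给出,此处直接以识别结果及其置信度作为输入,打分规则(按置信度给分、不匹配给保底分)亦为本文假设:

```python
# 示意:根据"识别出的情绪"与"要求的情绪"的匹配程度给互动拍摄图像打分。
EMOTIONS = ["愤怒", "厌恶", "恐惧", "快乐", "中性", "悲伤", "惊喜"]

def quality_score(predicted, confidence, required):
    """predicted 为表情识别模型给出的情绪标签,confidence 为其置信度(0~1),
    required 为第一要求信息对应的情绪。匹配时按置信度给分(满分100),
    不匹配时给保底分(示意规则)。"""
    if predicted not in EMOTIONS or required not in EMOTIONS:
        raise ValueError("未知情绪标签")
    return round(100 * confidence) if predicted == required else 10

score = quality_score("快乐", 0.92, "快乐")
```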
步骤350,显示第一用户获得的奖励信息,奖励信息用于指示第一用户拍摄互动拍摄图像所获得的奖励,奖励与质量评分有关。
在一些实施例中,奖励与质量评分呈正相关的关系,质量评分越高,奖励越丰厚。在一些实施例中,若互动拍摄图像是设定图像,可选地,该互动拍摄图像不参与质量评分的计算。可选地,不给予第一用户奖励。可选地,减少第一用户的奖励。
在一些实施例中,奖励包括但不限于虚拟货币、虚拟道具、虚拟魅力值、虚拟比拼值、虚拟金币等等。本申请对于奖励的具体形式不作限定,对于奖励对应的奖励信息的具体形式不作限定。
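上述“奖励与质量评分呈正相关、设定图像不计奖励”的发放规则,可以用如下Python代码作简要示意(每分对应的虚拟金币数为本文假设的比例系数):

```python
# 示意:按质量评分计算奖励;互动拍摄图像为设定图像时不给予奖励。
def reward_for(score, is_preset, coins_per_point=2):
    """score 为质量评分,is_preset 表示是否为设定图像;
    按每分 coins_per_point 个虚拟金币计奖(系数为示意假设)。"""
    if is_preset:
        return 0
    return max(0, score) * coins_per_point

r = reward_for(92, False)
```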
在一些实施例中,如图18所示的直播界面中,子图a中131表示质量评分,并且子图a展示了第一用户不位于拍摄范围之内时,将与第一用户相关的设定图像确定为互动拍摄图像,此时设定图像是开播图像。如图18的子图b所示,132同样示出了质量评分。如图19的子图c所示,主播可以停止接受礼物,也即停止执行互动拍摄任务。如图19的子图d所示,可以基于多个虚拟礼物,显示多张互动拍摄图像133。在一些实施例中,如图20所示的直播界面中,示出了多条待执行的互动拍摄信息140。图21示出了主播和其他主播进行连麦时,其中一个主播在执行互动拍摄任务时的直播界面,左边的主播正在执行互动拍摄任务,显示提示信息141。
在一些实施例中,当主播所在的主播终端设备为便携手持终端设备时,不可以使用该便携手持终端设备来执行互动拍摄任务;只有当主播所在的主播终端设备为非便携手持终端设备时,才可以使用该终端设备来执行互动拍摄任务。
本申请实施例提供的技术方案,通过在主播终端设备的直播界面上显示提示信息,便于主播用户根据提示信息进行下一步操作。同时,采用打分机制,可以促进主播的竞争意识,活跃直播氛围,进一步提升主播用户以及观众用户的直播体验感。另外,根据质量评分确定奖励,相对来说比较公平,也能够促进主播的参与积极性。
请参考图22,其示出了本申请一个实施例提供的直播互动方法的框图,该方法可以包括如下几个步骤(S1~S4)中的至少一个步骤:
步骤S1,第二用户送出拍立得。
第二用户在直播间请求第一用户进行拍照。
步骤S2,进行排队等待。
根据第二用户发起互动拍摄指令的时间,确定排队顺序。
步骤S3,根据送出的拍立得,确定随机主题和配方。
也即确定第一要求信息。当排到第二用户互动时,第一用户在直播间与第二用户实时沟通,以连麦或者文字的方式,按系统要求的拍照主题和用户要求的拍照数量,第一用户为第二用户拍一张或多张照片(互动拍摄图像)。
步骤S4,对主题要求的情绪表达准确度评分。
也即,根据互动拍摄图像,确定质量评分。AI(Artificial Intelligence,人工智能)算法通过对已有参照照片进行情绪分类建模,为互动拍摄图像与系统要求的主题的契合度评分。AI算法主要分为人脸检测及表情识别:算法将人类面部表情分为七种基本表情,分别为愤怒、厌恶、恐惧、快乐、中性、悲伤、惊喜,通过大量人脸数据训练AI模型;主播拍照后,检测出照片中人脸位置并将其输入表情识别模型,判别照片是否符合系统要求的主题并根据其契合度评分。例如:系统要求拍照主题为“开心”,AI模型将识别且分辨照片中主播情绪是否与主题相符,给该照片评分。服务器根据AI评分结果,给主播发放奖励;主播照片发放到用户的拍立得相册,支持用户保存收藏。
下述为本申请装置实施例,可以用于执行本申请方法实施例。对于本申请装置实施例中未披露的细节,请参照本申请方法实施例。
请参考图23,其示出了本申请一个实施例提供的直播互动装置的框图。该装置具有实现上述观众客户端侧的方法示例的功能,所述功能可以由硬件实现,也可以由硬件执行相应的软件实现。该装置可以是上文介绍的观众终端设备,也可以设置在观众终端设备中。如图23所示,该装置2300可以包括:界面显示模块2310、画面显示模块2320和图像显示模块2330。
所述界面显示模块2310,用于显示第一用户的直播界面,所述直播界面用于展示所述第一用户的直播内容。
所述画面显示模块2320,用于在第二用户的互动拍摄指令被所述第一用户应答的情况下,显示所述第一用户在互动拍摄过程中的直播画面;其中,所述第二用户的互动拍摄指令用于请求所述第一用户拍摄图像。
所述图像显示模块2330,用于显示所述第二用户获得的互动拍摄图像,所述互动拍摄图像是在所述互动拍摄过程中得到的。
在一些实施例中,所述画面显示模块2320,用于显示第一提示信息,所述第一提示信息用于引导所述第一用户进行拍摄。
所述画面显示模块2320,还用于显示所述第一用户在所述互动拍摄过程中,根据所述第一提示信息进行拍摄时的直播画面。
在一些实施例中,所述第一提示信息包括:第一要求信息,所述第一要求信息用于指示所述第一用户拍摄的图像所需满足的要求。
在一些实施例中,如图24所示,所述模块还包括道具显示模块2340和指令发送模块2350。
所述道具显示模块2340,用于显示互动拍摄道具,所述互动拍摄道具用于触发生成所述互动拍摄指令。
所述指令发送模块2350,用于响应于针对所述互动拍摄道具的使用指令,向所述第一用户的客户端发送所述第二用户的互动拍摄指令。
在一些实施例中,如图24所示,所述模块还包括信息显示模块2360。
所述界面显示模块2310,还用于响应于针对所述互动拍摄道具的使用指令,显示拍摄要求设置界面。
所述信息显示模块2360,用于在所述拍摄要求设置界面中,显示由所述第二用户设置的第一要求信息,所述第一要求信息用于指示所述第一用户拍摄的图像所需满足的要求;其中,所述第二用户的互动拍摄指令中包括所述第一要求信息。
在一些实施例中,所述信息显示模块2360,还用于显示排队提示信息,所述排队提示信息用于指示所述第二用户的互动拍摄指令的排队进程;其中,所述排队进程包括以下至少之一:等待人数、预计等待时长、优先排队提示、被优先排队提示。
在一些实施例中,如图24所示,所述模块还包括相册显示模块2370。
所述相册显示模块2370,用于响应于针对所述互动拍摄图像的查看指令,显示互动拍摄相册,所述互动拍摄相册用于保存所述第二用户获得的互动拍摄图像。
在一些实施例中,如图24所示,所述模块还包括画面发送模块2380。
所述画面发送模块2380,用于在第二用户的互动拍摄指令被所述第一用户应答的情况下,通过摄像头采集所述第二用户的视频画面,将所述第二用户的视频画面发送给服务器;其中,所述第一用户在所述互动拍摄过程中的直播画面包括:所述第一用户的视频画面和所述第二用户的视频画面。
在一些实施例中,所述信息显示模块2360,还用于在所述互动拍摄过程中,显示第二提示信息,所述第二提示信息用于引导所述第二用户与所述第一用户进行合拍。
请参考图25,其示出了本申请另一个实施例提供的直播互动装置的框图。该装置具有实现上述主播客户端侧的方法示例的功能,所述功能可以由硬件实现,也可以由硬件执行相应的软件实现。该装置可以是上文介绍的主播终端设备,也可以设置在主播终端设备中。如图25所示,该装置2500可以包括:信息显示模块2510、画面显示模块2520和图像显示模块2530。
所述信息显示模块2510,用于在第一用户进行直播的过程中,显示基于第二用户的互动拍摄指令生成的互动拍摄信息;其中,所述第二用户的互动拍摄指令用于请求所述第一用户拍摄图像;
所述画面显示模块2520,用于响应于针对所述第二用户的互动拍摄指令的应答指令,显示所述第一用户在互动拍摄过程中的直播画面;
所述图像显示模块2530,用于显示发送给所述第二用户的互动拍摄图像,所述互动拍摄图像是在所述互动拍摄过程中得到的。
在一些实施例中,所述画面显示模块2520,用于显示第一提示信息,所述第一提示信息用于引导所述第一用户进行拍摄。
所述画面显示模块2520,还用于显示所述第一用户在所述互动拍摄过程中,根据所述第一提示信息进行拍摄时的直播画面。
在一些实施例中,所述第一提示信息包括:第一要求信息,所述第一要求信息用于指示所述第一用户拍摄的图像所需满足的要求。
在一些实施例中,如图26所示,所述装置还包括评分显示模块2540。
所述评分显示模块,用于显示基于所述第一要求信息对所述互动拍摄图像进行打分,得到的所述互动拍摄图像的质量评分。
在一些实施例中,所述信息显示模块2510,还用于显示所述第一用户获得的奖励信息,所述奖励信息用于指示所述第一用户拍摄所述互动拍摄图像所获得的奖励,所述奖励与所述质量评分有关。
在一些实施例中,所述图像显示模块2530,用于在所述互动拍摄过程中,若所述第一用户位于拍摄范围之内,则根据所述拍摄范围内的所述第一用户的图像,生成所述互动拍摄图像。
所述图像显示模块2530,还用于若所述第一用户不位于所述拍摄范围之内,则将与所述第一用户相关的设定图像确定为所述互动拍摄图像。
在一些实施例中,所述信息显示模块2510,还用于显示多条待执行的互动拍摄信息,其中,所述多条待执行的互动拍摄信息根据优先级进行显示,所述优先级与以下至少之一有关:所述互动拍摄信息对应的互动拍摄指令的生成时刻、所述互动拍摄信息对应的互动拍摄指令的支出资源。
需要说明的是,上述实施例提供的装置,在实现其功能时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将设备的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的装置与方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
请参考图27,其示出了本申请一个实施例提供的终端设备2700的结构框图。该终端设备2700可以是图1所示实施环境中的主播终端设备13,用于实施上述实施例中提供的主播终端设备侧的直播互动方法,还可以是图1所示实施环境中的观众终端设备11,用于实施上述实施例中提供的观众终端设备侧的直播互动方法。具体来讲:
通常,终端设备2700包括有:处理器2701和存储器2702。
处理器2701可以包括一个或多个处理核心,比如4核心处理器、8核心处理器等。处理器2701可以采用DSP(Digital Signal Processing,数字信号处理)、FPGA(Field Programmable Gate Array,现场可编程门阵列)、PLA(Programmable Logic Array,可编程逻辑阵列)中的至少一种硬件形式来实现。处理器2701也可以包括主处理器和协处理器,主处理器是用于对在唤醒状态下的数据进行处理的处理器,也称CPU(Central Processing Unit,中央处理器);协处理器是用于对在待机状态下的数据进行处理的低功耗处理器。在一些实施例中,处理器2701中可以集成有GPU(Graphics Processing Unit,图像处理器),GPU用于负责显示屏所需要显示的内容的渲染和绘制。一些实施例中,处理器2701还可以包括AI处理器,该AI处理器用于处理有关机器学习的计算操作。
存储器2702可以包括一个或多个计算机可读存储介质,该计算机可读存储介质可以是非暂态的。存储器2702还可包括高速随机存取存储器,以及非易失性存储器,比如一个或多个磁盘存储设备、闪存存储设备。在一些实施例中,存储器2702中的非暂态的计算机可读存储介质用于存储计算机程序,所述计算机程序经配置以由一个或者一个以上处理器执行,以实现上述主播终端设备侧的直播互动方法,或上述观众终端设备侧的直播互动方法。
在一些实施例中,终端设备2700还可选包括有:外围设备接口2703和至少一个外围设备。处理器2701、存储器2702和外围设备接口2703之间可以通过总线或信号线相连。各个外围设备可以通过总线、信号线或电路板与外围设备接口2703相连。具体地,外围设备包括:射频电路2704、显示屏2705、音频电路2707和电源2708中的至少一种。
本领域技术人员可以理解,图27中示出的结构并不构成对终端设备2700的限定,可以包括比图示更多或更少的组件,或者组合某些组件,或者采用不同的组件布置。
在示例性实施例中,还提供了一种计算机可读存储介质,所述存储介质中存储有计算机程序,所述计算机程序在被处理器执行时,以实现上述主播终端设备侧的直播互动方法,或上述观众终端设备侧的直播互动方法。
可选地,该计算机可读存储介质可以包括:ROM(Read-Only Memory,只读存储器)、RAM(Random Access Memory,随机存取存储器)、SSD(Solid State Drives,固态硬盘)或光盘等。其中,随机存取存储器可以包括ReRAM(Resistance Random Access Memory,电阻式随机存取存储器)和DRAM(Dynamic Random Access Memory,动态随机存取存储器)。
在示例性实施例中,还提供了一种计算机程序产品,所述计算机程序产品包括计算机程序,所述计算机程序存储在计算机可读存储介质中。终端设备的处理器从所述计算机可读存储介质中读取所述计算机程序,所述处理器执行所述计算机程序,使得所述主播终端设备执行主播终端设备侧的直播互动方法,或使得所述观众终端设备执行观众终端设备侧的直播互动方法。
应当理解的是,在本文中提及的“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。另外,本文中描述的步骤编号,仅示例性示出了步骤间的一种可能的执行先后顺序,在一些其它实施例中,上述步骤也可以不按照编号顺序来执行,如两个不同编号的步骤同时执行,或者两个不同编号的步骤按照与图示相反的顺序执行,本申请实施例对此不作限定。
以上所述仅为本申请的示例性实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (21)

  1. 一种直播互动方法,其特征在于,所述方法包括:
    显示第一用户的直播界面,所述直播界面用于展示所述第一用户的直播内容;
    在第二用户的互动拍摄指令被所述第一用户应答的情况下,显示所述第一用户在互动拍摄过程中的直播画面;其中,所述第二用户的互动拍摄指令用于请求所述第一用户拍摄图像;
    显示所述第二用户获得的互动拍摄图像,所述互动拍摄图像是在所述互动拍摄过程中得到的。
  2. 根据权利要求1所述的方法,其特征在于,所述显示所述第一用户在互动拍摄过程中的直播画面,包括:
    显示第一提示信息,所述第一提示信息用于引导所述第一用户进行拍摄;
    显示所述第一用户在所述互动拍摄过程中,根据所述第一提示信息进行拍摄时的直播画面。
  3. 根据权利要求2所述的方法,其特征在于,所述第一提示信息包括:第一要求信息,所述第一要求信息用于指示所述第一用户拍摄的图像所需满足的要求。
  4. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    显示互动拍摄道具,所述互动拍摄道具用于触发生成所述互动拍摄指令;
    响应于针对所述互动拍摄道具的使用指令,向所述第一用户的客户端发送所述第二用户的互动拍摄指令。
  5. 根据权利要求4所述的方法,其特征在于,所述方法还包括:
    响应于针对所述互动拍摄道具的使用指令,显示拍摄要求设置界面;
    在所述拍摄要求设置界面中,显示由所述第二用户设置的第一要求信息,所述第一要求信息用于指示所述第一用户拍摄的图像所需满足的要求;
    其中,所述第二用户的互动拍摄指令中包括所述第一要求信息。
  6. 根据权利要求4所述的方法,其特征在于,所述向所述第一用户的客户端发送所述第二用户的互动拍摄指令之后,还包括:
    显示排队提示信息,所述排队提示信息用于指示所述第二用户的互动拍摄指令的排队进程;其中,所述排队进程包括以下至少之一:等待人数、预计等待时长、优先排队提示、被优先排队提示。
  7. 根据权利要求1所述的方法,其特征在于,所述显示所述第二用户获得的互动拍摄图像之后,还包括:
    响应于针对所述互动拍摄图像的查看指令,显示互动拍摄相册,所述互动拍摄相册用于保存所述第二用户获得的互动拍摄图像。
  8. 根据权利要求1所述的方法,其特征在于,所述方法还包括:
    在第二用户的互动拍摄指令被所述第一用户应答的情况下,通过摄像头采集所述第二用户的视频画面,将所述第二用户的视频画面发送给服务器;
    其中,所述第一用户在所述互动拍摄过程中的直播画面包括:所述第一用户的视频画面和所述第二用户的视频画面。
  9. 根据权利要求8所述的方法,其特征在于,所述方法还包括:
    在所述互动拍摄过程中,显示第二提示信息,所述第二提示信息用于引导所述第二用户与所述第一用户进行合拍。
  10. 一种直播互动方法,其特征在于,所述方法包括:
    在第一用户进行直播的过程中,显示基于第二用户的互动拍摄指令生成的互动拍摄信息;其中,所述第二用户的互动拍摄指令用于请求所述第一用户拍摄图像;
    响应于针对所述第二用户的互动拍摄指令的应答指令,显示所述第一用户在互动拍摄过程中的直播画面;
    显示发送给所述第二用户的互动拍摄图像,所述互动拍摄图像是在所述互动拍摄过程中得到的。
  11. 根据权利要求10所述的方法,其特征在于,所述显示所述第一用户在互动拍摄过程中的直播画面,包括:
    显示第一提示信息,所述第一提示信息用于引导所述第一用户进行拍摄;
    显示所述第一用户在所述互动拍摄过程中,根据所述第一提示信息进行拍摄时的直播画面。
  12. 根据权利要求11所述的方法,其特征在于,所述第一提示信息包括:第一要求信息,所述第一要求信息用于指示所述第一用户拍摄的图像所需满足的要求。
  13. 根据权利要求12所述的方法,其特征在于,所述方法还包括:
    显示基于所述第一要求信息对所述互动拍摄图像进行打分,得到的所述互动拍摄图像的质量评分。
  14. 根据权利要求13所述的方法,其特征在于,所述方法还包括:
    显示所述第一用户获得的奖励信息,所述奖励信息用于指示所述第一用户拍摄所述互动拍摄图像所获得的奖励,所述奖励与所述质量评分有关。
  15. 根据权利要求10所述的方法,其特征在于,所述方法还包括:
    在所述互动拍摄过程中,若所述第一用户位于拍摄范围之内,则根据所述拍摄范围内的所述第一用户的图像,生成所述互动拍摄图像;
    若所述第一用户不位于所述拍摄范围之内,则将与所述第一用户相关的设定图像确定为所述互动拍摄图像。
  16. 根据权利要求10所述的方法,其特征在于,所述方法还包括:
    显示多条待执行的互动拍摄信息,其中,所述多条待执行的互动拍摄信息根据优先级进行显示,所述优先级与以下至少之一有关:所述互动拍摄信息对应的互动拍摄指令的生成时刻、所述互动拍摄信息对应的互动拍摄指令的支出资源。
  17. 一种直播互动装置,其特征在于,所述装置包括:
    界面显示模块,用于显示第一用户的直播界面,所述直播界面用于展示所述第一用户的直播内容;
    画面显示模块,用于在第二用户的互动拍摄指令被所述第一用户应答的情况下,显示所述第一用户在互动拍摄过程中的直播画面;其中,所述第二用户的互动拍摄指令用于请求所述第一用户拍摄图像;
    图像显示模块,用于显示所述第二用户获得的互动拍摄图像,所述互动拍摄图像是在所述互动拍摄过程中得到的。
  18. 一种直播互动装置,其特征在于,所述装置包括:
    信息显示模块,用于在第一用户进行直播的过程中,显示基于第二用户的互动拍摄指令生成的互动拍摄信息;其中,所述第二用户的互动拍摄指令用于请求所述第一用户拍摄图像;
    画面显示模块,用于响应于针对所述第二用户的互动拍摄指令的应答指令,显示所述第一用户在互动拍摄过程中的直播画面;
    图像显示模块,用于显示发送给所述第二用户的互动拍摄图像,所述互动拍摄图像是在所述互动拍摄过程中得到的。
  19. 一种终端设备,其特征在于,所述终端设备包括处理器和存储器,所述存储器中存储有计算机程序,所述计算机程序由所述处理器加载并执行以实现如权利要求1至9任一项所述的方法,或实现如权利要求10至16任一项所述的方法。
  20. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有计算机程序,所述计算机程序由处理器加载并执行以实现如权利要求1至9任一项所述的方法,或实现如权利要求10至16任一项所述的方法。
  21. 一种计算机程序产品,其特征在于,所述计算机程序产品包括计算机程序,所述计算机程序存储在计算机可读存储介质中,处理器从所述计算机可读存储介质读取并执行所述计算机程序,以实现如权利要求1至9任一项所述的方法,或实现如权利要求10至16任一项所述的方法。
PCT/CN2022/133768 2022-11-23 2022-11-23 直播互动方法、装置、设备、存储介质及程序产品 WO2024108431A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2022/133768 WO2024108431A1 (zh) 2022-11-23 2022-11-23 直播互动方法、装置、设备、存储介质及程序产品
CN202280004685.XA CN116076075A (zh) 2022-11-23 2022-11-23 直播互动方法、装置、设备、存储介质及程序产品

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/133768 WO2024108431A1 (zh) 2022-11-23 2022-11-23 直播互动方法、装置、设备、存储介质及程序产品

Publications (1)

Publication Number Publication Date
WO2024108431A1 true WO2024108431A1 (zh) 2024-05-30

Family

ID=86171836

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/133768 WO2024108431A1 (zh) 2022-11-23 2022-11-23 直播互动方法、装置、设备、存储介质及程序产品

Country Status (2)

Country Link
CN (1) CN116076075A (zh)
WO (1) WO2024108431A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111405343A (zh) * 2020-03-18 2020-07-10 广州华多网络科技有限公司 直播互动方法、装置、电子设备及存储介质
CN113068053A (zh) * 2021-03-15 2021-07-02 北京字跳网络技术有限公司 一种直播间内的交互方法、装置、设备及存储介质
WO2022142944A1 (zh) * 2020-12-28 2022-07-07 北京达佳互联信息技术有限公司 直播互动方法及装置
CN115190365A (zh) * 2022-04-01 2022-10-14 广州方硅信息技术有限公司 直播间的互动处理方法、服务器、电子终端及存储介质
CN115209228A (zh) * 2022-06-30 2022-10-18 广州酷狗计算机科技有限公司 任务互动方法、装置、设备、存储介质及程序产品

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007240887A (ja) * 2006-03-08 2007-09-20 Make Softwear:Kk 自動写真撮影装置及びその方法
CN110213613B (zh) * 2018-08-09 2022-03-08 腾讯科技(深圳)有限公司 图像处理方法、装置及存储介质
US11044535B2 (en) * 2018-08-28 2021-06-22 Gree, Inc. Video distribution system for live distributing video containing animation of character object generated based on motion of distributor user, distribution method, and storage medium storing video distribution program
CN110830811B (zh) * 2019-10-31 2022-01-18 广州酷狗计算机科技有限公司 直播互动方法及装置、系统、终端、存储介质
CN111629223B (zh) * 2020-06-11 2022-09-13 网易(杭州)网络有限公司 视频同步方法及装置、计算机可读存储介质以及电子设备
CN111970533B (zh) * 2020-08-28 2022-11-04 北京达佳互联信息技术有限公司 直播间的互动方法、装置及电子设备
CN112383786B (zh) * 2020-11-03 2023-03-07 广州繁星互娱信息科技有限公司 直播互动方法、装置、系统、终端及存储介质
CN113727125B (zh) * 2021-08-30 2023-03-28 广州方硅信息技术有限公司 直播间的截图方法、装置、系统、介质以及计算机设备

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111405343A (zh) * 2020-03-18 2020-07-10 广州华多网络科技有限公司 直播互动方法、装置、电子设备及存储介质
WO2022142944A1 (zh) * 2020-12-28 2022-07-07 北京达佳互联信息技术有限公司 直播互动方法及装置
CN113068053A (zh) * 2021-03-15 2021-07-02 北京字跳网络技术有限公司 一种直播间内的交互方法、装置、设备及存储介质
CN115190365A (zh) * 2022-04-01 2022-10-14 广州方硅信息技术有限公司 直播间的互动处理方法、服务器、电子终端及存储介质
CN115209228A (zh) * 2022-06-30 2022-10-18 广州酷狗计算机科技有限公司 任务互动方法、装置、设备、存储介质及程序产品

Also Published As

Publication number Publication date
CN116076075A (zh) 2023-05-05

Similar Documents

Publication Publication Date Title
CN108986192B (zh) 用于直播的数据处理方法及装置
WO2021179641A1 (zh) 一种图像拍摄方法、装置、计算机设备和存储介质
KR20180022866A (ko) 스펙테이팅 시스템과 게임 시스템들 통합
CN110677685B (zh) 网络直播显示方法及装置
CN114245221B (zh) 基于直播间的互动方法、装置、电子设备及存储介质
CN111768478B (zh) 一种图像合成方法、装置、存储介质和电子设备
CN114430494B (zh) 界面显示方法、装置、设备及存储介质
CN112188223B (zh) 直播视频播放方法、装置、设备及介质
CN110677610A (zh) 一种视频流控制方法、视频流控制装置及电子设备
CN109670385A (zh) 一种应用程序中表情更新的方法及装置
CN115239916A (zh) 虚拟形象的互动方法、装置和设备
WO2024108431A1 (zh) 直播互动方法、装置、设备、存储介质及程序产品
WO2023020509A1 (zh) 一种观看直播的用户信息处理方法、装置及设备
WO2023082737A1 (zh) 一种数据处理方法、装置、设备以及可读存储介质
JP6385543B1 (ja) サーバ装置、配信システム、配信方法及びプログラム
JP2019161474A (ja) 遊戯画像撮影システム
CN112235516B (zh) 视频生成方法、装置、服务器及存储介质
JP6491808B1 (ja) ゲームプログラムおよびゲーム装置
CN115222406A (zh) 基于业务服务账号的资源发放方法以及相关设备
JP7215628B1 (ja) 遊戯画像撮影システム
TW201108151A (en) Instant communication control system and its control method
Pettersson et al. A perceptual evaluation of social interaction with emotes and real-time facial motion capture
WO2023130715A1 (zh) 一种数据处理方法、装置、电子设备、计算机可读存储介质及计算机程序产品
CN112752159B (zh) 一种互动方法和相关装置
JP6583931B2 (ja) ゲームプログラムおよびゲーム装置