CN114760520A - Interaction method, apparatus, device and storage medium for short-video shooting during live streaming - Google Patents

Interaction method, apparatus, device and storage medium for short-video shooting during live streaming

Info

Publication number
CN114760520A
CN114760520A
Authority
CN
China
Prior art keywords
small
video shooting
live
audio
small video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210415272.2A
Other languages
Chinese (zh)
Inventor
许英俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Cubesili Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Cubesili Information Technology Co Ltd filed Critical Guangzhou Cubesili Information Technology Co Ltd
Priority to CN202210415272.2A priority Critical patent/CN114760520A/en
Publication of CN114760520A publication Critical patent/CN114760520A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23424Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Abstract

The present application relates to the technical field of webcasting and provides an interaction method, apparatus, computer device and storage medium for shooting short videos during a live broadcast. The method comprises the following steps: the anchor client, in response to a short-video shooting start instruction, creates a short-video shooting area on the live interface, where the short-video shooting area is used for playing first audio/video stream data captured in that area, the live interface is used for playing second audio/video stream data, and the second audio/video stream data comprises the first audio/video stream data and live interaction data; the anchor client, in response to a short-video shooting end instruction, sends the first audio/video stream data to the server; and the server receives the first audio/video stream data and generates a first short video from it. Short videos can thus be shot during a live broadcast while also bringing additional traffic to the anchor.

Description

Interaction method, apparatus, device and storage medium for short-video shooting during live streaming
Technical Field
Embodiments of the present application relate to the technical field of webcasting, and in particular to an interaction method and apparatus, a computer device, and a storage medium for shooting short videos during a live broadcast.
Background
Webcasting refers to the technology by which an anchor shares a live audio/video stream with viewers over the network through a live-streaming platform. As a new online business, webcasting embodies the openness and sharing of the internet and gives ordinary people a chance to display their talents online. While the anchor performs live, viewers can send virtual gifts in the live room, and the anchor receives a share of the gift revenue, so that the anchor can earn an income working from home. This creates a convenient employment channel, especially for people in remote areas or those who cannot leave home to work, and thereby promotes employment.
Currently, a user can use a terminal device either to stream live or to shoot short videos. Specifically, the user installs a live-streaming APP on the terminal device; the APP contains a short-video page and a live page. Tapping the short-video entry button opens the short-video page for shooting short videos, and tapping the live entry button opens the live page to start broadcasting.
However, a user shooting a short video on the short-video page may want to switch to live streaming, or a user broadcasting on the live page may want to switch to shooting a short video. In either case the user must exit one page and enter the other, which degrades the user experience.
Disclosure of Invention
Embodiments of the present application provide an interaction method and apparatus, a computer device, and a storage medium for shooting short videos during a live broadcast, which can solve the technical problem of inefficient switching between live streaming and short-video shooting. The technical solution is as follows:
In a first aspect, an embodiment of the present application provides an interaction method for shooting short videos during a live broadcast, comprising the steps of:
the anchor client, in response to a short-video shooting start instruction, creating a short-video shooting area on a live interface, wherein the short-video shooting area is used for playing first audio/video stream data captured in the short-video shooting area, the live interface is used for playing second audio/video stream data, and the second audio/video stream data comprises the first audio/video stream data and live interaction data;
the anchor client, in response to a short-video shooting end instruction, sending the first audio/video stream data to the server; and
the server receiving the first audio/video stream data and generating a first short video from the first audio/video stream data.
In a second aspect, an embodiment of the present application provides an interaction apparatus for shooting short videos during a live broadcast, comprising:
an area creation module, used by the anchor client to create, in response to a short-video shooting start instruction, a short-video shooting area on a live interface, wherein the short-video shooting area is used for playing first audio/video stream data captured in the short-video shooting area, the live interface is used for playing second audio/video stream data, and the second audio/video stream data comprises the first audio/video stream data and live interaction data;
a data sending module, used by the anchor client to send, in response to a short-video shooting end instruction, the first audio/video stream data to the server; and
a short-video generation module, used by the server to receive the first audio/video stream data and generate a first short video from the first audio/video stream data.
In a third aspect, embodiments of the present application provide a computer device comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method according to the first aspect.
According to the method and device of the present application, the anchor client, in response to a short-video shooting start instruction, creates a short-video shooting area on the live interface; the short-video shooting area is used for playing first audio/video stream data captured in the short-video shooting area; the live interface is used for playing second audio/video stream data, which comprises the first audio/video stream data and live interaction data; the anchor client, in response to a short-video shooting end instruction, sends the first audio/video stream data to the server; and the server receives the first audio/video stream data and generates a first short video from it. By launching the short-video shooting interaction during a live broadcast, short videos can be shot without leaving the broadcast, which improves the efficiency of switching between live streaming and short-video shooting. At the same time, the anchor can shoot short videos while streaming, and the anchor client sends them to the server for viewers to watch, which makes the live interaction content more engaging, brings traffic to the anchor, and improves the live viewing rate and viewer retention.
For a better understanding and implementation, the technical solutions of the present application are described in detail below with reference to the accompanying drawings.
Drawings
Fig. 1 is a schematic view of an application scenario of the interaction method for shooting short videos during a live broadcast provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of the interaction method for shooting short videos during a live broadcast according to an embodiment of the present application;
fig. 3 is a schematic flowchart of steps S11-S12 of the interaction method for shooting short videos during a live broadcast provided in an embodiment of the present application;
fig. 4 is a schematic diagram of the display of a short-video shooting area on a live interface provided in an embodiment of the present application;
fig. 5 is a schematic flowchart of steps S31-S32 of the interaction method for shooting short videos during a live broadcast provided in an embodiment of the present application;
fig. 6 is a schematic diagram of the display of a synthesized second short video according to an embodiment of the present application;
fig. 7 is another schematic diagram of the display of a short-video shooting area on a live interface according to an embodiment of the present application;
fig. 8 is a schematic flowchart of steps S101-S103 of the interaction method for shooting short videos during a live broadcast provided in an embodiment of the present application;
fig. 9 is a schematic structural diagram of an interaction apparatus for shooting short videos during a live broadcast according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of the interaction method for shooting short videos during a live broadcast provided in an embodiment of the present application. The application scenario includes an anchor client 101, a server 102, and a viewer client 103; the anchor client 101 and the viewer client 103 interact with each other through the server 102.
The client proposed in the embodiment of the present application includes the anchor client 101 and the viewer client 103.
It should be noted that the concept of a "client" can be understood in many ways in the prior art, for example as an application program installed on a computer device, or as the hardware device that corresponds to a server.
In the embodiments of the present application, the term "client" refers to the hardware device corresponding to the server, and more specifically to a computer device such as a smartphone, a smart interactive tablet, or a personal computer.
When the client is a mobile device such as a smartphone or a smart interactive tablet, the user can install a matching mobile application on it, and can likewise access a Web application from it.
When the client is a non-mobile device such as a personal computer (PC), the user can install a matching PC application on it, and can likewise access a Web application from it.
A mobile application is an application program that can be installed on a mobile device, a PC application is one that can be installed on a non-mobile device, and a Web application is one that is accessed through a browser.
Specifically, a Web application may come in a mobile version and a PC version depending on the client type; the two versions may differ in page layout and in the server support available to them.
In the embodiments of the present application, the live applications provided to users fall into mobile live applications, PC live applications, and Web live applications, and a user can choose how to participate in the webcast according to the type of client used.
Depending on the identity of the user operating the client, the present application divides clients into the anchor client 101 and the viewer client 103. It should be noted that this division is based solely on user identity; in practice, the same client may perform the functions of the viewer client 103 and of the anchor client 101 at different times. That is, the same client acts as the viewer client 103 when its user watches a webcast and as the anchor client 101 when its user publishes live video.
The anchor client 101 is the end that sends the webcast audio/video stream, and is typically the client used by the anchor (i.e., the live-streaming anchor user).
The viewer client 103 is the end that receives and views the webcast video, and is typically the client used by a viewer watching the stream (i.e., a live viewer user).
The hardware underlying both the anchor client 101 and the viewer client 103 is essentially a computer device; specifically, as shown in fig. 1, it may be a smartphone, a smart interactive tablet, a personal computer, or a similar device. Both the anchor client 101 and the viewer client 103 may access the internet through known network access means to establish a data communication link with the server 102.
The server 102, acting as a business server, may in turn connect with related audio data servers, video streaming servers, and other supporting servers to form a logically associated server cluster that serves the related terminal devices, such as the anchor client 101 and the viewer client 103 shown in fig. 1.
In the embodiment of the present application, the anchor client 101 and the viewer client 103 may join the same live room (i.e., a live channel). The live room is a chat room implemented by means of internet technology and generally has audio/video broadcast control functions. The anchor user broadcasts in the live room through the anchor client 101, and a viewer using the viewer client 103 can log in to the server 102 and enter the live room to watch.
In the live room, interaction between the anchor and the viewers can be realized through known online interaction modes such as voice, video, and text. Typically, the anchor user performs for the viewers in the form of an audio/video stream, and economic transactions may also take place during the interaction. Of course, the application form of the live room is not limited to online entertainment and can be extended to other relevant scenarios, such as user-pairing interaction, video conferencing, product recommendation and sales, and any other scenario requiring similar interaction.
Specifically, the process by which a viewer watches a live broadcast is as follows: the viewer clicks a live application (e.g., YY) installed on the viewer client 103 and chooses to enter a live room; the viewer client 103 is then triggered to load the live-room interface for the viewer. The live-room interface contains a plurality of interactive components, through which the viewer can watch the broadcast and take part in various online interactions.
In embodiments of the present application, the anchor interacts with other anchors or with viewers through various interactive gameplay modes. However, because the live interaction content generated by these modes is of limited interest, it is difficult for the anchor to attract traffic through interaction or to improve the live viewing rate and viewer retention, which to some extent reduces the anchor's motivation to broadcast.
Therefore, embodiments of the present application provide an interaction method for shooting short videos during a live broadcast, with the client and the server as execution bodies.
Referring to fig. 2, fig. 2 is a schematic flowchart of the interaction method for shooting short videos during a live broadcast according to an embodiment of the present application, the method comprising the following steps:
s10: the anchor client responds to a small video shooting starting instruction and creates a small video shooting area on a live interface; the small video shooting area is used for playing first audio and video stream data shot in the small video shooting area; the live broadcast interface is used for playing second audio and video stream data; the second audio and video stream data comprises the first audio and video stream data and live interactive data.
In the embodiment of the present application, before short-video shooting starts during a broadcast, the anchor user enters the live application through the anchor client and starts broadcasting by interacting with the start-broadcast control on the application interface. The anchor client then displays the live interface, which includes a short-video shooting control under its gameplay controls. In response to the anchor user triggering the short-video shooting control, the anchor client generates a short-video shooting start request carrying the anchor identifier and sends it to the server. The server, in response to the start request sent by the anchor client, parses it to obtain the anchor identifier, generates a short-video shooting start instruction according to the anchor identifier, and sends the instruction back to the anchor client.
In response to the short-video shooting start instruction, the anchor client creates a short-video shooting area on the live interface corresponding to the anchor identifier. The shooting area and the live interface share the same camera, so at this point the anchor's shooting area and the live interface show the same shot, and two identical portraits of the anchor are visible. The first audio/video stream data is the anchor's voice and video picture captured by the anchor client; the second audio/video stream data differs from the first in that it additionally includes live interaction data, such as bullet-screen messages, comments, and gift-sending data from viewers in the live room created by the anchor.
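The request/instruction handshake described above can be sketched as follows. This is a hypothetical illustration only: the class names, message fields, and data structures (`AnchorClient`, `Server`, `anchor_id`, the dict-based messages) are all invented for clarity and are not specified by the patent.

```python
# Hypothetical sketch of the short-video shooting start handshake (S10).
# All names and message shapes are assumptions for illustration.

class Server:
    def handle_start_request(self, request: dict) -> dict:
        # Parse the start request to recover the anchor identifier, then
        # build the start instruction returned to the anchor client.
        anchor_id = request["anchor_id"]
        return {"type": "short_video_shoot_start", "anchor_id": anchor_id}

class AnchorClient:
    def __init__(self, anchor_id: str, server: Server):
        self.anchor_id = anchor_id
        self.server = server
        self.shooting_area = None

    def on_shoot_button_tapped(self):
        # Triggered when the anchor taps the short-video shooting control.
        request = {"type": "short_video_shoot_start_request",
                   "anchor_id": self.anchor_id}
        instruction = self.server.handle_start_request(request)
        self.on_start_instruction(instruction)

    def on_start_instruction(self, instruction: dict):
        # Create the shooting area on the live interface; it shares the
        # same camera as the live interface, so both show the same feed.
        self.shooting_area = {"anchor_id": instruction["anchor_id"],
                              "camera": "shared_with_live_interface"}
```

In this sketch the server is called synchronously for brevity; in a real client-server deployment the request and instruction would travel over the network asynchronously.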
S20: and the anchor client responds to a small video shooting ending instruction and sends the first audio and video stream data to the server.
In the embodiment of the application, after the anchor finishes the small video shooting, the anchor sends a small video shooting ending instruction to the anchor client. And the anchor client responds to the small video shooting ending instruction and sends the first audio and video stream data to the server.
S30: the server receives the first audio and video stream data; and generating a first small video according to the first audio and video stream data.
In this embodiment of the present application, the server receives the first audio/video stream data and generates from it a first short video with a preset aspect ratio, for example 16:9. The server publishes the first short video on the live platform for viewers to watch; viewers can find it by visiting the "square" tab page in the live client, and can like, comment on, and forward it. The server can sort the generated first short videos by like count from high to low, thereby increasing the exposure of the anchor corresponding to each first short video and bringing traffic to the anchor.
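A minimal sketch of these two server-side steps, under stated assumptions: the video record structure, field names, and functions below (`make_short_video`, `rank_by_likes`) are invented for illustration; the patent only specifies a preset aspect ratio (e.g., 16:9) and sorting by like count.

```python
# Illustrative sketch of S30 on the server: wrap the received stream data
# in a short-video record with a preset aspect ratio, then rank published
# videos by like count (high to low) to increase anchor exposure.
# All structures are assumptions, not the patented implementation.

PRESET_ASPECT = (16, 9)  # example preset width:height ratio from the text

def make_short_video(stream_data: bytes, anchor_id: str) -> dict:
    # Build the first short video from the first audio/video stream data.
    return {"anchor_id": anchor_id, "data": stream_data,
            "aspect": PRESET_ASPECT, "likes": 0}

def rank_by_likes(videos: list) -> list:
    # Sort from the highest like count to the lowest, as described.
    return sorted(videos, key=lambda v: v["likes"], reverse=True)
```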
In the present application, the anchor client, in response to a short-video shooting start instruction, creates a short-video shooting area on the live interface; the short-video shooting area is used for playing first audio/video stream data captured in the short-video shooting area; the live interface is used for playing second audio/video stream data, which comprises the first audio/video stream data and live interaction data; the anchor client, in response to a short-video shooting end instruction, sends the first audio/video stream data to the server; and the server receives the first audio/video stream data and generates a first short video from it. By launching the short-video shooting interaction during a live broadcast, short videos can be shot without leaving the broadcast, improving the efficiency of switching between live streaming and short-video shooting. At the same time, the anchor can shoot short videos while streaming, and the anchor client sends them to the server for viewers to watch, which makes the live interaction content more engaging, brings traffic to the anchor, and improves the live viewing rate and viewer retention.
In an alternative embodiment, referring to fig. 3, step S10 includes steps S11 to S12, as follows:
S11: the server, in response to a short-video shooting interaction start instruction, obtains a plurality of anchor identifiers and establishes a mic-link (co-hosting) session connection between the anchor clients corresponding to the anchor identifiers.
In the embodiment of the present application, before starting mic-linked video shooting interaction, the anchor user enters the live application through the anchor client and starts broadcasting by interacting with the start-broadcast control on the application interface. The anchor client then displays the live interface, whose gameplay controls include a plurality of video interaction controls, such as a promotion-video interaction control. In response to the anchor user triggering the promotion-video interaction control, the anchor client obtains the short-video shooting interaction identifier corresponding to the triggered control, generates a short-video shooting interaction start request according to the short-video shooting interaction identifier and the anchor identifier, and sends the request to the server.
The server, in response to the short-video shooting interaction start requests sent by anchor clients, parses each request to obtain the short-video shooting interaction identifier and the anchor identifier, selects at least two anchor clients whose start requests contain the same short-video shooting interaction identifier, generates a short-video shooting interaction start instruction according to the anchor identifiers corresponding to those anchor clients, and sends the start instruction.
Specifically, the server may randomly select anchors to start the video shooting interaction gameplay through random matching, and establish the mic-link session connection for the corresponding anchor clients. Alternatively, an anchor may start the gameplay in friend mode: the anchor client first obtains the anchor identifier and short-video shooting interaction identifier of the mic-link anchor (who is a friend of the current anchor) selected by the current anchor, generates a short-video shooting interaction start request from them, and sends the request to the server. The server, in response to the request, obtains the anchor identifier and the short-video shooting interaction identifier and then sends a mic-link broadcast request to the corresponding anchor client. The mic-link broadcast request includes the anchor identifier requesting the mic-link and the short-video shooting interaction identifier, so that the invited anchor can determine which anchor is inviting the mic-link and which interaction gameplay is proposed. After the server receives the mic-link confirmation sent by the corresponding anchor client, it sends the short-video shooting interaction start instruction.
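The two session-setup paths described above (random matching, and friend-mode invitation with confirmation) can be sketched as follows. The function names and message shapes are hypothetical; the patent does not prescribe them.

```python
# Sketch of the two ways S11 describes for starting the interaction:
# random matching, or inviting a specific friend anchor who must confirm
# before the start instruction is issued. Names are assumptions.
import random

def random_match(waiting_anchors: list) -> tuple:
    # The server randomly pairs two anchors who requested this gameplay.
    pair = random.sample(waiting_anchors, 2)
    return tuple(pair)

def friend_invite(inviter: str, invitee: str, confirmed: bool):
    # The start instruction is only issued after the invited anchor
    # confirms the mic-link; otherwise no instruction is sent.
    if not confirmed:
        return None
    return {"type": "short_video_interaction_start",
            "anchors": [inviter, invitee]}
```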
After the server establishes the mic-link session connection between the anchor clients corresponding to the anchor identifiers, the clients in the live rooms push the captured audio/video stream data to the server. Specifically, after receiving the audio/video stream data captured by the first anchor client and the second anchor client respectively, the server mixes the streams and sends the mixed audio/video stream data to the first anchor client, the second anchor client, and the viewer clients that have joined the live rooms, so that those viewers can watch the real-time broadcast of the first anchor and the second anchor. The viewer clients that have joined the live rooms include the viewer clients in the live room of the first anchor client and those in the live room of the second anchor client.
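The mix-and-fan-out step can be illustrated with the minimal sketch below. It is an assumption-laden simplification: real mixing combines decoded audio/video frames, whereas here a stream is just a dict and "mixing" only records the source anchors.

```python
# Minimal sketch of the server-side mixing/fan-out after the mic-link is
# established: mix the two anchors' streams and forward the result to
# both anchors and to every viewer in either live room. All structures
# are invented for illustration.

def mix_and_fanout(stream_a: dict, stream_b: dict, viewers: list) -> dict:
    # Combine the two anchor streams into one mixed stream (stand-in for
    # real audio/video mixing).
    mixed = {"type": "mixed",
             "sources": [stream_a["anchor_id"], stream_b["anchor_id"]]}
    # Recipients: both anchor clients plus the viewer clients of both rooms.
    recipients = [stream_a["anchor_id"], stream_b["anchor_id"]] + viewers
    return {"stream": mixed, "recipients": recipients}
```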
S12: each anchor client responds to a small video shooting start instruction and creates a small video shooting area on its live broadcast interface; the small video shooting area is used for playing the first audio and video stream data shot in the small video shooting area; the live broadcast interface is used for playing second audio and video stream data; the second audio and video stream data comprises the first audio and video stream data and live broadcast interactive data corresponding to each anchor identifier.
Referring to fig. 4, in an embodiment, the small video shooting start instruction is sent by the server immediately after the microphone connection is established; that is, the small video shooting interactive play mode starts as soon as the co-stream is set up. Specifically, the anchor client responds to the instruction by creating a small video shooting area on the live interface corresponding to its anchor identifier. The small video shooting area and the live interface share the same camera, so at this point both show the same camera feed, and two identical portraits of the anchor are visible. The live interface is displayed full screen, and the small video shooting area is overlaid on it.
The anchor can record a small video in the small video shooting area. After the anchor finishes shooting, the anchor client obtains the audio and video stream data corresponding to the anchor identifier captured in the small video shooting area and sends it to the server. Specifically, the anchor client center-crops the captured video to the size of a target area with an aspect ratio of 8:9, uploads it to the server, and also transmits the anchor identifiers of both parties to the co-streaming session.
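The center crop to a target aspect ratio can be computed as below; a small sketch assuming the 8:9 ratio mentioned above and integer pixel coordinates:

```python
def center_crop_rect(width, height, target_w=8, target_h=9):
    """Return (x, y, crop_w, crop_h): the largest centered rectangle with
    aspect ratio target_w:target_h that fits inside a width x height frame."""
    if width * target_h > height * target_w:
        # Frame is wider than the target ratio: height constrains the crop.
        crop_h = height
        crop_w = height * target_w // target_h
    else:
        # Frame is taller (or equal): width constrains the crop.
        crop_w = width
        crop_h = width * target_h // target_w
    x = (width - crop_w) // 2
    y = (height - crop_h) // 2
    return x, y, crop_w, crop_h
```

For a 1080×1920 portrait frame this yields a 1080×1215 crop centered vertically, which keeps the anchor in the middle of the shot.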
In an optional embodiment, referring to fig. 5, the live small and medium video shooting interaction method further includes steps S31 to S32, as follows:
S31: each anchor client responds to a small video shooting end instruction and sends the first audio and video stream data corresponding to its anchor identifier to the server;
S32: the server acquires the first audio and video stream data corresponding to each anchor identifier, and synthesizes the first audio and video stream data into a second small video.
In this embodiment of the application, referring to fig. 6, after each anchor client that established the co-streaming session finishes small video shooting, it responds to a small video shooting end instruction by sending the first audio/video stream data corresponding to its anchor identifier to the server, and the server obtains the first audio/video stream data corresponding to each anchor identifier. For example, if a first anchor client and a second anchor client perform the small video shooting interactive play mode, the server obtains the first audio/video stream data corresponding to the first anchor identifier shot by the first anchor client in its small video shooting area (the small video shot by the first anchor), and the first audio/video stream data corresponding to the second anchor identifier shot by the second anchor client in its small video shooting area (the small video shot by the second anchor). The server then composes the two small videos side by side, left and right, to obtain a second small video. After the first and second anchor clients finish shooting, each center-crops its small video to the target area size before uploading it to the server. For example, if both target areas have an aspect ratio of 8:9, the composed second small video has an aspect ratio of 16:9.
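The left-right composition step, two 8:9 crops becoming one 16:9 frame, can be sketched per frame as follows, representing a frame as a list of pixel rows purely for illustration (a real implementation would operate on decoded video frames):

```python
def compose_side_by_side(left, right):
    """Place two equal-height frames left and right in one output frame.
    Each frame is a list of pixel rows; joining two 8:9 frames row by row
    yields one 16:9 frame."""
    if len(left) != len(right):
        raise ValueError("frames must have the same height")
    return [lrow + rrow for lrow, rrow in zip(left, right)]
```

Because the widths add while the height stays fixed, the 8:9 + 8:9 → 16:9 arithmetic falls out of the row concatenation directly.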
The server publishes the composed second small video on the live broadcast platform for audiences to watch; an audience member can view it by opening the discovery ("square") tab page in the live broadcast client, and can like, comment on, forward it, and so on. The server can sort the composed second small videos from high to low by like count, increasing the exposure of the anchors appearing in each second small video and directing traffic to them.
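The like-count ranking described above is a straightforward descending sort; a minimal sketch, assuming each published second small video is represented as a dict with a hypothetical `likes` field:

```python
def rank_by_likes(videos):
    """Sort composed second small videos by like count, high to low, so the
    most-liked works get the most exposure on the discovery tab."""
    return sorted(videos, key=lambda v: v["likes"], reverse=True)
```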
By starting the small video shooting interactive play mode during co-streaming live broadcast, the co-streaming anchor clients can each shoot and record their own small videos, and the server composes them into a single small video for audiences to watch. This makes the live interactive content more engaging, directs traffic to the anchors, and improves the live viewing rate and audience retention.
In an alternative embodiment, referring to fig. 7, step S10 includes steps S101 to S103, which are as follows:
S101: the anchor client responds to a small video shooting start instruction and calls a page switching container to display the live broadcast interface; the page switching container is pre-constructed by the anchor client, and both the live broadcast interface and the small video shooting area are stored in the page switching container.
S102: the anchor client zooms the live broadcast interface into a floating window with a preset size according to a preset zooming proportion;
s103: the anchor client calls the page switching container to display a small video shooting area; wherein the small video shooting area and the floating window are not overlapped with each other.
In this embodiment of the application, referring to fig. 8, the page switching container is a page view component through which pages can be loaded; loading a page amounts to parsing the page data. The preset scaling ratio is only limited to a value less than 1 and is not otherwise restricted. Optionally, considering ease of operation and aesthetics, the preset scaling ratio may be 0.2. The floating window is displayed at an edge of the live interface by default, such as the upper-right corner. After the interface is shrunk to the floating window, the live broadcast logic is not interrupted: it keeps running in the floating window, and the live content shown there is updated in real time accordingly. Displaying the live interface and the small video shooting area through the page switching container shortens the time required to load them and improves loading efficiency.
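The floating-window geometry, scaling the live interface by a preset ratio below 1 and docking it at an edge such as the upper-right corner, can be sketched as follows; the `margin` parameter is an assumption for illustration, not a value from the patent:

```python
def floating_window_rect(screen_w, screen_h, scale=0.2, margin=16):
    """Return (x, y, w, h) for a floating window scaled from the full-screen
    live interface by `scale` and docked at the upper-right corner with a
    small margin from the edges."""
    win_w = int(screen_w * scale)
    win_h = int(screen_h * scale)
    x = screen_w - win_w - margin   # flush to the right edge, minus margin
    y = margin                      # near the top edge
    return x, y, win_w, win_h
```

On a 1080×1920 portrait screen with the 0.2 ratio mentioned above, this produces a 216×384 window near the top-right corner, leaving the rest of the screen free for the small video shooting area.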
In an alternative embodiment, step S20 is followed by step S21, which is as follows:
s21: and canceling the display of the small video shooting area, and displaying the live broadcast interface in a full screen mode.
In this embodiment of the application, after the anchor finishes shooting the small video, the anchor client responds to the small video shooting end instruction by canceling the display of the small video shooting area and displaying the live interface full screen, so that the anchor can quickly resume live broadcasting.
In an optional embodiment, the live small and medium video shooting interaction method further includes steps S41 to S42, as follows:
s41: the anchor client acquires first page configuration data of the live interface and stores the first page configuration data in a first public attribute library corresponding to the live interface; the first page configuration data comprise first functional attribute data applied to the live interface, and the first functional attribute data are generated based on a functional control triggering the live interface;
s42: the anchor client acquires second page configuration data of the small video shooting area, and stores the second page configuration data in a second public attribute library corresponding to the small video shooting area; the second page configuration data comprises second functional attribute data applied to the small video shooting area, and the second functional attribute data is generated based on a functional control triggering the small video shooting area;
The step of canceling the display of the small video shooting area and displaying the live interface in a full screen manner includes the step S211:
s211: and acquiring the first page configuration data from the first public attribute library, and recovering the live broadcast interface according to the first page configuration data.
In this embodiment, the page configuration data includes functional attribute data such as music, beautification, special effects, expressions, stickers, and gestures applied to the small video shooting area or the live interface. When the anchor adds a beautification effect and a special effect in the live interface, the corresponding page configuration data is stored in public attribute library A for the live interface. When the anchor then starts the small video shooting play mode and adds background music and expressions in the small video shooting area, the corresponding page configuration data is stored in public attribute library B for the small video shooting area. After the anchor finishes shooting, the live interface changes from the floating window back to full-screen display; the corresponding page configuration data is obtained from public attribute library A, and the live interface is restored from it. In this way, the beautification and special effects added in the small video shooting area do not affect the background music and expression effects of the live interface, the functional attribute data (music, beautification, special effects, expressions, stickers, gestures, and so on) added in either surface is never lost, and the user experience is improved.
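The two public attribute libraries can be modeled as independent key-value stores, one per surface; a minimal sketch in which the class and method names are illustrative, not from the patent:

```python
class PageAttributeStore:
    """One store per surface, so effects applied while shooting a small
    video never overwrite those of the live interface (and vice versa)."""

    def __init__(self):
        self._attrs = {}

    def save(self, name, value):
        self._attrs[name] = value

    def snapshot(self):
        # Copy so callers cannot mutate the stored configuration.
        return dict(self._attrs)

live_store = PageAttributeStore()    # "public attribute library A"
shoot_store = PageAttributeStore()   # "public attribute library B"

live_store.save("beauty", "smooth_skin")       # added on the live interface
live_store.save("special_effect", "sparkle")
shoot_store.save("music", "bgm_01")            # added in the shooting area

def restore_live_interface():
    """On returning to full screen, rebuild the live interface only from
    library A; library B's entries are ignored."""
    return live_store.snapshot()
```

Keeping the stores separate is the whole point: restoring from library A alone guarantees the shooting-area effects cannot leak into, or blank out, the live interface configuration.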
In an optional embodiment, after the step of the anchor client responding to the small video shooting start instruction, the method includes step S110, which is specifically as follows:
S110: the anchor client acquires countdown control data and displays a countdown control at a display position in the live interface according to the countdown control data; the countdown control is used for displaying the remaining preparation time before the anchor starts small video shooting.
In this embodiment of the application, the countdown control data at least includes display data of the countdown control and functional data of the countdown control. The display data is used to determine the display style, display size, display position, and so on of the countdown control, and the functional data is used to realize its countdown function. The countdown control displays the remaining preparation time before the anchor starts small video shooting.
Specifically, the preparation duration may be set to 10 s, and the small video shooting start instruction may be sent by the server when the remaining preparation time after the small video shooting control is triggered reaches 0 s, or when the remaining preparation time after the small video shooting interactive play mode is started reaches 0 s. For example, after the co-streaming session is established between a first anchor client and a second anchor client, the server by default sends the small video shooting start instruction when the remaining preparation time reaches 0 s. Rather than sending the instruction immediately, the anchors are given a certain preparation time to adjust to their best shooting state, which improves the quality of the small video and the viewing experience of the audiences who later watch it.
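The preparation countdown can be sketched as a generator that ticks down from the preparation duration, with the start instruction sent only when the remaining time reaches 0; a minimal sketch under that assumption (in a real client a timer would drive the ticks once per second):

```python
def countdown_ticks(prepare_seconds=10):
    """Yield the remaining preparation time second by second, ending at 0,
    the point at which the shooting start instruction fires."""
    for remaining in range(prepare_seconds, -1, -1):
        yield remaining

def should_send_start_instruction(remaining: int) -> bool:
    """The server (or client) sends the small video shooting start
    instruction exactly when the countdown reaches 0."""
    return remaining == 0
```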
In an optional embodiment, the live small and medium video shooting interaction method further includes step S50, as follows:
s50: and the anchor client responds to the small video shooting starting instruction, and stops receiving the audio data of the live broadcasting room after the small video shooting area is created on the live broadcasting interface.
In this embodiment of the application, in the co-streaming live scenario, the live room audio can be muted while an anchor is shooting a small video in the small video shooting area. This prevents the other anchor's voice from being recorded into the video and avoids interference with the shooting, ensuring the quality of the small video and improving the viewing experience of the audiences who later watch it.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an interactive device for live medium-small video shooting according to the present application. The live middle and small video shooting interaction device 6 provided by the embodiment of the application comprises:
the area creating module 61 is used for the anchor client to respond to the small video shooting starting instruction and create a small video shooting area on a live broadcast interface; the small video shooting area is used for playing first audio and video stream data shot in the small video shooting area; the live broadcast interface is used for playing second audio and video stream data; the second audio and video stream data comprises the first audio and video stream data and live broadcast interactive data;
The data sending module 62 is configured to send the first audio and video stream data to the server by the anchor client in response to a small video shooting end instruction;
a small video generating module 63, configured to receive the first audio and video stream data by the server; and generating a first small video according to the first audio and video stream data.
It should be noted that, when the live small and medium video shooting interaction apparatus provided in the above embodiment executes the live small and medium video shooting interaction method, the division into the above functional modules is only used as an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus and the method provided by the above embodiments belong to the same concept; the implementation process is detailed in the method embodiments and is not repeated here.
Please refer to fig. 10, which is a schematic structural diagram of a computer device provided in the present application. As shown in fig. 10, the computer device 21 may include: a processor 210, a memory 211, and a computer program 212 stored in the memory 211 and operable on the processor 210, such as a live small and medium video shooting interaction program; the steps in the above embodiments are implemented when the processor 210 executes the computer program 212.
The processor 210 may include one or more processing cores. The processor 210 is connected to various parts of the computer device 21 by various interfaces and lines, and performs the various functions of the computer device 21 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 211 and calling the data in the memory 211. Optionally, the processor 210 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), or Programmable Logic Array (PLA). The processor 210 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing the content to be displayed by the touch display screen; and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 210 and may instead be implemented on a separate chip.
The memory 211 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 211 includes a non-transitory computer-readable medium. The memory 211 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 211 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing the operating system, instructions for at least one function (such as touch instructions), instructions for implementing the above method embodiments, and the like; the data storage area may store the data referred to in the above method embodiments. Optionally, the memory 211 may be at least one storage device located remotely from the processor 210.
The embodiment of the present application further provides a computer storage medium, where the computer storage medium may store a plurality of instructions, where the instructions are suitable for being loaded by a processor and executing the method steps of the foregoing embodiment, and a specific execution process may refer to specific descriptions of the foregoing embodiment, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the description of each embodiment has its own emphasis, and reference may be made to the related description of other embodiments for parts that are not described or recited in any embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, a module or a unit may be divided into only one logical function, and may be implemented in other ways, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium and used by a processor to implement the steps of the above-described embodiments of the method. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc.
The present invention is not limited to the above-described embodiments, and various modifications and variations of the present invention are intended to be included within the scope of the claims and the equivalent technology of the present invention if they do not depart from the spirit and scope of the present invention.

Claims (11)

1. A live small and medium video shooting interaction method is characterized by comprising the following steps:
the anchor client responds to a small video shooting starting instruction and creates a small video shooting area on a live interface; the small video shooting area is used for playing first audio and video stream data shot in the small video shooting area; the live broadcast interface is used for playing second audio and video stream data; the second audio and video stream data comprises the first audio and video stream data and live interactive data;
the anchor client responds to a small video shooting ending instruction and sends the first audio and video stream data to the server;
the server receives the first audio and video stream data; and generating a first small video according to the first audio and video stream data.
2. The live small and medium video shooting interaction method as claimed in claim 1, wherein:
The step that the anchor client responds to the small video shooting starting instruction and creates a small video shooting area on a live interface comprises the following steps:
the server responds to a small video shooting interaction starting instruction, a plurality of anchor identifiers are obtained, and a microphone connecting session connection between anchor clients corresponding to the anchor identifiers is established;
each anchor client responds to a small video shooting starting instruction and creates a small video shooting area on each live broadcast interface; the small video shooting area is used for playing first audio and video stream data shot in the small video shooting area; the live broadcast interface is used for playing second audio and video stream data; the second audio and video stream data comprises first audio and video stream data and live broadcast interactive data corresponding to the anchor identifications.
3. The live small and medium video shooting interaction method according to claim 2, further comprising:
each anchor client responds to a small video shooting ending instruction and sends the first audio and video stream data corresponding to each anchor identification to the server;
the server acquires the first audio and video stream data corresponding to each anchor identifier; and synthesizing each first audio and video stream data into a second small video.
4. The live small and medium video shooting interaction method according to claim 1, characterized in that:
the step that the anchor client responds to the small video shooting starting instruction and creates a small video shooting area on a live interface comprises the following steps:
the anchor client responds to a small video shooting starting instruction and calls a page switching container to display a live broadcast interface; the page switching container is constructed by the anchor client in advance, and a live interface and a small video shooting area are stored in the page container;
the anchor client zooms the live broadcast interface into a floating window with a preset size according to a preset zooming proportion;
the anchor client calls the page switching container to display a small video shooting area; wherein the small video shooting area and the floating window are not overlapped with each other.
5. The live small and medium video shooting interaction method as claimed in claim 4, wherein:
the anchor client responds to a small video shooting ending instruction and sends the first audio and video stream data to the server, and the method further comprises the following steps:
and canceling the display of the small video shooting area, and displaying the live broadcast interface in a full screen mode.
6. The live small and medium video shooting interaction method as claimed in claim 5, further comprising the steps of:
the anchor client acquires first page configuration data of the live interface and stores the first page configuration data in a first public attribute library corresponding to the live interface; the first page configuration data comprise first functional attribute data applied to the live interface, and the first functional attribute data are generated based on a functional control triggering the live interface;
the anchor client acquires second page configuration data of the small video shooting area, and stores the second page configuration data in a second public attribute library corresponding to the small video shooting area; the second page configuration data comprises second functional attribute data applied to the small video shooting area, and the second functional attribute data is generated based on a functional control triggering the small video shooting area;
the step of canceling the display of the small video shooting area and displaying the live interface in a full screen mode comprises the following steps:
and acquiring the first page configuration data from the first public attribute library, and recovering the live broadcast interface according to the first page configuration data.
7. The live small and medium video shooting interaction method as claimed in claim 2, further comprising the steps of:
and the anchor client responds to the small video shooting starting instruction, and stops receiving the audio data of the live broadcasting room after the small video shooting area is created on the live broadcasting interface.
8. The live small and medium video shooting interaction method according to claim 1, characterized in that:
after the step of the anchor client responding to the small video shooting start instruction, the method comprises the following steps:
the anchor client acquires countdown control data and displays a countdown control at a display position in the live interface according to the countdown control data; the countdown control is used for displaying the remaining preparation time before the anchor starts small video shooting.
9. A live small and medium video shooting interaction apparatus, characterized by comprising:
the area creating module is used for the anchor client to respond to the small video shooting starting instruction and create a small video shooting area on the live broadcast interface; the small video shooting area is used for playing first audio and video stream data shot in the small video shooting area; the live broadcast interface is used for playing second audio and video stream data; the second audio and video stream data comprises the first audio and video stream data and live broadcast interactive data;
The data sending module is used for sending the first audio and video stream data to the server by the anchor client in response to a small video shooting ending instruction;
the small video generation module is used for receiving the first audio and video stream data by the server; and generating a first small video according to the first audio and video stream data.
10. A computer device, comprising: processor, memory and computer program stored in the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1 to 8 are implemented when the processor executes the computer program.
11. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
CN202210415272.2A 2022-04-20 2022-04-20 Live small and medium video shooting interaction method, device, equipment and storage medium Pending CN114760520A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210415272.2A CN114760520A (en) 2022-04-20 2022-04-20 Live small and medium video shooting interaction method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210415272.2A CN114760520A (en) 2022-04-20 2022-04-20 Live small and medium video shooting interaction method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114760520A true CN114760520A (en) 2022-07-15

Family

ID=82331547

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210415272.2A Pending CN114760520A (en) 2022-04-20 2022-04-20 Live small and medium video shooting interaction method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114760520A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115086703A (en) * 2022-07-21 2022-09-20 南京百家云科技有限公司 Auxiliary live broadcast method, background server, system and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108769786A (en) * 2018-05-25 2018-11-06 网宿科技股份有限公司 A kind of method and apparatus of synthesis audio and video data streams
CN110099241A (en) * 2018-01-31 2019-08-06 北京视联动力国际信息技术有限公司 A kind of transmission method and device of audio/video flow
CN111083551A (en) * 2019-12-17 2020-04-28 腾讯科技(深圳)有限公司 Barrage rendering method and device, computer readable storage medium and computer equipment
CN111918085A (en) * 2020-08-06 2020-11-10 腾讯科技(深圳)有限公司 Live broadcast processing method and device, electronic equipment and computer readable storage medium
CN112135155A (en) * 2020-09-11 2020-12-25 上海七牛信息技术有限公司 Audio and video connecting and converging method and device, electronic equipment and storage medium
CN112533023A (en) * 2019-09-19 2021-03-19 聚好看科技股份有限公司 Method for generating connected-mic (Lian-Mai) chorus works and display device
CN114125476A (en) * 2021-10-15 2022-03-01 广州方硅信息技术有限公司 Display processing method of display interface, electronic device and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110099241A (en) * 2018-01-31 2019-08-06 北京视联动力国际信息技术有限公司 Audio/video stream transmission method and device
CN108769786A (en) * 2018-05-25 2018-11-06 网宿科技股份有限公司 Method and apparatus for synthesizing audio and video data streams
CN112533023A (en) * 2019-09-19 2021-03-19 聚好看科技股份有限公司 Method for generating connected-mic (Lian-Mai) chorus works and display device
CN111083551A (en) * 2019-12-17 2020-04-28 腾讯科技(深圳)有限公司 Barrage rendering method and device, computer readable storage medium and computer equipment
CN111918085A (en) * 2020-08-06 2020-11-10 腾讯科技(深圳)有限公司 Live broadcast processing method and device, electronic equipment and computer readable storage medium
WO2022028126A1 (en) * 2020-08-06 2022-02-10 腾讯科技(深圳)有限公司 Live streaming processing method and apparatus, and electronic device and computer readable storage medium
CN112135155A (en) * 2020-09-11 2020-12-25 上海七牛信息技术有限公司 Audio and video connecting and converging method and device, electronic equipment and storage medium
CN114125476A (en) * 2021-10-15 2022-03-01 广州方硅信息技术有限公司 Display processing method of display interface, electronic device and storage medium


Similar Documents

Publication Publication Date Title
CN110519611B (en) Live broadcast interaction method and device, electronic equipment and storage medium
CN104363476B Team-forming activity method based on online live broadcast, relevant apparatus and system
CN109068182B (en) Live broadcast room entering method, system, terminal and device for playing game based on live broadcast
CN113766340B Dance music interaction method, system, device and computer equipment for connected-mic live broadcast
CN112714330A Gift presenting method and device based on connected-mic live broadcast, and electronic equipment
CN112714334B (en) Music gift giving method and device, equipment and medium thereof
CN113676747B Connected-mic live broadcast fight interaction method, system, device and computer equipment
CN113453029B (en) Live broadcast interaction method, server and storage medium
CN109195003B (en) Interaction method, system, terminal and device for playing game based on live broadcast
CN112770135B (en) Live broadcast-based content explanation method and device, electronic equipment and storage medium
CN112423013B (en) Online interaction method, client, server, computing device and storage medium
CN114666672B Audience-initiated live fight interaction method, system and computer equipment
CN113573083A Connected-mic live broadcast interaction method and device, and computer equipment
CN114007094A (en) Voice microphone-connecting interaction method, system, medium and computer equipment for live broadcast room
WO2023093698A1 (en) Interaction method for game live-streaming, and storage medium, program product and electronic device
CN113824976A Method and device for displaying entrance show in live broadcast room, and computer equipment
CN113840156A (en) Live broadcast interaction method and device based on virtual gift and computer equipment
CN114201095A (en) Control method and device for live interface, storage medium and electronic equipment
CN114666671A (en) Live broadcast praise interaction method, system, device, equipment and storage medium
CN114760520A (en) Live small and medium video shooting interaction method, device, equipment and storage medium
CN115134621A (en) Live broadcast fight interaction method and device based on main and auxiliary picture display and electronic equipment
CN114125480A (en) Live broadcasting chorus interaction method, system and device and computer equipment
CN115314727A (en) Live broadcast interaction method and device based on virtual object and electronic equipment
CN114885191A (en) Interaction method, system, device and equipment based on exclusive nickname of live broadcast room
CN114760531A (en) Live broadcasting room team interaction method, device, system, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination