CN111741333B - Live broadcast data acquisition method and device, computer equipment and storage medium - Google Patents

Live broadcast data acquisition method and device, computer equipment and storage medium

Publication number
CN111741333B
Authority
CN
China
Prior art keywords: data, live broadcast, template, broadcast data, live
Prior art date
Legal status: Active
Application number
CN202010525422.6A
Other languages
Chinese (zh)
Other versions
CN111741333A (en)
Inventor
陈文琼
Current Assignee: Chengdu Kugou Business Incubator Management Co., Ltd.
Original Assignee
Guangzhou Kugou Computer Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Kugou Computer Technology Co Ltd filed Critical Guangzhou Kugou Computer Technology Co Ltd
Priority to CN202010525422.6A priority Critical patent/CN111741333B/en
Publication of CN111741333A publication Critical patent/CN111741333A/en
Application granted granted Critical
Publication of CN111741333B publication Critical patent/CN111741333B/en

Classifications

    • H ELECTRICITY; H04 ELECTRIC COMMUNICATION TECHNIQUE; H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/235 (under H04N21/00 > H04N21/20 > H04N21/23): Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N21/2187 (under H04N21/00 > H04N21/20 > H04N21/21 > H04N21/218): Live feed
    • H04N21/23418 (under H04N21/00 > H04N21/20 > H04N21/23 > H04N21/234): Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
    • H04N21/2743 (under H04N21/00 > H04N21/20 > H04N21/27 > H04N21/274): Video hosting of uploaded data from client
    • H04N21/4334 (under H04N21/00 > H04N21/40 > H04N21/43 > H04N21/433): Recording operations
    • H04N5/76 (under H04N5/00, Details of television systems): Television signal recording

Abstract

The application discloses a live broadcast data acquisition method and device, computer equipment and a storage medium, and belongs to the technical field of terminals. The method is performed by a server and comprises the following steps: acquiring bullet screen data, where the bullet screen data includes the bullet screen contents whose counts of identical bullet screens rank in the top N in a target live broadcast room, and N is an integer; matching the bullet screen data with template content; when the bullet screen data matches the template content, acquiring live broadcast data within a preset time period in the target live broadcast room; matching the live broadcast data with template live broadcast data; and when the live broadcast data matches the template live broadcast data, taking the live broadcast data within the preset time period as the target live broadcast data. Because the user does not need to record a video manually, the accuracy with which the user acquires the target live broadcast data is improved.

Description

Live broadcast data acquisition method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a live data acquisition method and apparatus, a computer device, and a storage medium.
Background
With the development of terminal technologies, more and more APPs (Applications) are available for entertainment, among which applications that can provide multimedia data, such as live broadcast applications and video applications, are particularly popular.
Currently, in a live broadcast application or a video application, when watching a video, a user often acquires the video by means such as screen recording or downloading. If, while watching, the user wants to keep a certain piece of the content, the user can cut the recorded or downloaded video to obtain that segment. For example, in a live broadcast room where the anchor streams a game, when the user sees video content that he or she wants to keep, the user records the video displayed by the terminal by triggering the screen recording function of the terminal.
With the above method of recording the video displayed by the terminal through the screen recording function, by the time the user has completed the series of operations, the desired content has often already played past, so the accuracy with which the user acquires the video is low.
Disclosure of Invention
The embodiment of the application provides a live data acquisition method and device, computer equipment and a storage medium. The technical scheme is as follows:
In one aspect, the present application provides a live data acquisition method, where the method is performed by a server, and the method includes:
acquiring bullet screen data, wherein the bullet screen data comprises the bullet screen contents whose counts of identical bullet screens rank in the top N in a target live broadcast room, and N is an integer;
matching the bullet screen data with the template content;
when the bullet screen data are matched with the template content, acquiring live broadcast data in a preset time period in the target live broadcast room;
matching the live broadcast data with template live broadcast data;
and when the live broadcast data are matched with the template live broadcast data, acquiring the live broadcast data in the preset time period as target live broadcast data.
In one aspect, the present application provides a live data acquisition apparatus, where the apparatus is used in a server, and the apparatus includes:
the first acquisition module is used for acquiring bullet screen data, wherein the bullet screen data comprises the bullet screen contents whose counts of identical bullet screens rank in the top N in a target live broadcast room, and N is an integer;
the first matching module is used for matching the bullet screen data with the template content;
the second acquisition module is used for acquiring live broadcast data in a preset time period in the target live broadcast room when the bullet screen data is matched with the template content;
the second matching module is used for matching the live broadcast data with the template live broadcast data;
and the third acquisition module is used for acquiring the live broadcast data in the preset time period as target live broadcast data when the live broadcast data is matched with the template live broadcast data.
In one aspect, the present application provides a computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by the processor to implement a live data acquisition method as described in the one aspect above.
In one aspect, the present application provides a computer-readable storage medium having at least one instruction, at least one program, a set of codes, or a set of instructions stored therein, which is loaded and executed by a processor to implement a live data acquisition method as described in the above one aspect.
In one aspect, the present application provides a computer program product having at least one instruction, at least one program, code set, or set of instructions stored therein, which is loaded and executed by a processor to implement a live data acquisition method as described in the above one aspect.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
the terminal acquires bullet screen data, wherein the bullet screen data comprises bullet screen contents with the number of the same bullet screens in a target live broadcast room being N-bit, and N is an integer; matching the bullet screen data with the template content; when the bullet screen data is matched with the template content, acquiring live broadcast data in a preset time period in a target live broadcast room; matching the live broadcast data with the template live broadcast data; and when the live broadcast data are matched with the template live broadcast data, acquiring the live broadcast data in a preset time period as target live broadcast data. According to the method and the device, through acquiring the barrage data in the live broadcast room, the barrage data is matched with the template content, whether the live broadcast data in the preset time period is acquired is determined, after the live broadcast data in the preset time period is acquired, whether the acquired live broadcast data is matched with the template live broadcast data is detected, whether the acquired live broadcast data is the target live broadcast data wanted by a user is determined, the user does not need to manually operate to record a video, the accuracy of acquiring the target live broadcast data by the user is improved, the mode of acquiring the live broadcast data is expanded, and the flexibility of acquiring the target live broadcast data by the terminal is also increased.
Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and a person skilled in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a video live broadcast system according to an exemplary embodiment of the present application;
fig. 2 is a flowchart of a method for acquiring live data according to an exemplary embodiment of the present application;
fig. 3 is a flowchart of a method for acquiring live data according to an exemplary embodiment of the present application;
fig. 4 is a flowchart of a method for acquiring live data according to an exemplary embodiment of the present application;
fig. 5 is a block diagram illustrating a structure of a live data acquiring apparatus according to an exemplary embodiment of the present application;
fig. 6 is a schematic structural diagram of a computer device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The solution provided by the present application can be used in real-life scenarios in which a user watches live broadcasts through a terminal. For ease of understanding, several terms and application scenarios involved in the embodiments of the present application are briefly introduced below.
1) Subtitle
Subtitles refer to non-image content, such as dialogue or voice-over, displayed in text form in online videos, television, movies and stage works, and also generally refer to text added to film and television works in post-production.
2) Live broadcast
Live broadcasting is a set of technologies that present rich elements such as images, sound and text to users over the Internet by means of streaming media technology, and involves a series of service modules such as encoding tools, streaming media data, servers, networks and players.
3) Stream pushing
Stream pushing refers to the process in which a live recording terminal (a terminal that records the anchor's live content) transmits the video content packaged in the capture stage to a server. Optionally, stream pushing software may be installed in the live recording terminal, and the live recording terminal transmits the packaged video content to the server through the stream pushing software.
As people's demand for entertainment and leisure keeps growing, multimedia products such as audio and video are becoming more and more abundant. In addition to the various APPs that provide video and music, live broadcast APPs with richer functions have also emerged and become popular. Providing bullet screen (barrage) interaction while playing video is one of the common functions of these APPs. That is, a user can comment through bullet screens in a live broadcast room, or comment through bullet screens while a video is playing, to express his or her own views and the like.
Please refer to fig. 1, which illustrates a schematic structural diagram of a video live broadcasting system according to an exemplary embodiment of the present application. The system comprises: a server 110 and several terminals 120.
The server 110 may be a single server, a server cluster composed of multiple servers, a virtualization platform, or a cloud computing service center.
The terminal 120 may be a terminal device having a video playing function, for example, the terminal may be a mobile phone, a tablet computer, an e-book reader, smart glasses, a smart watch, an MP4(Moving Picture Experts Group Audio Layer IV) player, a laptop portable computer, a desktop computer, and the like.
The terminal 120 and the server 110 are connected through a communication network. Alternatively, the communication network may be a wired network or a wireless network.
In the embodiment of the present application, the server 110 may transmit the video stream data to the terminal 120, and the terminal 120 performs video playing according to the video stream data.
Optionally, the video live broadcasting system may further include a live recording terminal 130.
Live recording terminal 130 may be a cell phone, a tablet computer, an e-book reader, smart glasses, a smart watch, an MP4 player, a laptop portable computer, a desktop computer, and the like.
The live recording terminal 130 corresponds to an image capturing component and an audio capturing component. The image capture component and the audio capture component may be part of the live recording terminal 130, for example, the image capture component and the audio capture component may be a camera and a microphone built in the live recording terminal 130; or, the image capturing component and the audio capturing component may also be connected to the live recording terminal 130 as peripheral devices of the live recording terminal 130, for example, the image capturing component and the audio capturing component may be a camera and a microphone respectively connected to the live recording terminal 130; or, the image capturing component and the audio capturing component may also be partially built in the live recording terminal 130, and partially serve as peripheral equipment of the live recording terminal 130, for example, the image capturing component may be a camera built in the live recording terminal 130, and the audio capturing component may be a microphone in an earphone connected to the live recording terminal 130. The embodiment of the application does not limit the implementation forms of the image acquisition assembly and the audio acquisition assembly.
Optionally, in this embodiment of the application, stream pushing software may be further installed in the live recording terminal 130, and the live recording terminal transmits the packetized video content to a corresponding server through the stream pushing software. That is, the server 110 may include a server corresponding to the plug-flow software and a server corresponding to the live-broadcast software.
In this embodiment, the live recording terminal 130 may upload a live video stream recorded locally to the server 110, and the server 110 performs related processing such as transcoding on the live video stream and then pushes the live video stream to the terminal 120. In a possible implementation manner, a live Application (APP) client may be installed in the live recording terminal 130, and the server 110 may be a live server corresponding to the live Application.
During live broadcasting, the live recording terminal runs the client of the live broadcast application. After user A (also called the anchor) triggers the live broadcast function in the live broadcast application interface, the client calls the image capture component and the audio capture component in the live recording terminal to record a live video stream, and uploads the recorded live video stream to the live server. The live server receives the live video stream and establishes a live channel for it. A user of a terminal can access the live server through the live application client or a browser client installed in the terminal; after the user selects the live channel on the access page, the live server pushes the live video stream to the terminal, and the terminal plays the live video stream in the live application interface or the browser interface.
Optionally, the wireless network or wired network described above uses standard communication techniques and/or protocols. The network is typically the Internet, but may be any network, including but not limited to a Local Area Network (LAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), a mobile, wired or wireless network, a private network, or any combination of virtual private networks. In some embodiments, data exchanged over the network is represented using techniques and/or formats including Hypertext Markup Language (HTML), Extensible Markup Language (XML), and the like. All or some of the links may also be encrypted using conventional encryption techniques such as Secure Sockets Layer (SSL), Transport Layer Security (TLS), Virtual Private Network (VPN) and Internet Protocol Security (IPsec). In other embodiments, custom and/or dedicated data communication techniques may also be used in place of, or in addition to, the data communication techniques described above.
Optionally, a user can log in to his or her own account on the terminal and enter the live broadcast platform to watch a live broadcast. In the live broadcast room, the user can interact with the anchor, for example by sending a barrage or sending a gift. Optionally, user B sends a barrage in the live broadcast room through one of the terminals 120 shown in fig. 1; at this time, the server may also send the barrage sent by user B to the other terminals, similarly to sending the live video, and the other terminals display the barrage sent by user B in the live broadcast application interface or the browser interface.
In one possible implementation, while watching a live broadcast, if the user wants to review the live broadcast or capture a piece of the video content, the user can download the recorded live broadcast and edit it with video editing software, or record the content for a period of time through the screen recording function of the user terminal. Because a live broadcast is highly real-time, after the user has gone through this series of operations, the video content the user wanted to record has usually already passed, so the accuracy with which the user obtains the live video is low, and the flexibility of obtaining the live video through the user terminal is affected.
Referring to fig. 2, a flowchart of a method for acquiring live data according to an exemplary embodiment of the present application is shown, where the method for acquiring live data may be used in a system architecture shown in fig. 1 and executed by a server in fig. 1, and as shown in fig. 2, the method for acquiring live data may include the following steps:
step 201, acquiring bullet screen data, where the bullet screen data includes bullet screen contents with the same number of bullet screens in a target live broadcast room in the top N bits, where N is an integer.
The target live broadcast room may be any live broadcast room that is live on the server. Users can send bullet screens in the target live broadcast room, and different users may send the same bullet screen content. For example, user A may send a "666" bullet screen, users B and C may both send "666" bullet screens, user D may send an "I love you" bullet screen, and user E may also send an "I love you" bullet screen. In this case, the bullet screens sent by users A, B and C are regarded as one identical bullet screen (with a count of 3), and the bullet screens sent by users D and E are regarded as another identical bullet screen (with a count of 2). If the counts of the identical bullet screens in the target live broadcast room are sorted, each identical bullet screen has a corresponding rank.
Optionally, the server may obtain the bullet screen data through a terminal on the anchor side, or may perform statistics and screening on the bullet screens of the target live broadcast room that the server itself has received.
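For illustration only, the counting and ranking described in step 201 can be sketched as follows in Python; the function and variable names are hypothetical and not part of the application.

```python
from collections import Counter

def top_n_barrages(barrages, n):
    """Count identical bullet screen (barrage) contents and return the n most frequent.

    barrages: list of bullet screen text strings collected from the target live room.
    Returns a list of (content, count) tuples, most frequent first.
    """
    return Counter(barrages).most_common(n)

# Example mirroring the description above:
barrages = ["666", "666", "666", "I love you", "I love you"]
print(top_n_barrages(barrages, 2))  # [('666', 3), ('I love you', 2)]
```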
And step 202, matching the bullet screen data with the template content.
The template content may be some vocabulary, phrases, sentences and other content related to the content live in the target live broadcast room. For example, when a game is live in the target live room, the template content may be words, phrases, sentences, etc. associated with the game. If dance is live in the target live broadcast room, the template content can be words, phrases, sentences and the like related to the dance. If the target live broadcasting room is live broadcasting singing, the template content can be words, phrases, sentences and the like related to the singing.
And 203, when the bullet screen data is matched with the template content, acquiring live broadcast data in a preset time period in the target live broadcast room.
That is, when the acquired bullet screen data matches the template content, the server may automatically acquire live broadcast data in the target live broadcast room. For example, the live data sent by the live recording terminal that has been received by the server is cut to obtain the live data within a preset time period. Alternatively, the preset time period may be preset in the server by a developer.
And step 204, matching the live broadcast data with the template live broadcast data.
Optionally, the template live broadcast data may be multimedia data such as audio, video, etc. related to the content live broadcast in the target live broadcast room, and the form of the template live broadcast data includes but is not limited to: any one or more of audio recording, video, and pictures.
The server matches the obtained live broadcast data with the template live broadcast data, and executes step 205 when the live broadcast data is matched with the template live broadcast data, otherwise, deletes the obtained live broadcast data.
And step 205, when the live broadcast data is matched with the template live broadcast data, acquiring live broadcast data in a preset time period as target live broadcast data.
That is, when the server determines that the live broadcast data matches the template live broadcast data, the live broadcast data in the preset time period may be taken as the target live broadcast data.
In summary, the server acquires the bullet screen data, where the bullet screen data includes the bullet screen contents whose counts of identical bullet screens rank in the top N in the target live broadcast room, and N is an integer; matches the bullet screen data with template content; when the bullet screen data matches the template content, acquires live broadcast data within a preset time period in the target live broadcast room; matches the live broadcast data with template live broadcast data; and when the live broadcast data matches the template live broadcast data, takes the live broadcast data within the preset time period as target live broadcast data. By acquiring the bullet screen data in the live broadcast room and matching it with the template content, the server determines whether to acquire the live broadcast data within the preset time period; after acquiring that live broadcast data, it further checks whether the acquired data matches the template live broadcast data, and thereby determines whether the acquired data is the target live broadcast data the user wants. The user does not need to record a video manually, so the accuracy with which the user acquires the target live broadcast data is improved, the ways of acquiring live broadcast data are expanded, and the flexibility of acquiring the target live broadcast data is also increased.
In a possible implementation, the server in the embodiment of fig. 2 may receive a data recording request sent by a first terminal, where the data recording request carries the bullet screen data, and the server acquires the bullet screen data according to the data recording request. The first terminal may be the live recording terminal described above. The embodiment shown in fig. 2 is described below by taking as an example the case in which the server acquires the bullet screen data according to the data recording request sent by the first terminal.
Referring to fig. 3, a flowchart of a method for acquiring live data according to an exemplary embodiment of the present application is shown, where the method for acquiring live data may be used in a system architecture shown in fig. 1 and executed by a server in fig. 1, and as shown in fig. 3, the method for acquiring live data may include the following steps:
step 301, receiving a data recording request sent by a first terminal, where the data recording request carries bullet screen data.
The bullet screen data comprises the bullet screen contents whose counts of identical bullet screens rank in the top N in the target live broadcast room, where N is an integer.
Alternatively, the first terminal may be the live recording terminal in fig. 1 described above. Optionally, after the anchor starts broadcasting, the first terminal may periodically count the bullet screens in the target live broadcast room (the anchor's live broadcast room); when the counted number of bullet screens reaches a certain number, the first terminal obtains the counts of the identical bullet screens contained in them and sorts the identical bullet screens by count.
For example, suppose the certain number is 10,000. When the first terminal counts that the number of bullet screens sent in the target live broadcast room within one period reaches 10,000, the first terminal counts the identical bullet screens, sorts them by count, and obtains the top-N bullet screen contents. For example, among the 10,000 bullet screens, the count of "I love you" bullet screens is 1000, the count of "super god" bullet screens is 4000, the count of "the anchor is 666" bullet screens is 3000, and the count of "the opponents are noobs" bullet screens is 2000. With N being 3, the top 3 bullet screen contents obtained by the first terminal are "super god", "the anchor is 666" and "the opponents are noobs". The first terminal may then generate a data recording request, carry these three bullet screen contents in the data recording request, and send the data recording request to the server. Optionally, the period may be preset by a developer of the stream pushing software in the first terminal.
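A minimal sketch of the periodic check performed by the first terminal, assuming the threshold and N shown in the example above; the request format is illustrative and not specified by the application.

```python
from collections import Counter

BARRAGE_THRESHOLD = 10_000  # the "certain number" in the example above
N = 3

def build_data_recording_request(period_barrages):
    """Build a data recording request once enough bullet screens were sent in one period.

    period_barrages: bullet screen strings counted by the live recording terminal
    within the current period. Returns None while the threshold is not reached.
    """
    if len(period_barrages) < BARRAGE_THRESHOLD:
        return None
    top_n = [content for content, _ in Counter(period_barrages).most_common(N)]
    # The request simply carries the top-N bullet screen contents to the server.
    return {"type": "data_recording_request", "barrage_data": top_n}
```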
Step 302, acquiring bullet screen data according to the data recording request.
Optionally, the server may analyze the data recording request according to the received data recording request, and acquire the bullet screen data carried therein.
And 303, identifying the live broadcast content in the target live broadcast room to obtain an identification result.
Optionally, the server may also obtain a corresponding recognition result by recognizing the live content in the target live broadcast room. In a possible implementation, when the anchor starts live broadcasting on the first terminal, the server may intercept one or more frames of images from the packaged video content sent by the first terminal and recognize these images to obtain the recognition result.
Alternatively, the above recognition process may be performed by an image recognition model in the server. For example, the recognition model in the server may recognize an image and output information describing the image. Optionally, the live content is game content and the recognition result is a unique identification of the game content: for example, when the anchor live-streams a game, the server recognizes the game through the image recognition model, obtains the name of the game, and determines which game the anchor is streaming. Or the live content is dance content and the recognition result is a unique identification of the dance content: for example, when the anchor dances live, the server recognizes the dance name through the image recognition model and determines which dance the anchor is performing. Or the live content is music content and the recognition result is a unique identification of the music content: for example, when the anchor sings live, the server recognizes the song through the image recognition model, obtains the name of the song being sung, and determines which song the anchor is singing.
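As a rough illustration of this recognition step, the sketch below samples intercepted frames and takes the most frequent label returned by an image recognition model; `recognize_frame` is a placeholder, since the application does not specify a particular model.

```python
from collections import Counter

def identify_live_content(frames, recognize_frame):
    """Identify what is being streamed from one or more intercepted frames.

    frames: decoded video frames intercepted from the pushed stream.
    recognize_frame: callable standing in for the server's image recognition model;
    it is assumed to return a label such as a game name, dance name or song name.
    """
    labels = [recognize_frame(frame) for frame in frames]
    label, _ = Counter(labels).most_common(1)[0]
    return label  # e.g. "Game one" -- the unique identification of the live content
```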
And 304, acquiring template content and template live broadcast data corresponding to the identification result according to the identification result.
Optionally, the server may store a correspondence between the recognition result and the template content and the template live broadcast data, and query the template content and the template live broadcast data corresponding to the recognition result from the database according to the recognition result. Please refer to table 1, which shows a correspondence table between the recognition result and the template content and template live data according to an exemplary embodiment of the present application.
Recognition result | Template content | Template live broadcast data
Game one | Template content one | Template live broadcast data one
Game two | Template content two | Template live broadcast data two
Dance one | Template content three | Template live broadcast data three
Song two | Template content four | Template live broadcast data four
…… | …… | ……
TABLE 1
As shown in Table 1, the recognition result is one of a game name, a dance name or a song name, and each game, dance or song corresponds to its own template content and template live broadcast data. Optionally, Table 1 may be collected and established in advance and stored in a database, and the server may query the database for the template content and template live broadcast data corresponding to the recognition result, thereby obtaining them.
Optionally, the template content may be similar to that described in step 202 above, and is not described here again. For template live data, the template live data may be one or both of template audio data and template image data.
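The correspondence in Table 1 can be thought of as a simple keyed lookup; the sketch below uses an in-memory dictionary as a stand-in for the database, with illustrative values only.

```python
# Illustrative stand-in for the database behind Table 1.
TEMPLATE_TABLE = {
    "Game one":  {"template_content": ["god", "awesome", "invincible"],
                  "template_live_data": "template_live_data_one"},
    "Game two":  {"template_content": ["template content two"],
                  "template_live_data": "template_live_data_two"},
    "Dance one": {"template_content": ["template content three"],
                  "template_live_data": "template_live_data_three"},
    "Song two":  {"template_content": ["template content four"],
                  "template_live_data": "template_live_data_four"},
}

def lookup_templates(recognition_result):
    """Return the template content and template live data for a recognition result."""
    entry = TEMPLATE_TABLE.get(recognition_result)
    if entry is None:
        return None, None
    return entry["template_content"], entry["template_live_data"]
```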
Step 305, detecting whether the bullet screen data contains bullet screen content including the template content.
Optionally, after the server acquires the template content, it screens the bullet screen data against the template content and checks whether the acquired bullet screen data contains bullet screen content that includes the template content. Take the form of the template content as an example. For Game one, the template content may be words such as "god", "awesome" and "invincible". If the live content of the target live broadcast room is game content and the server's recognition result is Game one, the template content acquired in step 304 is the words "god", "awesome" and "invincible". The server then checks the acquired bullet screen data: if the bullet screen data contains bullet screen content that includes any of the template content, step 306 is executed; otherwise the process ends. For example, N is 3, and the top-3 bullet screen contents in the bullet screen data are "666", "the anchor is really invincible" and "god-tier operation". In this case the bullet screen data contains bullet screen content that includes "invincible", so step 306 is executed. Conversely, if none of the top-3 bullet screen contents includes any of the template content, the server may end the process.
In a possible implementation, there are at least two template contents and the template contents also have a priority order, and the server matches the bullet screen data with the template contents according to that priority order. That is, in this step, the server may check the acquired bullet screen data according to the priority order of the template contents. For example, for the template contents "god", "awesome" and "invincible", the priority order may be "invincible", "god" and "awesome", and the server checks the bullet screen data against "invincible", "god" and "awesome" in turn.
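A minimal sketch of the containment check in step 305, with the template contents assumed to be ordered by priority; the example strings follow the wording used above.

```python
def match_barrages_to_templates(barrage_contents, template_contents):
    """Check the top-N bullet screen contents against the template contents.

    template_contents is assumed to be ordered by priority (highest first).
    Returns the first template entry contained in any bullet screen content,
    or None when nothing matches (in which case the process ends).
    """
    for template in template_contents:      # priority order
        for barrage in barrage_contents:    # top-N bullet screen contents
            if template in barrage:
                return template
    return None

# Example from this step:
print(match_barrages_to_templates(
    ["666", "the anchor is really invincible", "god-tier operation"],
    ["invincible", "god", "awesome"]))      # -> "invincible"
```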
And step 306, when the bullet screen data contains bullet screen content containing the template content, generating a data acquisition request, wherein the data acquisition request is used for acquiring live broadcast data in a preset time period in a target live broadcast room.
That is, when the server detects that bullet screen content including the template content exists in the bullet screen data, the server may regard the bullet screen data as matching the template content, generate a data acquisition request, and perform step 307. In one possible implementation, the preset time period may be positively related to the matching degree between the bullet screen data and the template content. For example, the server further stores a correspondence table between the matching degree and the preset time period. Please refer to Table 2, which shows a correspondence table between the matching degree and the preset time period according to an exemplary embodiment of the present application.
Matching degree | Preset time period
More than 50% | 40 seconds
More than 10% and not more than 50% | 30 seconds
Not more than 10% | 20 seconds
…… | ……
TABLE 2
As shown in Table 2, after the server determines that the bullet screen data contains bullet screen content including the template content, the server may further calculate the corresponding matching degree and look up the corresponding preset time period in Table 2. For example, in the above example, N is 3 and the top-3 bullet screen contents in the bullet screen data are "666", "the anchor is really invincible" and "god-tier operation", while the template contents are "god", "awesome" and "invincible". The server may calculate that the matching degree of the bullet screen data is 33.3%, and by looking up Table 2 determine that the preset time period is 30 seconds, so the data acquisition request generated by the server is used to acquire 30 seconds of live broadcast data in the target live broadcast room.
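The application does not spell out how the matching degree itself is computed, so only the Table 2 lookup is sketched here, with the duration values taken from the table.

```python
def preset_duration_seconds(matching_degree):
    """Map a matching degree in [0.0, 1.0] to a recording duration, mirroring Table 2."""
    if matching_degree > 0.5:
        return 40
    if matching_degree > 0.1:
        return 30
    return 20

print(preset_duration_seconds(0.333))  # 30, as in the example above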
And 307, acquiring live broadcast data in a preset time period in the target live broadcast room according to the data acquisition request.
Optionally, after obtaining the preset time period, the server may also obtain the timestamp corresponding to the bullet screen data, and, according to that timestamp, intercept the 30 seconds of live broadcast data from the live broadcast data sent by the live recording terminal.
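One possible way to perform the interception, assuming the live data is available as timestamped segments and that the window ends at the bullet screen timestamp; the application does not fix where the window is placed relative to that timestamp.

```python
def clip_live_data(live_segments, barrage_timestamp, duration_seconds):
    """Cut the live broadcast data for the preset time period.

    live_segments: list of (timestamp_seconds, segment) tuples already received
    from the live recording terminal. The window below ends at the bullet screen
    timestamp, which is only one possible choice.
    """
    start = barrage_timestamp - duration_seconds
    return [seg for ts, seg in live_segments if start <= ts <= barrage_timestamp]
```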
And 308, matching the live broadcast data with the template live broadcast data.
Namely, the server matches the acquired live broadcast data in the preset time period with the acquired template live broadcast data.
In a possible implementation, the template live broadcast data includes template audio data, and this step may be replaced with: the server matches the audio data in the live broadcast data with the template audio data. Optionally, the server may do so as follows: compare the audio data in the live broadcast data with the template audio data to obtain a first similarity value, and detect the relationship between the first similarity value and a first preset threshold. Optionally, the server may input the audio data in the live broadcast data and the template audio data into a speech recognition model, which outputs a similarity value between the two audio inputs; the server determines whether the live broadcast data matches the template live broadcast data according to the similarity value output by the speech recognition model. When the first similarity value is greater than the first preset threshold, the live broadcast data is considered to match the template live broadcast data; otherwise it is considered not to match. For example, the speech recognition model may output a percentage for the two audio inputs, the percentage indicating the degree of similarity between them. When the model outputs 80% and the first preset threshold is 75%, the server may consider that the live broadcast data matches the template live broadcast data.
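A sketch of the audio comparison, assuming the speech recognition model is available as a callable that returns a similarity value in [0.0, 1.0]; the 0.75 threshold mirrors the 75% in the example.

```python
def audio_matches_template(live_audio, template_audio, similarity_model,
                           first_preset_threshold=0.75):
    """Decide whether the audio in the live broadcast data matches the template audio.

    similarity_model stands in for the speech recognition model described above
    and is assumed to return a similarity value between 0.0 and 1.0.
    """
    first_similarity = similarity_model(live_audio, template_audio)
    return first_similarity > first_preset_threshold
```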
In a possible implementation, the template live broadcast data further includes template image data, that is, the template live broadcast data includes both template audio data and template image data. When the audio data in the live broadcast data does not match the template audio data, the server then matches the video data in the live broadcast data with the template image data. Optionally, the server may do so as follows: compare each frame of video data in the live broadcast data with the template image data to obtain a second similarity value, and detect the relationship between the second similarity value and a second preset threshold.
Optionally, the server may input each frame of video data in the live broadcast data and the template image data into an image recognition model, which outputs a similarity value between the two image inputs; the server determines whether the live broadcast data matches the template live broadcast data according to the similarity value output by the image recognition model. When the second similarity value is greater than the second preset threshold, the live broadcast data is considered to match the template live broadcast data; otherwise it is considered not to match. For example, the image recognition model may output a percentage for the two image inputs, the percentage indicating the degree of similarity between them. When the model outputs 90% and the second preset threshold is 80%, the server may consider that the live broadcast data matches the template live broadcast data. That is, in the frame-by-frame comparison, a single frame whose second similarity value with the template image data is greater than the second preset threshold is enough to regard the live broadcast data as matching the template live broadcast data.
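The frame-by-frame image comparison can be sketched the same way; `image_similarity_model` is a placeholder, and the 0.8 threshold mirrors the 80% in the example.

```python
def video_matches_template(live_frames, template_image, image_similarity_model,
                           second_preset_threshold=0.8):
    """Decide whether the video in the live broadcast data matches the template image data.

    Each frame is compared with the template image; as described above, one frame
    whose similarity exceeds the second preset threshold is enough to treat the
    live broadcast data as matching.
    """
    return any(image_similarity_model(frame, template_image) > second_preset_threshold
               for frame in live_frames)
```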
It should be noted that the template live broadcast data may also contain template image data alone, in which case the video data in the live broadcast data is matched directly with the template image data. Alternatively, the order in which the template audio data and the template image data are matched may be changed; this is not limited in the present application.
In one possible implementation, the number of the template live broadcast data is at least two, and the template live broadcast data also has a priority order. In this step, the server may match the live broadcast data with the template live broadcast data according to the priority order of the template live broadcast data.
And 309, when the live broadcast data is matched with the template live broadcast data, acquiring live broadcast data in a preset time period as target live broadcast data.
And when the live broadcast data are matched with the template live broadcast data, the server acquires the intercepted live broadcast data in the preset time period as target live broadcast data.
Corresponding to one implementation manner of the above step 308 (the template live broadcast data includes template audio data), step 309 may be replaced with: and when the audio data in the live broadcast data are matched with the template audio data, acquiring the live broadcast data in a preset time period as target live broadcast data. Or when the first similarity value is larger than a first preset threshold value, acquiring the live broadcast data in a preset time period as target live broadcast data.
Corresponding to another implementation manner in step 308 (the template live broadcast data includes template audio data and template image data), step 309 may be replaced with: and when the video data in the live broadcast data are matched with the template image data, acquiring the live broadcast data in a preset time period as target live broadcast data. Or when the second similarity value is larger than a second preset threshold value, acquiring the live broadcast data in a preset time period as target live broadcast data.
Optionally, after step 309, the server may further generate a prompt message and send it to the first terminal. The prompt message indicates that target live broadcast data has been generated in the target live broadcast room. For example, the server sends the generated prompt message to the first terminal and the other user terminals to remind every user in the target live broadcast room that the anchor has generated target live broadcast data during the live broadcast, so every user can also see the prompt message.
Optionally, after step 309, the server may also store the target live broadcast data, receive a data viewing request sent by the first terminal, and send the target live broadcast data to the first terminal according to the data viewing request. The data viewing request is used to obtain the target live broadcast data. For example, after the server sends the prompt message to the first terminal, a first control may be displayed in the target live broadcast room; each user or the anchor in the target live broadcast room can click the first control to view an interface listing the stored target live broadcast data. When the user or anchor clicks a piece of target live broadcast data, his or her terminal sends a data viewing request asking the server to return the stored target live broadcast data, so the target live broadcast data can be viewed.
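For illustration, the storage, prompt and viewing flow described above could look like the following in-memory sketch; the message formats are assumptions, not part of the application.

```python
class TargetLiveDataStore:
    """Minimal in-memory stand-in for how the server might keep target live broadcast data."""

    def __init__(self):
        self._store = {}  # live_room_id -> list of stored target live data clips

    def save(self, live_room_id, target_live_data):
        """Store a clip and return a prompt message for the terminals in the room."""
        self._store.setdefault(live_room_id, []).append(target_live_data)
        return {"type": "prompt", "live_room_id": live_room_id,
                "text": "Target live broadcast data has been generated in this live room."}

    def handle_view_request(self, live_room_id):
        """Answer a data viewing request from a terminal with the stored clips."""
        return self._store.get(live_room_id, [])
```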
In summary, the server acquires the bullet screen data, where the bullet screen data includes the bullet screen contents whose counts of identical bullet screens rank in the top N in the target live broadcast room, and N is an integer; matches the bullet screen data with template content; when the bullet screen data matches the template content, acquires live broadcast data within a preset time period in the target live broadcast room; matches the live broadcast data with template live broadcast data; and when the live broadcast data matches the template live broadcast data, takes the live broadcast data within the preset time period as target live broadcast data. By acquiring the bullet screen data in the live broadcast room and matching it with the template content, the server determines whether to acquire the live broadcast data within the preset time period; after acquiring that live broadcast data, it further checks whether the acquired data matches the template live broadcast data, and thereby determines whether the acquired data is the target live broadcast data the user wants. The user does not need to record a video manually, so the accuracy with which the user acquires the target live broadcast data is improved, the ways of acquiring live broadcast data are expanded, and the flexibility of acquiring the target live broadcast data is also increased.
The method embodiments shown in fig. 2 and fig. 3 are described below by taking as an example the case in which the first terminal is the live recording terminal in fig. 1 and the server includes a streaming server and a recognition server. The streaming server can be regarded as the server corresponding to the stream pushing software in the live recording terminal, and the recognition server can be a server providing functions such as speech recognition and image recognition.
Referring to fig. 4, a flowchart of a method for acquiring live data according to an exemplary embodiment of the present application is shown, where the method for acquiring live data may be used in a system architecture shown in fig. 1 and executed by a server and a live recording terminal in fig. 1, and as shown in fig. 4, the method for acquiring live data may include the following steps:
step 401, a live broadcast recording terminal records live broadcast video data.
Step 402, the live recording terminal sends the live video data to the streaming server.
In step 403, the recognition server obtains live video data from the streaming server.
At step 404, the recognition server recognizes that the live video data is game content.
Step 403 and step 404 may correspond to the description of the server identifying the live content in step 303 to step 304.
Step 405, the recognition server obtains the template content and the template live broadcast data corresponding to the game content from the database.
Step 406, the live recording terminal detects that the number of bullet screens in the live broadcast room has reached a certain number.
Step 407, the live recording terminal sends a data recording request to the recognition server.
Step 408, the recognition server acquires the bullet screen data according to the data recording request.
The steps 406 to 408 may correspond to the server obtaining the description of the bullet screen data in the steps 301 to 302.
Step 409, the recognition server determines that the bullet screen data matches the template content.
At step 410, the recognition server obtains 30 seconds of live data from the streaming server.
In step 411, the recognition server determines that the live data matches the template live data.
The steps 409 to 411 may correspond to the description of the steps executed by the server in the steps 305 to 308.
In step 412, the recognition server generates the prompt message and the storage information.
Step 413, the recognition server sends the prompt message to the live recording terminal.
In step 414, the recognition server sends the storage information to the streaming server.
Step 415, the streaming server stores the 30 seconds of live broadcast data.
Step 416, the live recording terminal sends a data viewing request to the streaming server.
And step 417, the streaming server pulls the previously stored 30-second live broadcast data and sends the data to the live broadcast recording terminal.
Steps 412 to 417 may correspond to the description of the related content in step 309.
In summary, the server acquires the bullet screen data, where the bullet screen data includes the bullet screen contents whose counts of identical bullet screens rank in the top N in the target live broadcast room, and N is an integer; matches the bullet screen data with template content; when the bullet screen data matches the template content, acquires live broadcast data within a preset time period in the target live broadcast room; matches the live broadcast data with template live broadcast data; and when the live broadcast data matches the template live broadcast data, takes the live broadcast data within the preset time period as target live broadcast data. By acquiring the bullet screen data in the live broadcast room and matching it with the template content, the server determines whether to acquire the live broadcast data within the preset time period; after acquiring that live broadcast data, it further checks whether the acquired data matches the template live broadcast data, and thereby determines whether the acquired data is the target live broadcast data the user wants. The user does not need to record a video manually, so the accuracy with which the user acquires the target live broadcast data is improved, the ways of acquiring live broadcast data are expanded, and the flexibility of acquiring the target live broadcast data is also increased.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 5, a block diagram of a live data acquiring apparatus according to an exemplary embodiment of the present application is shown. The live data acquisition device can be used in a system architecture as shown in fig. 1 to execute all or part of the steps executed by the server in the method provided by the embodiment shown in fig. 2, fig. 3 or fig. 4. As shown in fig. 5, the apparatus mainly includes:
the first obtaining module 501 is configured to obtain bullet screen data, where the bullet screen data includes the bullet screen contents whose counts of identical bullet screens rank in the top N in a target live broadcast room, and N is an integer;
a first matching module 502, configured to match the barrage data with template content;
a second obtaining module 503, configured to obtain live broadcast data in a preset time period in the target live broadcast room when the barrage data matches the template content;
a second matching module 504, configured to match the live broadcast data with template live broadcast data;
a third obtaining module 505, configured to obtain, when the live broadcast data matches the template live broadcast data, the live broadcast data in the preset time period as target live broadcast data.
Optionally, the template live broadcast data includes template audio data, and the second matching module 504 is configured to match the audio data in the live broadcast data with the template audio data;
the third obtaining module 505 is configured to, when the audio data in the live broadcast data matches the template audio data, acquire the live broadcast data in the preset time period as the target live broadcast data.
Optionally, the second matching module 504 includes: a first acquisition unit and a first detection unit;
the first obtaining unit is used for comparing the audio data in the live broadcast data with the template audio data to obtain a first similarity value;
the first detection unit is used for detecting the magnitude relation between the first similarity value and a first preset threshold value.
The third obtaining module 505 is configured to, when the first similarity value is greater than the first preset threshold, acquire the live broadcast data in the preset time period as the target live broadcast data.
Optionally, the template live broadcast data further includes template image data, and the apparatus further includes:
the third matching module is used for matching the video data in the live broadcast data with the template image data when the audio data in the live broadcast data is not matched with the template audio data;
the third obtaining module 505 is configured to, when the video data in the live broadcast data matches the template image data, acquire the live broadcast data in the preset time period as the target live broadcast data.
Optionally, the third matching module includes a second obtaining unit and a second detecting unit;
the second obtaining unit is configured to compare video data of each frame in the live broadcast data with the template image data to obtain a second similarity value;
the second detection unit is used for detecting the magnitude relation between the second similarity value and a second preset threshold value;
the third obtaining module 505 is configured to, when the second similarity value is greater than the second preset threshold, acquire the live broadcast data in the preset time period as the target live broadcast data.
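A comparable illustrative sketch for the image branch: each frame is compared with the template image to obtain a second similarity value, and a match is declared when that value exceeds the second preset threshold. The pixel-difference measure, the 0.9 threshold, and the assumption that every frame has the same shape as the template are choices made only for this example.

    import numpy as np

    def second_similarity_value(frame, template_image):
        # 1 minus the normalized mean absolute pixel difference (8-bit images assumed).
        f = np.asarray(frame, dtype=float)
        t = np.asarray(template_image, dtype=float)
        return 1.0 - float(np.mean(np.abs(f - t))) / 255.0

    def video_matches(frames, template_image, second_preset_threshold=0.9):
        # A match if any frame's second similarity value exceeds the threshold.
        return any(second_similarity_value(frame, template_image) > second_preset_threshold
                   for frame in frames)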
The first matching module 502 is configured to detect whether bullet screen content including the template content exists in the bullet screen data;
the second obtaining module 503 includes: a first generation unit and a third acquisition unit;
the first generating unit is configured to generate a data acquisition request when the bullet screen data contains bullet screen content that includes the template content, where the data acquisition request is used to acquire the live broadcast data in a preset time period in the target live broadcast room;
and the third acquisition unit is used for acquiring the live broadcast data in the preset time period in the target live broadcast room according to the data acquisition request.
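By way of illustration of the first generating unit and the third acquisition unit, the sketch below only builds a request when some bullet screen content contains the template content; the JSON field names and the fetch callback are invented for the example and are not specified by this application.

    import json
    import time

    def build_data_acquisition_request(barrage_data, template_content,
                                       room_id, window_seconds):
        # Generate a request only when a top-N barrage contains template content.
        hit = any(keyword in barrage
                  for barrage in barrage_data
                  for keyword in template_content)
        if not hit:
            return None
        return json.dumps({
            "room_id": room_id,               # target live broadcast room
            "window_seconds": window_seconds, # preset time period to record
            "requested_at": int(time.time()),
        })

    def acquire_live_data(request, fetch_live_data):
        # Third acquisition unit: fetch the live data according to the request.
        if request is None:
            return None
        return fetch_live_data(json.loads(request))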
Optionally, the apparatus further comprises:
a first receiving module, configured to receive a data recording request sent by a first terminal before the first obtaining module 501 obtains the bullet screen data, where the data recording request carries the bullet screen data;
the first obtaining module 501 is configured to obtain the bullet screen data according to the data recording request.
Optionally, the apparatus includes:
a first generating module, configured to generate a prompt message after the third obtaining module 505 obtains the live broadcast data in the preset time period as target live broadcast data, where the prompt message is used to indicate that the target live broadcast data is generated in the target live broadcast room;
and the first sending module is used for sending the prompt message to the first terminal.
Optionally, the apparatus includes:
a first storage module, configured to store the target live broadcast data after the third obtaining module 505 obtains the live broadcast data in the preset time period as the target live broadcast data;
a second receiving module, configured to receive a data viewing request sent by the first terminal, where the data viewing request is used to obtain the target live broadcast data;
and the second sending module is used for sending the target live broadcast data to the first terminal according to the data viewing request.
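The storage and viewing flow can be pictured as a simple in-memory store keyed by room; a real deployment would persist the data, and every name below is an assumption made for the illustration.

    _TARGET_LIVE_DATA = {}  # room_id -> stored target live data (illustration only)

    def store_target_live_data(room_id, live_data):
        # First storage module: keep the target live data for later viewing.
        _TARGET_LIVE_DATA[room_id] = live_data

    def handle_data_viewing_request(room_id):
        # Second receiving / second sending modules: return the stored target
        # live data to the first terminal, or None if nothing is stored yet.
        return _TARGET_LIVE_DATA.get(room_id)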
Optionally, the apparatus further comprises:
a first identification module, configured to identify live broadcast content in the target live broadcast room before the first matching module 502 matches the barrage data with the template content, so as to obtain an identification result;
and the fourth acquisition module is used for acquiring the template content and the template live broadcast data corresponding to the identification result according to the identification result.
Optionally, the live content is game content, and the identification result is a unique identifier of the game content.
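One way to picture the identification step is a lookup table keyed by the game's unique identifier, from which the template content and template live broadcast data are obtained; the identifiers, keywords, and file names below are invented for illustration only.

    # Hypothetical mapping from a game's unique identifier to its templates.
    TEMPLATES_BY_GAME_ID = {
        "game_001": {
            "template_content": ["pentakill", "ace"],
            "template_live_data": {
                "audio": "game_001_highlight.wav",
                "image": "game_001_victory.png",
            },
        },
    }

    def templates_for_game(game_id):
        # Return (template content, template live data) for the identified game.
        entry = TEMPLATES_BY_GAME_ID.get(game_id)
        if entry is None:
            return None, None
        return entry["template_content"], entry["template_live_data"]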
Optionally, the number of the template content and the number of the template live broadcast data are at least two, the template content has a priority order, and the template live broadcast data has a priority order;
the first matching module 502 is configured to match the bullet screen data with the template content according to the priority order of the template content;
the second matching module 504 is configured to match the live broadcast data with the template live broadcast data according to the priority order of the template live broadcast data.
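The priority-ordered matching can be sketched as trying templates one by one, highest priority first, and stopping at the first hit; the list-ordering convention and the generic matcher callback are assumptions made for the sketch.

    def match_in_priority_order(candidate, templates_in_priority_order, matcher):
        # templates_in_priority_order: highest-priority template first.
        for template in templates_in_priority_order:
            if matcher(candidate, template):
                return template   # the first template that matches wins
        return None               # no template matched

    # Usage: the bullet screen data is matched against the template contents this
    # way first, then the acquired live data against the template live data.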
Fig. 6 is a schematic structural diagram of a computer device according to an exemplary embodiment of the present application. As shown in fig. 6, the computer apparatus 600 includes a Central Processing Unit (CPU) 601, a system Memory 604 including a Random Access Memory (RAM) 602 and a Read Only Memory (ROM) 603, and a system bus 605 connecting the system Memory 604 and the CPU 601. The computer device 600 also includes a basic Input/Output System (I/O System) 606 for facilitating information transfer between devices within the computer, and a mass storage device 607 for storing an operating System 612, application programs 613, and other program modules 614.
The basic input/output system 606 includes a display 608 for displaying information and an input device 609 such as a mouse, keyboard, etc. for a user to input information. Wherein the display 608 and the input device 609 are connected to the central processing unit 601 through an input output controller 610 connected to the system bus 605. The basic input/output system 606 may also include an input/output controller 610 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, input/output controller 610 may also provide output to a display screen, a printer, or other type of output device.
The mass storage device 607 is connected to the central processing unit 601 through a mass storage controller (not shown) connected to the system bus 605. The mass storage device 607 and its associated computer-readable media provide non-volatile storage for the computer device 600. That is, the mass storage device 607 may include a computer-readable medium (not shown) such as a hard disk or a CD-ROM (Compact disk Read-Only Memory) drive.
The computer readable media may include computer storage media and communication media. Computer storage media include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media include RAM, ROM, EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory or other solid-state memory technology, CD-ROM, DVD (Digital Video Disc) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will appreciate that computer storage media are not limited to the foregoing. The system memory 604 and the mass storage device 607 described above may be collectively referred to as memory.
The computer device 600 may be connected to the internet or other network devices through a network interface unit 611 connected to the system bus 605.
The memory further includes one or more programs, the one or more programs are stored in the memory, and the central processing unit 601 implements all or part of the steps performed by the server in the methods provided by the above embodiments of the present application by executing the one or more programs.
In an exemplary embodiment, a non-transitory computer readable storage medium including instructions is further provided, for example a memory storing a computer program (instructions) that can be executed by a processor of a computer device to perform all or part of the steps of the methods shown in the embodiments of the present application. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like. Optionally, the storage medium stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the live broadcast data acquisition method according to the above embodiments.
The embodiments of the present application further provide a computer program product, where at least one instruction, at least one program, a code set, or an instruction set is stored in the computer program product, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by a processor to implement all or part of the steps executed by a server in the live broadcast data acquisition method as described in the above embodiments.
It should be noted that: when the live broadcast data acquisition apparatus provided in the above embodiments acquires live broadcast data, the division into the above functional modules is merely used as an example for illustration; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus embodiments and the method embodiments provided above belong to the same concept; their specific implementation processes are described in detail in the method embodiments and are not repeated here.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (15)

1. A live data acquisition method, wherein the method is executed by a server, and wherein the method comprises:
acquiring bullet screen data, wherein the bullet screen data comprises bullet screen contents whose counts of identical bullet screens rank in the top N in a target live broadcast room, and N is an integer;
matching the bullet screen data with the template content; the template content is at least one of words, phrases or sentences related to the content live broadcast in the target live broadcast room;
when the bullet screen data are matched with the template content, acquiring live broadcast data in a preset time period in the target live broadcast room;
matching the live broadcast data with template live broadcast data; the template live broadcast data are multimedia data related to the live broadcast content of the target live broadcast room;
and when the live broadcast data are matched with the template live broadcast data, acquiring the live broadcast data in the preset time period as target live broadcast data.
2. The method of claim 1, wherein the template live data comprises template audio data, and wherein matching the live data with the template live data comprises:
matching audio data in the live broadcast data with template audio data;
when the live broadcast data are matched with the template live broadcast data, acquiring the live broadcast data in the preset time period as target live broadcast data, wherein the method comprises the following steps:
and when the audio data in the live broadcast data are matched with the template audio data, acquiring the live broadcast data in the preset time period as the target live broadcast data.
3. The method of claim 2, wherein matching audio data in the live data with template audio data comprises:
comparing audio data in the live broadcast data with the template audio data to obtain a first similarity value;
detecting the magnitude relation between the first similarity value and a first preset threshold value;
when the audio data in the live broadcast data are matched with the template audio data, acquiring the live broadcast data in the preset time period as the target live broadcast data, wherein the method comprises the following steps:
and when the first similarity value is larger than the first preset threshold value, acquiring the live broadcast data in the preset time period as the target live broadcast data.
4. The method of claim 2, wherein the template live data further comprises template image data, the method further comprising:
when the audio data in the live broadcast data are not matched with the template audio data, matching the video data in the live broadcast data with the template image data;
when the live broadcast data are matched with the template live broadcast data, acquiring the live broadcast data in the preset time period as target live broadcast data, wherein the method comprises the following steps:
and when the video data in the live broadcast data are matched with the template image data, acquiring the live broadcast data in the preset time period as the target live broadcast data.
5. The method of claim 4, wherein matching video data in the live data with the template image data comprises:
comparing the video data of each frame in the live broadcast data with the template image data to obtain a second similarity value;
detecting the magnitude relation between the second similarity value and a second preset threshold value;
when the video data in the live broadcast data are matched with the template image data, acquiring the live broadcast data in the preset time period as the target live broadcast data, wherein the method comprises the following steps:
and when the second similarity value is larger than the second preset threshold value, acquiring the live broadcast data in the preset time period as the target live broadcast data.
6. The method of claim 1, wherein matching the bullet screen data with template content comprises:
detecting whether bullet screen content containing the template content exists in the bullet screen data;
when the bullet screen data is matched with the template content, acquiring live broadcast data of a preset time period in the target live broadcast room comprises:
when the bullet screen data contains bullet screen content containing the template content, generating a data acquisition request, wherein the data acquisition request is used for acquiring live broadcast data in a preset time period in the target live broadcast room;
and acquiring the live broadcast data in a preset time period in the target live broadcast room according to the data acquisition request.
7. The method of claim 1, further comprising, prior to said obtaining bullet screen data:
receiving a data recording request sent by a first terminal, wherein the data recording request carries the bullet screen data;
the acquiring of the bullet screen data comprises the following steps:
and acquiring the bullet screen data according to the data recording request.
8. The method according to claim 7, wherein after the acquiring live data of the preset time period as target live data, further comprising:
generating prompt information, wherein the prompt information is used for indicating that the target live broadcast data are generated in the target live broadcast room;
and sending the prompt message to the first terminal.
9. The method according to claim 7, wherein after the acquiring live data of the preset time period as target live data, further comprising:
storing the target live broadcast data;
receiving a data viewing request sent by the first terminal, wherein the data viewing request is used for acquiring the target live broadcast data;
and sending the target live broadcast data to the first terminal according to the data viewing request.
10. The method of claim 1, wherein prior to said matching said bullet screen data to template content, said method further comprises:
identifying the live broadcast content in the target live broadcast room to obtain an identification result;
and acquiring the template content and the template live broadcast data corresponding to the identification result according to the identification result.
11. The method of claim 10, wherein the live content is game content and the identification result is a unique identification of the game content.
12. The method of claim 10, wherein the template content and the template live data are respectively at least two in number, the template content has a priority order, and the template live data has a priority order;
the matching of the bullet screen data and the template content comprises the following steps:
matching the bullet screen data with the template content according to the priority order of the template content;
the matching of the live broadcast data and the template live broadcast data comprises the following steps:
and matching the live broadcast data with the template live broadcast data according to the priority order of the template live broadcast data.
13. A live data acquisition device, wherein the device is used in a server, the device comprises:
a first acquisition module, used for acquiring bullet screen data, wherein the bullet screen data comprises bullet screen contents whose counts of identical bullet screens rank in the top N in a target live broadcast room, and N is an integer;
the first matching module is used for matching the bullet screen data with the template content; the template content is at least one of words, phrases or sentences related to the content live broadcast in the target live broadcast room;
the second acquisition module is used for acquiring live broadcast data in a preset time period in the target live broadcast room when the bullet screen data is matched with the template content;
the second matching module is used for matching the live broadcast data with the template live broadcast data; the template live broadcast data are multimedia data related to the live broadcast content of the target live broadcast room;
and the third acquisition module is used for acquiring the live broadcast data in the preset time period as target live broadcast data when the live broadcast data is matched with the template live broadcast data.
14. A computer device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, the at least one instruction, the at least one program, set of codes, or set of instructions being loaded and executed by the processor to implement a live data acquisition method as claimed in any one of claims 1 to 12.
15. A computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement a live data acquisition method as claimed in any one of claims 1 to 12.
CN202010525422.6A 2020-06-10 2020-06-10 Live broadcast data acquisition method and device, computer equipment and storage medium Active CN111741333B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010525422.6A CN111741333B (en) 2020-06-10 2020-06-10 Live broadcast data acquisition method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111741333A CN111741333A (en) 2020-10-02
CN111741333B true CN111741333B (en) 2021-12-28

Family

ID=72648659

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010525422.6A Active CN111741333B (en) 2020-06-10 2020-06-10 Live broadcast data acquisition method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111741333B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113014945B (en) * 2021-03-04 2022-07-22 网易(杭州)网络有限公司 Data processing method and device, storage medium and computer equipment

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105979348A (en) * 2016-06-28 2016-09-28 武汉斗鱼网络科技有限公司 Matching method and device based on video cutting and live commenting
WO2017023213A1 (en) * 2015-08-05 2017-02-09 Yürük Erdem Mobile social media application
CN107613392A (en) * 2017-09-22 2018-01-19 广东欧珀移动通信有限公司 Information processing method, device, terminal device and storage medium
CN108600850A (en) * 2018-03-20 2018-09-28 腾讯科技(深圳)有限公司 Video sharing method, client, server and storage medium
CN108668163A (en) * 2018-05-03 2018-10-16 广州虎牙信息科技有限公司 Live play method, apparatus, computer readable storage medium and computer equipment
CN108810637A (en) * 2018-06-12 2018-11-13 优视科技有限公司 Video broadcasting method, device and terminal device
CN108924576A (en) * 2018-07-10 2018-11-30 武汉斗鱼网络科技有限公司 A kind of video labeling method, device, equipment and medium
CN109089154A (en) * 2018-07-10 2018-12-25 武汉斗鱼网络科技有限公司 A kind of video extraction method, apparatus, equipment and medium
CN109348239A (en) * 2018-10-18 2019-02-15 北京达佳互联信息技术有限公司 Piece stage treatment method, device, electronic equipment and storage medium is broadcast live

Also Published As

Publication number Publication date
CN111741333A (en) 2020-10-02

Similar Documents

Publication Publication Date Title
CN108401192B (en) Video stream processing method and device, computer equipment and storage medium
US11956516B2 (en) System and method for creating and distributing multimedia content
US11375295B2 (en) Method and device for obtaining video clip, server, and storage medium
EP2901631B1 (en) Enriching broadcast media related electronic messaging
US9560411B2 (en) Method and apparatus for generating meta data of content
US8307403B2 (en) Triggerless interactive television
CN109788345B (en) Live broadcast control method and device, live broadcast equipment and readable storage medium
CN112653902B (en) Speaker recognition method and device and electronic equipment
CN102915320A (en) Extended videolens media engine for audio recognition
KR101916874B1 (en) Apparatus, method for auto generating a title of video contents, and computer readable recording medium
US10897658B1 (en) Techniques for annotating media content
CN112954390B (en) Video processing method, device, storage medium and equipment
CN110691271A (en) News video generation method, system, device and storage medium
CN111444415A (en) Barrage processing method, server, client, electronic device and storage medium
CN111741333B (en) Live broadcast data acquisition method and device, computer equipment and storage medium
CN114286169B (en) Video generation method, device, terminal, server and storage medium
CN111541906B (en) Data transmission method, data transmission device, computer equipment and storage medium
JP2011164681A (en) Device, method and program for inputting character and computer-readable recording medium recording the same
TW201225669A (en) System and method for synchronizing with multimedia broadcast program and computer program product thereof
CN114341866A (en) Simultaneous interpretation method, device, server and storage medium
CN113268635B (en) Video processing method, device, server and computer readable storage medium
US20230319346A1 (en) Systems and methods for automatically generating content items from identified events
CN114564614A (en) Automatic searching method, system and device for video clip and readable storage medium
WO2023060759A1 (en) Video pushing method, device, and storage medium
CN113840152A (en) Live broadcast key point processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220329

Address after: 4119, 41st floor, building 1, No.500, middle section of Tianfu Avenue, Chengdu hi tech Zone, China (Sichuan) pilot Free Trade Zone, Chengdu, Sichuan 610000

Patentee after: Chengdu kugou business incubator management Co.,Ltd.

Address before: No. 315, Huangpu Avenue middle, Tianhe District, Guangzhou City, Guangdong Province

Patentee before: GUANGZHOU KUGOU COMPUTER TECHNOLOGY Co.,Ltd.